Signal Deep Dive No. 002

The Trust Layer.
Where Mid-Market AI Security Actually Breaks.

Most AI security debates aren't about models. They're about where trust should live — and whether anyone in the building agrees on the answer.

BLUF

Mid-market companies have real data and small security teams. They're being asked to choose between Bedrock, Vertex, Copilot, Claude, confidential computing, and "just buy GPUs." That choice looks technical. It isn't.

The real question isn't which platform is most secure. It's which risks you're willing to own, and which risks you're willing to rent. Most companies can't answer that because they don't know who's supposed to decide.

The Setup

Four teams. Four definitions of "secure." One product decision.

Walk into any mid-market AI security meeting and you'll hear four different definitions of the same word.

The CTO says "secure" means the vendor won't train on our data. The security lead says "secure" means the prompts never cross the public internet. Legal says "secure" means we hold the encryption keys. The CFO says "secure" means we can pass the audit. Each is correct. Each is talking about a different layer of the stack.

That's where most AI security conversations go off the rails — not because the vendors are lying, but because the buyers are arguing about different problems using the same vocabulary.

Most AI security debates aren't about models. They're arguments about where trust should live.

The Stakes

Mid-market companies have the data. What they don't have is the decision rights.

The numbers in the 2026 surveys are bleak enough to be honest.

79% of organizations face challenges adopting AI — a double-digit increase from 2025. (WRITER, 2026 Survey)

78% of business executives lack full confidence their organization could pass an independent AI governance audit within 90 days. (Grant Thornton 2026 AI Impact Survey, n=950)

67% of executives believe their company has already suffered a leak from unapproved AI tools. (WRITER, 2026 Survey)

Those aren't enterprise-only numbers. They describe mid-market reality — companies with real data, real regulatory exposure, and security teams built for a pre-AI threat model. They're being asked to make infrastructure decisions that bigger companies have entire governance functions for.

The WRITER survey found that 54% of C-suite executives said AI adoption is "tearing their company apart," and 67% believe their company has already suffered a leak or data breach due to unapproved AI tools. Grant Thornton's 2026 AI Impact Survey of 950 senior US business leaders, released in April, put it differently but no more comfortably: "more than three-quarters (78%) lack full confidence their organization could pass an independent AI governance audit within 90 days." These aren't governance problems waiting to happen. They're governance problems already compounding.

Mid-market AI security isn't failing because the tools are weak. It's failing because no one agreed on who gets to decide what "secure" means.

The Framework

AI security isn't one question. It's four.

The cleanest way to cut through the four-definition problem is to separate AI security into the four layers every vendor must answer for. Most enterprise platforms are strong on the first three. Very few are strong on the fourth.

Layer 01 — Retention & Training
Will the vendor train on our data?
Table stakes in 2026. Every serious enterprise tier now commits to not training on your inputs or outputs. The real question is what else they log, for how long, and under what conditions they can override that commitment.
Layer 02 — Network & Tenant Boundary
Can traffic or tenant boundaries leak the data?
The CISO-facing question. Does traffic stay inside your cloud account, your VPC, your identity model? AWS PrivateLink for Bedrock and Google's Private Service Connect for Vertex are the clearest answers here.
Layer 03 — Key Control at Rest
Do we control the keys and storage lifecycle?
Customer-managed keys. AWS KMS, Google CMEK, Azure Customer Key. Microsoft Double Key Encryption (DKE) for content under specific sensitivity labels. Different scopes, same principle: you can revoke access by rotating or withholding a key.
Layer 04 — Data in Use
Can the cloud operator see the data while the model is running?
The layer almost no vendor deck answers cleanly. This is confidential computing territory — hardware TEEs, remote attestation, and architectures where even the provider's own administrators can't read plaintext during inference. Apple's Private Cloud Compute and NVIDIA's Confidential Computing are the reference implementations.
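
Layer 04 is easier to reason about as a rule than as a product name: plaintext leaves your hands only after the execution environment proves what it is. The sketch below is conceptual and hypothetical, not any vendor's API; real deployments rely on the provider's attestation service (NVIDIA's attestation flow, a cloud TEE token) to produce the measurement being checked.

```python
import hmac

# Measurement (digest) of the inference image we decided to trust ahead of time.
# In practice this comes from the provider's attestation evidence; the value here
# is a placeholder.
EXPECTED_MEASUREMENT = "0000000000000000000000000000000000000000000000000000000000000000"

def release_prompt(attested_measurement: str, prompt: str, send_fn) -> None:
    """Send plaintext to the inference environment only if its attested
    measurement matches the value pinned in advance."""
    if not hmac.compare_digest(attested_measurement, EXPECTED_MEASUREMENT):
        raise RuntimeError("Attestation mismatch: refusing to release plaintext.")
    send_fn(prompt)
```

The point of the pattern is the ordering: verification happens before the data moves, which is what separates Layer 04 from a contractual promise.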

Most vendor decks answer the first three. Very few truly answer the fourth. That's the gap buyers keep mistaking for "the vendor won't say."

The Vendors

What each platform actually gives you.

Every claim below is sourced to the vendor's own current documentation. For the full platform-by-platform control list — default protections, settings to turn on, and what buyers miss — see The Predicai Playbook. What follows is the narrative summary.

AWS Bedrock

AWS states plainly that Bedrock never shares your data with model providers or uses it to train foundation models. KMS for keys, PrivateLink for private networking, IAM you already know. For most mid-market buyers already on AWS, Bedrock is the cleanest enterprise answer. The caveat: Bedrock protects the boundary and the keys. It doesn't guarantee the AWS operator cannot see data during inference. That's a different problem, and a different product.
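
A minimal sketch of what that looks like in practice, assuming boto3 and an existing VPC: an interface endpoint keeps Bedrock invocations off the public internet, and a customer-managed KMS key gives you something to withhold later. Every ID, region, and model name below is a placeholder, not a recommendation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Layer 2: a PrivateLink interface endpoint for the Bedrock runtime, so invocations
# stay inside the VPC instead of crossing the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
    PrivateDnsEnabled=True,
)

# With private DNS enabled, the standard client resolves to the in-VPC endpoint.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": "Summarize this contract clause."}]}],
)
print(response["output"]["message"]["content"][0]["text"])

# Layer 3: access to data encrypted under a customer-managed key can be withdrawn
# later by disabling that key.
boto3.client("kms", region_name="us-east-1").disable_key(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID
)
```

None of this changes the caveat above: what the operator can see during inference is a different control entirely.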

Google Vertex AI

Vertex has CMEK, VPC Service Controls, and a documented Zero Data Retention pathway — strong for teams already on Google Cloud. The caveats are specific and buyers routinely miss them. Google's own docs say Grounding with Google Search stores prompts, context, and output for 30 days with "no way to disable the storage of this information if you use Grounding with Google Search." Gemini Live session resumption, if enabled, caches for 24 hours. Abuse-monitoring prompt logging is on by default. None of these are dealbreakers. All of them are surprises if you didn't read the docs.
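
For the CMEK half of that story, a hedged sketch using the google-cloud-aiplatform SDK: resources created after this call are encrypted under your key rather than Google-managed keys. Project, location, and key names are placeholders; the grounding and logging caveats above are contract and configuration questions that no client-side snippet can switch off.

```python
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",        # placeholder
    location="us-central1",      # placeholder
    # Datasets, endpoints, and tuning jobs created after init() are encrypted
    # with this customer-managed key (CMEK) instead of Google-managed keys.
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/ai-keyring/cryptoKeys/vertex-cmek"
    ),
)
```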

Microsoft 365 Copilot

Copilot's strongest story is that it lives inside your existing Microsoft 365 identity, permissions, Purview labels, eDiscovery, and Conditional Access. Microsoft documents that prompts, responses, and Graph-accessed data are not used to train foundation models. The risk Microsoft itself foregrounds is the one that doesn't sound like a risk. Per Microsoft's Purview guidance: "generative AI amplifies the problem and risk of oversharing or leaking data." If your SharePoint permissions are messy, Copilot will honor that mess perfectly. The prerequisite is data hygiene — Purview DSPM for AI exists specifically to find it before the rollout, not after.
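
A rough illustration of what "find the mess first" can look like, using standard Microsoft Graph v1.0 endpoints; it is a sketch, not a substitute for Purview DSPM for AI. Token acquisition and the drive ID are placeholders, and a real audit would page through far more than one folder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Files.Read.All>"  # placeholder; acquire via MSAL in practice
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
DRIVE_ID = "<drive-id>"                       # placeholder SharePoint document library

# Walk the top level of a library and flag sharing links scoped beyond specific people.
items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=HEADERS).json()
for item in items.get("value", []):
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions", headers=HEADERS
    ).json()
    for perm in perms.get("value", []):
        scope = perm.get("link", {}).get("scope")  # "anonymous", "organization", or "users"
        if scope in ("anonymous", "organization"):
            print(f"Broad sharing link on {item.get('name')}: scope={scope}")
```

Anything a sweep like this flags is exactly what Copilot will faithfully surface after rollout.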

Anthropic Claude for Work / API

Anthropic's Commercial Terms state: "Anthropic may not train models on Customer Content from Services." Airtight for the API, Claude for Work, Claude Enterprise, and access through Bedrock or Vertex. The distinction buyers miss: when Anthropic extended consumer-plan retention to five years in 2025, they made explicit that the change does not apply to commercial products. Any employee on a personal Claude account is under consumer terms. That's the shadow-AI mechanism hiding inside a platform most people assume is safe.
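
The operational fix is unglamorous: make the sanctioned path the easy path. A minimal sketch with the anthropic Python SDK, assuming an organization-issued API key (covered by the Commercial Terms) rather than anyone's personal login; the model name is illustrative.

```python
import os
import anthropic

# The key comes from the organization's Console, so usage falls under the
# Commercial Terms rather than consumer-plan terms.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-sonnet-4-5",   # illustrative model name
    max_tokens=512,
    messages=[{"role": "user", "content": "Draft a redline summary of this NDA clause."}],
)
print(message.content[0].text)
```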

Azure OpenAI, Oracle OCI, Apple PCC, NVIDIA

The Microsoft-side custom path (Azure OpenAI) commits to no training, supports CMK, and offers confidential computing as a substrate — though not as the default for general inference. Oracle OCI Enterprise AI is the most underrated option in mid-market shortlists, with zero-data-retention endpoints, dedicated AI clusters, and sovereign regions for regulated workloads. Apple's Private Cloud Compute isn't an enterprise buy — it's the reference architecture, with verifiable transparency, a public bounty program, and cryptographic guarantees that even Apple operators cannot access user data. And NVIDIA's H100/H200/Blackwell confidential computing is the hardware substrate behind the strongest "private inference" stories, with TEEs and remote attestation that exclude the cloud operator during execution.
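
For the Azure OpenAI path specifically, the client-side shape is familiar; what changes is where the traffic terminates. A hedged sketch assuming the openai Python SDK (1.x); the endpoint, deployment name, and API version are placeholders, and confidential-computing SKUs are an infrastructure choice this call does not switch on.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",                               # placeholder API version
)

resp = client.chat.completions.create(
    model="contracts-gpt4o",  # the deployment name you created, placeholder
    messages=[{"role": "user", "content": "Classify this ticket by data sensitivity."}],
)
print(resp.choices[0].message.content)
```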

For the full control list on each of these platforms, see The Playbook →

The Matrix

Side by side, across the four layers.

AI Platform Security Posture — April 2026
Strong: documented, enforceable, default. Partial: available with configuration or eligibility requirements. Limited: little or no default protection.
Platform                              | L1 Training                      | L2 Network                 | L3 Keys                           | L4 Data-in-Use
AWS Bedrock                           | Strong                           | Strong (PrivateLink)       | Strong (KMS)                      | Limited
Google Vertex AI                      | Partial (ZDR requires exception) | Strong (VPC SC + PSC)      | Strong (CMEK)                     | Limited
Microsoft 365 Copilot                 | Strong                           | Strong (tenant isolation)  | Partial (DKE for labeled content) | Limited
Azure OpenAI                          | Strong                           | Strong                     | Strong (CMK)                      | Partial (CC available, not default)
Anthropic Claude API                  | Strong                           | Partial (via AWS/GCP)      | Partial                           | Limited
Oracle OCI Enterprise AI              | Strong (ZDR endpoints)           | Strong (sovereign regions) | Strong                            | Partial
Confidential Inferencing (NVIDIA CC)  | Strong                           | Strong                     | Strong                            | Strong (TEE + attestation)
Apple PCC (reference only)            | Strong                           | Strong                     | Strong                            | Strong (verifiable)
Assessments reflect documented defaults as of April 2026. Feature paths, eligibility requirements, and specific service configurations can materially change these ratings. Note that some feature paths — notably Vertex Grounding with Google Search — are incompatible with zero data retention even when other ZDR exceptions have been granted. This matrix is a starting point for conversation, not a compliance artifact.

Companion Guide

Evaluating a platform right now?

The Predicai Playbook breaks each platform down into three things: default protections, controls to turn on, and what buyers miss. Sourced to the vendor's own docs. Use it alongside this piece.

See the full Playbook →

The Real Fight

Mid-market companies don't lose these fights on technology. They lose on governance.

This is where the conversation most vendor decks avoid becomes the conversation that actually matters.

In 2004, Peter Weill and Jeanne Ross (MIT CISR) published the foundational work on IT governance — a study of over 250 enterprises identifying six distinct decision-rights archetypes that shape how technology choices get made. More than two decades later, their framework maps almost perfectly onto what mid-market teams are actually fighting about when they argue over AI security.

The six archetypes, and what each looks like when the product in question is AI:

Business Monarchy
Senior execs decide
"Roll out Copilot company-wide this quarter." Speed-first. Security added later. Vendor contract language treated as sufficient protection.
Risk: speed without structure creates invisible exposure.
IT Monarchy
IT/security leaders decide
"Block external models until we evaluate them." Heavy vendor scrutiny, long procurement cycles, tendency toward self-hosted or internal-only deployment.
Risk: control without adoption pushes usage into shadow AI.
Feudal
Department leaders decide independently
Marketing uses ChatGPT. Ops uses Gemini. Finance builds a custom bot. Each department optimizes locally. No shared standard. Each makes a defensible choice; collectively they create exposure.
Risk: fragmentation hides exposure until it compounds.
Federal
Business + IT decide together
Central standards. Local flexibility. Explicit decision rights. Business can move on approved platforms; IT holds the guardrails. Slower to land, but durable once it does.
Usually the healthiest posture for mid-market.
IT Duopoly
IT + one business partner
Shared decision-making between IT and a single business unit at a time. Less consensus overhead than federal. Works when IT can act as a facilitator rather than a gatekeeper.
Strong when trust between IT and the business already exists.
Anarchy
Individuals decide for themselves
Employees use whatever tools they want. No policy enforcement. No inventory. The default state if no one designs governance — and where most unapproved AI usage currently lives.
Maximum exposure. No auditability. Silent failure modes.

Companies aren't choosing between Bedrock, Vertex, and Copilot. They're choosing how decisions about those systems get made — usually without realizing that's the choice.

Most mid-market companies are operating under multiple archetypes simultaneously — and don't know it. The exec team pushes like a business monarchy. Security operates like an IT monarchy. Departments run like feudal states. Individual employees default to anarchy. That's not disagreement. That's misalignment. And misalignment at the decision-rights layer creates exactly the inconsistent security posture that 67% of executives already report has caused a leak.

The Diagnostic

Where do you actually sit?

Self-assessment — five questions
If you can't answer these with the same group, you don't have an AI strategy. You have competing systems.
01. Who approves new AI tools before they're used?
02. Who defines what "acceptable use" means for your organization?
03. Who owns the risk when an AI output causes harm, leaks data, or gets something wrong?
04. Who is responsible for verifying AI outputs before they're shipped to clients?
05. When a model update silently changes behavior, who notices and who responds?
If the answers name different groups, you don't have an AI problem. You have a decision-rights problem. Fix the governance first. The vendor choice gets easier after.

The Scenarios

Three situations mid-market companies actually face.

Scenario 01
Microsoft-heavy company, messy permissions
The company already runs on M365, and executives want Copilot deployed for productivity. Recommendation: Copilot is usually the right answer — but the prerequisite is Purview-driven permission hygiene. Run the DSPM for AI assessment before the rollout, not after. The risk isn't that Microsoft trains on your prompts. The risk is that Copilot surfaces the documents your file-sharing model should have hidden three years ago.
Failure mode: treating "Copilot is secure" as the end of the conversation. The governance question isn't Microsoft's. It's yours.
Scenario 02
AWS shop, regulated but pragmatic
The company is on AWS, wants access to frontier models, and needs assurance that prompts don't reach model providers or the public internet. Recommendation: Bedrock. PrivateLink for network isolation, KMS for keys, and model choice across Anthropic, Meta, Amazon Titan, and others. The enterprise story is legible, the IAM model is familiar, and the no-training commitment is clear.
Failure mode: confusing "private cloud routing" with "data-in-use confidentiality." Bedrock protects the boundary. It doesn't make AWS operators blind to your inference.
Scenario 03
High-sensitivity workload, exec team wants to buy GPUs
Leadership reads a few articles, decides the answer is to own the infrastructure. Recommendation: pause. Ask which exact control is missing. If the concern is data-in-use protection, the answer is confidential inferencing on NVIDIA H100/H200/Blackwell via a provider like Oracle or Azure — not a rack of GPUs in a colo. If the concern is sovereignty, the answer is a sovereign region. If the concern is lock-in, the answer is a multicloud architecture. Each is a different decision. Buying GPUs before naming the threat model usually buys hardware plus an expensive new operating burden.
Failure mode: "we want control" treated as a technical requirement instead of a governance question. GPUs don't give you governance. They give you hardware.
The Bottom Line

What this means for the next 90 days.

The hardest problem in mid-market AI security isn't choosing a vendor. Every vendor above has a defensible answer for the right use case. The hard problem is deciding who gets to decide — and committing to that structure before the next procurement meeting.

If you do nothing else in the next 90 days, do these three things:

One: run the five-question diagnostic with your exec team. Put the answers on one page. If the names don't match, that's the finding.

Two: pick one governance archetype you can actually defend and commit to it. Federal is usually the right answer for mid-market companies that need both velocity and control. Duopoly works when IT can facilitate rather than gatekeep. Pick the one that fits your culture — but pick one.

Three: map each active AI use case to one of the four security layers. If no one can articulate which layer a given workload needs protected, that workload shouldn't be in production yet.

The biggest risk in AI isn't what the model can see. It's who's allowed to decide how it's used — and whether that decision was ever actually made.

The vendors will keep shipping better enterprise controls. The frameworks will keep maturing. None of that matters if the four teams in your company are still using the same word to describe four different things.

The model is not the system. And the system is not the security. The security is the governance — and until that's built, every tool choice is a guess in a fog.