Mid-market companies have real data and small security teams. They're being asked to choose between Bedrock, Vertex, Copilot, Claude, confidential computing, and "just buy GPUs." That choice looks technical. It isn't.
The real question isn't which platform is most secure. It's which risks you're willing to own, and which risks you're willing to rent. Most companies can't answer that because they don't know who's supposed to decide.
Four teams. Four definitions of "secure." One product decision.
Walk into any mid-market AI security meeting and you'll hear four different definitions of the same word.
The CTO says "secure" means the vendor won't train on our data. The security lead says "secure" means the prompts never cross the public internet. Legal says "secure" means we hold the encryption keys. The CFO says "secure" means we can pass the audit. Each is correct. Each is talking about a different layer of the stack.
That's where most AI security conversations go off the rails — not because the vendors are lying, but because the buyers are arguing about different problems using the same vocabulary.
Mid-market companies have the data. What they don't have is the decision rights.
The numbers in the 2026 surveys are bleak enough to be honest.
The WRITER survey found that 54% of C-suite executives said AI adoption is "tearing their company apart," and 67% believe their company has already suffered a leak or data breach due to unapproved AI tools. Grant Thornton's 2026 AI Impact Survey of 950 senior US business leaders, released in April, put it differently but no more comfortably: "more than three-quarters (78%) lack full confidence their organization could pass an independent AI governance audit within 90 days." These aren't governance problems waiting to happen. They're governance problems already compounding.
Those aren't enterprise-only numbers. They describe mid-market reality: companies with real data, real regulatory exposure, and security teams built for a pre-AI threat model. They're being asked to make infrastructure decisions that bigger companies have entire governance functions for.
AI security isn't one question. It's four.
The cleanest way to cut through the four-definition problem is to separate AI security into the four layers every vendor must answer for: L1, training (does the vendor train on your data?); L2, network (do your prompts cross the public internet?); L3, keys (who controls the encryption keys?); and L4, data-in-use (can the operator see your data during inference?).
Most vendor decks answer the first three. Very few truly answer the fourth. That's the gap buyers keep misreading as vendor evasiveness.
What each platform actually gives you.
Every claim below is sourced to the vendor's own current documentation. For the full platform-by-platform control list — default protections, settings to turn on, and what buyers miss — see The Predicai Playbook. What follows is the narrative summary.
AWS Bedrock
AWS states plainly that Bedrock never shares your data with model providers or uses it to train foundation models. KMS for keys, PrivateLink for private networking, IAM you already know. For most mid-market buyers already on AWS, Bedrock is the cleanest enterprise answer. The caveat: Bedrock protects the boundary and the keys. It doesn't guarantee the AWS operator cannot see data during inference. That's a different problem, and a different product.
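To make the boundary-and-keys point concrete, here's a minimal boto3 sketch of the pattern Bedrock buyers should be standing up: an interface VPC endpoint so prompts ride PrivateLink instead of the public internet, then an ordinary model invocation. The region, VPC, subnet, and security-group IDs and the model ID are placeholders, not a prescription.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# PrivateLink: an interface VPC endpoint keeps Bedrock traffic off the public internet.
# All resource IDs below are placeholders -- substitute your own.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    VpcId="vpc-0123example",
    SubnetIds=["subnet-0123example"],
    SecurityGroupIds=["sg-0123example"],
    PrivateDnsEnabled=True,
)

# The invocation itself looks unremarkable; the routing and key policy do the security work.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
resp = runtime.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize our Q3 churn drivers."}],
    }),
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```

Note what the sketch does not do: nothing here attests to what happens inside the inference host. That's the L4 gap.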
Google Vertex AI
Vertex has CMEK, VPC Service Controls, and a documented Zero Data Retention pathway — strong for teams already on Google Cloud. The caveats are specific and buyers routinely miss them. Google's own docs say Grounding with Google Search stores prompts, context, and output for 30 days with "no way to disable the storage of this information if you use Grounding with Google Search." Gemini Live session resumption, if enabled, caches for 24 hours. Abuse-monitoring prompt logging is on by default. None of these are dealbreakers. All of them are surprises if you didn't read the docs.
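For teams acting on those caveats, a hedged sketch of the two controls that matter most: initialize the SDK with your own Cloud KMS key for CMEK, and don't attach the Google Search grounding tool to requests carrying sensitive context, since that feature is what triggers the 30-day storage. The project, location, key path, and model name are placeholders; confirm the CMEK parameter against your SDK version.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# CMEK: resources created after init use your Cloud KMS key, not a Google-managed one.
vertexai.init(
    project="your-project",
    location="us-central1",
    encryption_spec_key_name=(
        "projects/your-project/locations/us-central1/"
        "keyRings/your-ring/cryptoKeys/your-key"
    ),
)

# No grounding tool attached: Grounding with Google Search is the feature whose
# 30-day prompt/context/output storage Google documents as non-disableable.
model = GenerativeModel("gemini-1.5-pro")
print(model.generate_content("Classify this support ticket: ...").text)
```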
Microsoft 365 Copilot
Copilot's strongest story is that it lives inside your existing Microsoft 365 identity, permissions, Purview labels, eDiscovery, and Conditional Access. Microsoft documents that prompts, responses, and Graph-accessed data are not used to train foundation models. The risk Microsoft itself foregrounds is the one that doesn't sound like a risk. Per Microsoft's Purview guidance: "generative AI amplifies the problem and risk of oversharing or leaking data." If your SharePoint permissions are messy, Copilot will honor that mess perfectly. The prerequisite is data hygiene, and Purview DSPM for AI exists specifically to surface that oversharing before the rollout, not after.
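A rough sketch of what "find the mess first" can look like in practice: walking one SharePoint drive through the Microsoft Graph API and flagging broadly scoped sharing links before Copilot inherits them. The token and drive ID are placeholders; a real audit would page through results and cover every site, which is the job Purview DSPM for AI does at scale.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <GRAPH_TOKEN>"}  # placeholder app token
DRIVE_ID = "<drive-id>"                              # placeholder SharePoint drive

# List top-level items, then inspect each item's sharing permissions.
items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=HEADERS).json()
for item in items.get("value", []):
    perms = requests.get(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions", headers=HEADERS
    ).json()
    for p in perms.get("value", []):
        scope = p.get("link", {}).get("scope")  # e.g. "anonymous", "organization"
        if scope in ("anonymous", "organization"):
            print(f"{item['name']}: {scope}-scoped link -- Copilot will honor this")
```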
Anthropic Claude for Work / API
Anthropic's Commercial Terms state: "Anthropic may not train models on Customer Content from Services." That commitment is airtight for the API, Claude for Work, Claude Enterprise, and access through Bedrock or Vertex. The distinction buyers miss: when Anthropic extended consumer-plan retention to five years in 2025, it made explicit that the change does not apply to commercial products. Any employee on a personal Claude account is under consumer terms. That's the shadow-AI mechanism hiding inside a platform most people assume is safe.
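The fix is less about tooling than routing: make the commercial surface the easy path. A minimal sketch with Anthropic's Python SDK, where the API key belongs to the organization, so the traffic falls under the Commercial Terms rather than any employee's personal consumer account. The model ID is a placeholder.

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment -- the org's key, not a personal login.
client = anthropic.Anthropic()

msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Draft a data-handling FAQ for staff."}],
)
print(msg.content[0].text)
```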
Azure OpenAI, Oracle OCI, Apple PCC, NVIDIA
The Microsoft-side custom path (Azure OpenAI) commits to no training, supports CMK, and offers confidential computing as a substrate — though not as the default for general inference. Oracle OCI Enterprise AI is the most underrated option in mid-market shortlists, with zero-data-retention endpoints, dedicated AI clusters, and sovereign regions for regulated workloads. Apple's Private Cloud Compute isn't an enterprise buy — it's the reference architecture, with verifiable transparency, a public bounty program, and cryptographic guarantees that even Apple operators cannot access user data. And NVIDIA's H100/H200/Blackwell confidential computing is the hardware substrate behind the strongest "private inference" stories, with TEEs and remote attestation that exclude the cloud operator during execution.
For the full control list on each of these platforms, see The Playbook →
Side by side, across the four layers.
| Platform | L1 Training | L2 Network | L3 Keys | L4 Data-in-Use |
|---|---|---|---|---|
| AWS Bedrock | Strong | Strong (PrivateLink) | Strong (KMS) | Limited |
| Google Vertex AI | Partial (ZDR requires exception) | Strong (VPC SC + PSC) | Strong (CMEK) | Limited |
| Microsoft 365 Copilot | Strong | Strong (tenant isolation) | Partial (DKE for labeled content) | Limited |
| Azure OpenAI | Strong | Strong | Strong (CMK) | Partial (CC available, not default) |
| Anthropic Claude API | Strong | Partial (via AWS/GCP) | Partial | Limited |
| Oracle OCI Enterprise AI | Strong (ZDR endpoints) | Strong (sovereign regions) | Strong | Partial |
| Confidential Inferencing (NVIDIA CC) | Strong | Strong | Strong | Strong (TEE + attestation) |
| Apple PCC (reference only) | Strong | Strong | Strong | Strong (verifiable) |
The Predicai Playbook breaks each platform down into three things: default protections, controls to turn on, and what buyers miss. Sourced to the vendor's own docs. Use it alongside this piece.
See the full Playbook →
Mid-market companies don't lose these fights on technology. They lose on governance.
This is where the conversation most vendor decks avoid becomes the conversation that actually matters.
In 2004, Peter Weill and Jeanne Ross (MIT CISR) published the foundational work on IT governance — a study of over 250 enterprises identifying six distinct decision-rights archetypes that shape how technology choices get made. Twenty years later, their framework maps almost perfectly onto what mid-market teams are actually fighting about when they argue over AI security.
The six archetypes, and what each looks like when the product in question is AI:
- Business monarchy: the exec team decides alone. The C-suite picks the AI platform; security inherits the consequences.
- IT monarchy: IT or security decides alone. Tools get vetoed at the door, and shadow AI fills the gap.
- Federal: the center and the business units share decision rights. The center sets the security floor; departments choose tools above it.
- Duopoly: IT and one business group decide together, with IT facilitating rather than gatekeeping.
- Feudal: each department decides for itself, so five teams buy five overlapping AI tools with five security postures.
- Anarchy: individuals decide. Personal Claude accounts and pasted customer data, invisible until the audit.
Most mid-market companies are operating under multiple archetypes simultaneously — and don't know it. The exec team pushes like a business monarchy. Security operates like an IT monarchy. Departments run like feudal states. Individual employees default to anarchy. That's not disagreement. That's misalignment. And misalignment at the decision-rights layer creates exactly the inconsistent security posture that 67% of executives already report has caused a leak.
Where do you actually sit?
Three situations mid-market companies actually face.
What this means for the next 90 days.
The hardest problem in mid-market AI security isn't choosing a vendor. Every vendor above has a defensible answer for the right use case. The hard problem is deciding who gets to decide — and committing to that structure before the next procurement meeting.
If you do nothing else in the next 90 days, do these three things:
One: run the five-question diagnostic with your exec team. Put the answers on one page. If the names don't match, that's the finding.
Two: pick one governance archetype you can actually defend and commit to it. Federal is usually the right answer for mid-market companies that need both velocity and control. Duopoly works when IT can facilitate rather than gatekeep. Pick the one that fits your culture — but pick one.
Three: map each active AI use case to one of the four security layers. If no one can articulate which layer a given workload needs protected, that workload shouldn't be in production yet.
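Number three is the one worth turning into an artifact rather than a meeting. A minimal sketch, with hypothetical workloads, of what the use-case-to-layer map can look like as something a team actually maintains:

```python
# Layer names follow the comparison table above; the workloads are invented examples.
LAYERS = {"L1": "training", "L2": "network", "L3": "keys", "L4": "data-in-use"}

workloads = {
    "support-ticket summarization": "L1",  # worry: vendor training on ticket text
    "contract review copilot": "L3",       # worry: who controls the keys
    "patient-intake triage": "L4",         # worry: operator access during inference
    "internal wiki chatbot": None,         # no one has named a layer yet
}

for name, layer in workloads.items():
    if layer is None:
        print(f"BLOCK: {name} -- no protected layer named, not production-ready")
    else:
        print(f"OK: {name} -> {layer} ({LAYERS[layer]})")
```

The point isn't the script. It's that "which layer does this workload need protected?" becomes a question with a recorded answer and an owner.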
The vendors will keep shipping better enterprise controls. The frameworks will keep maturing. None of that matters if the four teams in your company are still using the same word to describe four different things.
The model is not the system. And the system is not the security. The security is the governance — and until that's built, every tool choice is a guess in a fog.
AWS Bedrock — Security, Privacy, and Responsible AI documentation
Google Cloud — Vertex AI Zero Data Retention documentation
Microsoft Learn — Data, Privacy, and Security for Microsoft 365 Copilot
Microsoft Learn — Microsoft Purview data security and compliance protections for generative AI apps
Microsoft Learn — Data, privacy, and security for Azure Direct Models in Microsoft Foundry
Anthropic — Commercial Terms of Service
Anthropic — Updates to Consumer Terms and Privacy Policy (scope exclusion for commercial products)
Oracle — OCI Enterprise AI with zero data retention endpoints
Apple Security Research — Private Cloud Compute: A new frontier for AI privacy in the cloud
NVIDIA Developer Blog — Protecting Sensitive Data and AI Models with Confidential Computing
Peter Weill & Jeanne Ross (MIT CISR, 2004) — IT Governance: How Top Performers Manage IT Decision Rights for Superior Results
WRITER (April 2026) — 2026 Enterprise AI Adoption Survey (n=2,400)
Grant Thornton (April 2026) — 2026 AI Impact Survey (n=950 senior US business leaders)
Deloitte — State of AI in the Enterprise, 2026