This is the section most AI security essays skip. Not because the information is hard — but because it's boring. Vendor docs. Admin settings. Policy language. The actual mechanics of protecting data are unglamorous, and that's exactly why most mid-market companies get them wrong.
What follows is a platform-by-platform playbook for the specific controls that matter most. Every claim below is drawn from the vendor's own current documentation. Every control listed is something you can turn on, negotiate, or demand — today.
Three questions to bring to every AI vendor conversation
Before you open a single product page, print these three questions. The answers determine whether a platform is a serious contender for the workload you're trying to protect.
1. Will you train on our data, by default, by exception, or never, and where is that commitment written down?
2. What do you retain, where, and for how long, and what does it take to get retention to zero?
3. Which protections are on by default, and which depend on configuration we own?
AWS Bedrock
Default protections. Per AWS's own documentation, Bedrock "never shares your data with model providers or uses it to train foundation models." Data is encrypted in transit and at rest.
Controls to turn on. AWS KMS with customer-managed keys for encryption control. AWS PrivateLink to keep traffic off the public internet. Least-privilege IAM policies scoped to the specific identities that need Bedrock access. VPC endpoints for the Bedrock runtime. Bedrock Guardrails for content filtering, PII redaction, and prompt-attack protection — these apply to any foundation model you use through Bedrock, including third-party ones.
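To make the Guardrails control concrete, here is a minimal sketch of attaching an existing guardrail to an inference call through boto3's Converse API. The guardrail ID, version, and model ID are placeholders; substitute your own.

```python
# Minimal sketch: invoke a Bedrock model with a Guardrail attached.
# Guardrail ID/version and model ID below are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize this contract."}]}],
    # The same guardrailConfig works for any foundation model served
    # through Bedrock, including third-party models.
    guardrailConfig={
        "guardrailIdentifier": "YOUR_GUARDRAIL_ID",
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```

If the guardrail intervenes, the response carries a different stop reason instead of model output, which is exactly the behavior you want to test before production traffic flows.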
What buyers miss. Bedrock's no-training commitment applies to the base foundation models. If you fine-tune a model, AWS creates a private copy available only to you — but your training data and the resulting fine-tuned weights are still subject to how you configure storage, KMS, and access. The perimeter is strong; the configuration is yours.
Google Vertex AI
Default protections. Per Google's Service Terms, Google will not use your data to train or fine-tune AI/ML models without your permission. This applies to all managed Gemini models.
Controls to turn on. CMEK for customer-managed encryption. VPC Service Controls to create a security perimeter around Vertex. Private Service Connect for private networking. For true Zero Data Retention, you must take two distinct actions: disable in-memory caching at the project level (via the Vertex admin API), and request an approved exception for abuse-monitoring prompt logging.
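For the caching half of that equation, the sketch below patches the project-level cacheConfig resource. The v1beta1 endpoint path is drawn from Google's zero-data-retention documentation; treat it as an assumption and verify it against the current docs before relying on it.

```python
# Hedged sketch: disable Gemini in-memory caching at the project level.
# The v1beta1 cacheConfig path is an assumption from Google's ZDR docs;
# verify the exact endpoint and region for your project.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT = "your-project-id"  # placeholder

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

resp = session.patch(
    f"https://us-central1-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}/cacheConfig",
    json={"name": f"projects/{PROJECT}/cacheConfig", "disableCache": True},
)
resp.raise_for_status()
print(resp.json())  # expect a long-running operation resource
```

Remember this is only half of ZDR: the abuse-monitoring exception is a request you make to Google, not an API call you can script.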
What buyers miss. Two feature paths have retention you cannot opt out of if you use the feature. Grounding with Google Search stores prompts, context, and output for 30 days with no disable option — Google explicitly recommends Web Grounding for Enterprise as the ZDR-compatible alternative. Gemini Live API session resumption, if enabled, caches text, video, and audio for 24 hours. If your ZDR strategy depends on never retaining data anywhere, these features are incompatible regardless of how your abuse-monitoring exception is configured.
Microsoft 365 Copilot
Default protections. Copilot uses pre-trained LLMs hosted by Microsoft. Per Microsoft's privacy documentation, prompts, responses, and Graph-accessed data are not used to train foundation models and are never shared with OpenAI or other third parties. Copilot only returns data the user is already authorized to access. Data stays within Microsoft's cloud boundary, with EU Data Boundary protections for EU users.
Controls to turn on. Microsoft Purview sensitivity labels with encryption — Copilot honors EXTRACT and VIEW usage rights. DSPM for AI to detect oversharing risk before Copilot is turned on. Purview DLP policies that inspect prompts in real time and block sensitive content. Purview Insider Risk Management with the "Risky AI usage" policy template. Restricted SharePoint access for sites that contain the highest-sensitivity content. SharePoint Advanced Management for lifecycle and oversharing remediation.
What buyers miss. Microsoft's own Purview documentation puts it directly — "generative AI amplifies the problem and risk of oversharing or leaking data." Copilot will honor your current SharePoint permissions perfectly. If those permissions were built in the pre-AI era, deploying Copilot without a Purview-driven cleanup is the single most common path to a self-inflicted leak. The control isn't a feature toggle. It's a pre-deployment hygiene pass.
Azure OpenAI / Azure AI Foundry
Default protections. Per Microsoft's Foundry data privacy documentation, prompts, completions, embeddings, and fine-tuning data are not used to train or improve Microsoft or OpenAI foundation models without explicit permission. Prompts and generations are not stored in the model itself — the models are stateless. Data stays inside Microsoft's Azure environment, isolated from OpenAI-operated services.
Controls to turn on. Customer-managed keys (CMK) for Assistants Threads and Files. Regional or Data Zone deployments for data residency. Private endpoints and Azure Private Link. Microsoft Entra authentication in place of API keys wherever possible. Request an abuse-monitoring exception if you have a qualifying regulatory need — Microsoft offers a modified abuse-monitoring path for sensitive workloads. For high-sensitivity workloads, Azure Confidential Computing is available as a substrate, though it is not the default for general Azure OpenAI inference.
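Swapping API keys for Entra authentication is mostly a client-construction change. A minimal sketch, assuming the azure-identity and openai packages, with placeholder endpoint and deployment names:

```python
# Minimal sketch: call Azure OpenAI with Microsoft Entra auth instead
# of API keys. Endpoint, deployment name, and API version are placeholders.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    azure_ad_token_provider=token_provider,  # no key material in code or config
    api_version="2024-06-01",
)

resp = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the deployment name, not the base model
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

Because the credential chain resolves to a managed identity in production, there is no key to leak, rotate, or paste into a notebook.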
What buyers miss. Fine-tuned models belong to the customer exclusively and are encrypted at rest — but training data uploaded for fine-tuning is stored in your Foundry resource in your Azure tenant. Where that data lives, who can read it, and how long it persists all depend on your configuration, not Azure's defaults. The commitment is that Microsoft won't misuse it. The responsibility for storing it safely is still yours.
Anthropic Claude (API & Claude for Work)
Default protections. Anthropic's Commercial Terms of Service state: "Anthropic may not train models on Customer Content from Services." This applies to API access, Claude for Work, Claude Enterprise, Claude Gov, and Claude for Education. When Claude is used through AWS Bedrock or Google Vertex AI, those platforms' enterprise protections apply on top.
Controls to turn on. Negotiate a Zero Data Retention agreement for eligible API workloads — under ZDR, inputs and outputs are not stored at rest after the API response returns, except where needed to comply with law or combat misuse. HIPAA-ready API access with a signed Business Associate Agreement for protected health information. A signed Data Processing Addendum (DPA) for GDPR compliance. For organizations on Team or Enterprise plans, disable the feedback thumbs-up/down button at the org level if you don't want even aggregated feedback leaving your tenant.
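One point worth seeing in code: ZDR is a contractual control, not a request parameter. As far as we can tell from Anthropic's API surface, a call made under a ZDR agreement looks identical to one made without it, which is why the agreement itself, and the organization account it attaches to, is the control. A minimal sketch, with the model ID as a placeholder:

```python
# Minimal sketch: a standard commercial Anthropic API call. There is
# no per-request ZDR flag -- ZDR is negotiated at the organization
# level, so the request below is identical either way.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize our Q3 risk memo."}],
)
print(message.content[0].text)
```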
What buyers miss. The biggest shadow-AI exposure on Claude isn't in your commercial contract — it's in your employees' personal accounts. Anthropic's 2025 consumer policy change extended retention to five years for Free, Pro, and Max accounts (with opt-in training), but Anthropic was explicit that the change does not apply to commercial products. If your employees are pasting company data into personal Claude accounts, you're in the consumer terms, not the commercial ones. That's an identity and policy problem, not a vendor problem.
Oracle OCI Enterprise AI
Oracle shows up here not because of mindshare, but because sovereignty, dedicated clusters, and data-residency controls matter more to some buyers than model branding. If you've already standardized on Oracle infrastructure or operate under residency mandates, it belongs on your shortlist.
Default protections. OCI Enterprise AI offers dedicated AI clusters accessible only within the customer's tenancy, managed access to leading models with zero-data-retention endpoints, and sovereign AI options for data hosting and processing. For federal workloads, FedRAMP High authorization with IL4/IL5 support and air-gapped isolated cloud regions are available.
Controls to turn on. Sovereign cloud regions for data residency requirements. Dedicated AI clusters rather than shared endpoints for sensitive inference. Oracle Cloud@Customer for customers who need AI processing fully inside their own facility. Immutable audit logging aligned with NIST and FISMA frameworks. IAM controls, always-on encryption, and integration with existing Oracle Database protections.
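As a sketch of what the dedicated-cluster control looks like programmatically, the following uses the OCI Python SDK's generative_ai service. The cluster type, unit shape, and compartment OCID are assumptions and placeholders; check the SDK reference for the values available in your tenancy and region.

```python
# Hedged sketch: provision a dedicated AI cluster so inference runs on
# capacity reserved to your tenancy rather than shared endpoints.
# Cluster type, unit shape, and compartment OCID are placeholders.
import oci

config = oci.config.from_file()  # ~/.oci/config
client = oci.generative_ai.GenerativeAiClient(config)

details = oci.generative_ai.models.CreateDedicatedAiClusterDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    display_name="sensitive-inference-cluster",
    type="HOSTING",             # dedicated hosting (vs. FINE_TUNING)
    unit_count=1,
    unit_shape="LARGE_COHERE",  # placeholder unit shape
)

response = client.create_dedicated_ai_cluster(details)
print(response.data.id, response.data.lifecycle_state)
```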
What buyers miss. For regulated industries, data-sovereignty mandates, or companies already running Oracle databases, OCI Enterprise AI is often a stronger fit than the buyer realizes — and the Oracle Database 23ai/24ai releases embed vector search and RAG directly into the database engine, reducing the "move data to AI" pattern that creates governance exposure in the first place.
NVIDIA Confidential Computing (for the Layer 4 buyer)
This is a substrate, not a product you buy. If your threat model doesn't specifically name Layer 4 — the cloud operator seeing your plaintext during inference — you don't need this. If it does, this is what the serious answer looks like underneath whoever you contract with.
What it is. Hardware-based trusted execution environments on NVIDIA H100, H200, and Blackwell GPUs. Per NVIDIA's technical documentation, data and AI models remain encrypted and cannot be accessed during execution by the hypervisor, cloud provider, host operating system, or anyone with physical access to the infrastructure. Remote attestation lets you verify cryptographically that you're dealing with an authentic, unmodified NVIDIA GPU before you send any data.
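Attestation is the piece you can actually exercise in code. The sketch below follows the sample flow published in NVIDIA's nvtrust repository (the nv_attestation_sdk package); treat the exact class and method names as assumptions to verify against the version you install.

```python
# Hedged sketch: verify GPU evidence via NVIDIA's remote attestation
# service (NRAS) before releasing data to the node. The SDK surface
# shown here is taken from nvtrust samples and may differ by version.
from nv_attestation_sdk import attestation

NRAS_URL = "https://nras.attestation.nvidia.com/v1/attest/gpu"

client = attestation.Attestation()
client.set_name("node-1")
client.set_nonce("931d8dd0add203ac3d8b4fbde75e115278eefcdceac5b87671a748f32364dfcb")
client.add_verifier(attestation.Devices.GPU, attestation.Environment.REMOTE, NRAS_URL, "")

evidence = client.get_evidence()
if client.attest(evidence):
    print("GPU attested: safe to send the workload to this node.")
else:
    print("Attestation failed: do not send plaintext to this node.")
```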
How to access it. Azure confidential computing, Google Cloud Confidential VMs with H100, and Oracle OCI confidential computing all offer TEE-backed inference for specific model workloads. Red Hat OpenShift with Confidential Containers (CoCo) is the open-source path. On-premises, NVIDIA's Secure AI with Protected PCIe mode is available on HGX H100 and H200 8-GPU systems.
When it's the right answer. Healthcare workloads where the trust model must exclude cloud operators. Financial services workloads with trade-secret or MNPI exposure. Government and defense workloads requiring verifiable hardware isolation. When your threat model specifically names "the cloud operator sees our plaintext during inference" as the risk you cannot accept. For most mid-market workloads, confidential computing is over-engineered. For the ones where it's needed, nothing else qualifies.
The shadow-AI layer (every platform you don't have a contract with)
Most companies already have an AI governance problem. They just haven't inventoried it yet.
The problem. Every mid-market company has employees using AI tools that were never approved, never logged, and never scoped. Consumer ChatGPT. Personal Gemini. Free Claude. Browser extensions. Meeting-summarizer bots. Each one a different data handling agreement. Each one invisible to your security team.
Controls that actually work. Network-level detection of AI API traffic (OpenAI, Anthropic, Google AI endpoints) to build an inventory of what's actually being used. Endpoint DLP that blocks or warns on sensitive content being pasted into browser-based AI tools — Microsoft Purview Endpoint DLP does this for Copilot-adjacent browsers; Defender for Cloud Apps catalogs detected AI apps as "Generative AI." Identity-level policies through Entra or Okta that enforce corporate accounts only for approved AI tools. A clearly written and communicated acceptable-use policy that names specific tools and specific data categories.
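As a starting point for that network-level inventory, here is a minimal sketch that scans a proxy or DNS log export for known AI endpoints. The file name and column names are assumptions; adapt them to your log schema, and extend the endpoint list to whatever your detection tooling already catalogs.

```python
# Minimal sketch: build a shadow-AI inventory from a proxy/DNS log
# export (assumed CSV with "client" and "domain" columns -- adapt to
# your schema). Flags traffic to well-known AI API endpoints.
import csv
from collections import defaultdict

AI_ENDPOINTS = {
    "api.openai.com": "OpenAI API",
    "chatgpt.com": "ChatGPT (consumer)",
    "api.anthropic.com": "Anthropic API",
    "claude.ai": "Claude (consumer)",
    "generativelanguage.googleapis.com": "Google AI (Gemini API)",
    "gemini.google.com": "Gemini (consumer)",
}

inventory = defaultdict(set)
with open("proxy_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["domain"].lower().lstrip(".")
        for endpoint, label in AI_ENDPOINTS.items():
            if domain == endpoint or domain.endswith("." + endpoint):
                inventory[label].add(row["client"])

for label, clients in sorted(inventory.items()):
    print(f"{label}: {len(clients)} distinct clients")
```

Even a crude pass like this turns "we think people are using AI" into a list of tools and counts, which is what the acceptable-use policy and identity controls then have to govern.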
What doesn't work. Blanket bans. They push usage further into shadow channels and create no visibility. The WRITER 2026 survey found that 67% of companies already believe they've suffered a breach from unapproved AI — that number only grows if your response to AI is prohibition rather than governance. Channel the experimentation. Don't block it.
Want the full argument?
This Playbook is the operational layer. The Deep Dive is the strategic one — the four-layer framework, the governance archetypes, the three-scenario walkthrough. Read the two together for the full picture.
Read Deep Dive No. 002 →

Sources

AWS — Bedrock Security, Privacy, and Responsible AI documentation
Google Cloud — Vertex AI Zero Data Retention documentation
Microsoft Learn — Data, Privacy, and Security for Microsoft 365 Copilot
Microsoft Learn — Microsoft Purview data security and compliance protections for generative AI apps
Microsoft Learn — Data, privacy, and security for Azure Direct Models in Microsoft Foundry
Anthropic — Commercial Terms of Service
Anthropic — Updates to Consumer Terms and Privacy Policy (scope exclusion for commercial products)
Oracle — OCI Enterprise AI with zero data retention endpoints
NVIDIA Developer Blog — Protecting Sensitive Data and AI Models with Confidential Computing
WRITER (April 2026) — 2026 Enterprise AI Adoption Survey (n=2,400)