
Federal Zero Trust Architecture Blueprint — Snowflake as the AI + Data Control Plane

Federal agencies have a mandate: implement Zero Trust Architecture (OMB M-22-09). But the mandate was written before LLM-based workloads became a production reality. Agencies now face a harder problem: how to extend Zero Trust not just to data access, but to AI inference — model invocations, agent tool calls, retrieval-augmented generation — without building a second governance stack.

This is where Snowflake’s architecture becomes uniquely relevant. Cortex AI runs inside the same platform as the data it operates on. That means the same identity, network, RBAC, masking, row access, tagging, and audit controls that govern data access also govern AI inference. There is no separate AI control plane. Horizon governs both.

This blueprint maps that unified control plane to NIST SP 800-207, CISA’s Zero Trust Maturity Model, and OMB M-22-09 — with SQL examples, a maturity assessment, and the specific architecture that makes ZTA-governed AI possible.


Why AI Needs a Zero Trust Control Plane

Most organizations bolt AI onto existing infrastructure as a separate layer: a different API endpoint, different credentials, different network path, different (or no) governance model. The result:

  • Shadow AI — teams call external model APIs with data that bypasses classification and access controls
  • No audit trail — who invoked which model, with what data, producing what output? Nobody knows
  • Policy divergence — data governance says PII must be masked, but the AI pipeline reads raw columns because it’s a “different system”
  • Network blind spots — AI traffic routes through public endpoints even when data access requires PrivateLink

The fix isn’t layering more controls on top. It’s running AI inference inside the data platform, so the existing ZTA stack governs everything uniformly.
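As a concrete sketch of what "inside the data platform" means (table and column names are illustrative), an inference call is just SQL, so it inherits the caller's role, network policy, and data policies with no extra configuration:

```sql
-- Runs as the caller's current role; masking and row access policies
-- on support_tickets (illustrative table) apply before any text
-- reaches the model.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large2',
  'Summarize this ticket: ' || ticket_text
) AS summary
FROM support_tickets
WHERE created_at > DATEADD('day', -1, CURRENT_TIMESTAMP());
```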


The Unified Control Plane Architecture

Snowflake’s architecture collapses the data + AI governance split into a single enforcement stack. Whether the request is a SQL query, a Cortex COMPLETE() call, a Cortex Agent tool invocation, or a Cortex Search retrieval — it traverses the same ZTA layers:

                        ┌──────────────────────────────────┐
                        │        Policy Engine (PE)        │
                        │                                  │
                        │  Identity Provider (Okta/Entra)  │
                        │  + Snowflake RBAC Engine         │
                        │  + Masking & Row Access Policies │
                        │  + Horizon Classification        │
                        └────────────────┬─────────────────┘
                                         │ grant / deny
                        ┌────────────────▼─────────────────┐
                        │    Policy Administrator (PA)     │
                        │                                  │
                        │  SAML Assertion / OAuth Token    │
                        │  Key-Pair Authentication         │
                        │  Session Policy Enforcement      │
                        └────────────────┬─────────────────┘
                                         │ session credential
  ┌──────────┐          ┌────────────────▼─────────────────┐
  │  Subject │─────────►│  Policy Enforcement Point (PEP)  │
  │  (User / │  request │                                  │
  │  Agent / │◄─────────│ Network Policies + Network Rules │
  │  Service)│  allow / │ Private Connectivity/PrivateLink │
  └──────────┘  deny    │  Session-Level Controls          │
                        └────────────────┬─────────────────┘
          ┌───────────────────┬──────────┴────────┬───────────────────┐
          ▼                   ▼                   ▼                   ▼
 ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
 │ Data Workloads  │ │ AI Inference    │ │ AI Agents       │ │ Managed MCP     │
 │                 │ │                 │ │                 │ │                 │
 │ Tables, Views   │ │ COMPLETE()      │ │ Cortex Agents   │ │ MCP Servers     │
 │ Queries, Tasks  │ │ SUMMARIZE()     │ │ Search Services │ │ External AI     │
 │ Streams, Pipes  │ │ SENTIMENT()     │ │ Tool Calls      │ │ Client Access   │
 └─────────────────┘ └─────────────────┘ └─────────────────┘ └─────────────────┘
          │                   │                   │                   │
          └───────────────────┴─────────┬─────────┴───────────────────┘
                                        ▼
            ┌───────────────────────────────────────────────────────┐
            │                Horizon Governance Layer               │
            │                                                       │
            │     Classification · Tagging · Lineage · Policies     │
            └───────────────────────────────────────────────────────┘

            ┌───────────────────────────────────────────────────────┐
            │                 Continuous Monitoring                 │
            │                                                       │
            │    LOGIN_HISTORY · QUERY_HISTORY · ACCESS_HISTORY     │
            │  Trust Center · SIEM Integration · Anomaly Detection  │
            └───────────────────────────────────────────────────────┘

This maps directly to NIST SP 800-207’s three core components (PE, PA, PEP) — but with a critical extension: the resources behind the PEP include both data objects and AI inference services, all governed by the same Horizon layer.


How ZTA Controls Apply to AI Inference

When a user or application calls a Cortex function, the request passes through every enforcement layer — the same layers that govern a SELECT query:

  ┌───────────────────────────────────────────────────┐
  │ Cortex AI Request                                 │
  │ SELECT SNOWFLAKE.CORTEX.COMPLETE(                 │
  │   'mistral-large2', prompt || context_data)       │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 1: Identity — SSO / MFA / key-pair verified │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 2: Network — PrivateLink / network policy   │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 3: RBAC — role has USAGE on Cortex?         │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 4: Data policies — row/column access on     │
  │           the data fed into the model             │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 5: Audit — QUERY_HISTORY + ACCESS_HISTORY   │
  │           record the inference call + data read   │
  └───────────────────────────────────────────────────┘

The model never sees data the user’s role can’t access. If a masking policy redacts SSNs for ANALYST_ROLE, then a Cortex SUMMARIZE() call running as ANALYST_ROLE receives the masked values as input. Policy enforcement happens before the data reaches the model — not after.
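This is easy to verify in place (table and role names are illustrative): because the prompt expression is evaluated under the caller's role, any masked column arrives in the prompt already masked.

```sql
USE ROLE analyst_role;

-- The concatenation below is evaluated as analyst_role, so the prompt
-- string already contains the masked SSN before the model is invoked.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large2',
  'Draft a case note for: ' || ssn || ' / ' || notes  -- ssn arrives masked
) AS case_note
FROM customers   -- illustrative table with a masking policy on ssn
LIMIT 5;
```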

Cortex Agents, Search Services, and Managed MCP Under ZTA

Cortex Agents, Cortex Search Services, and Snowflake’s managed MCP (Model Context Protocol) servers all operate under the same ZTA enforcement. An agent that queries data, calls tools, and generates responses does so as a Snowflake role. Every tool call is a query. Every query is governed by:

  • RBAC — the agent’s role determines which tables, views, and functions it can access
  • Row access policies — the agent only sees rows it’s authorized for
  • Dynamic masking — sensitive columns are masked before the agent reads them
  • Network policies — the agent’s service connection is bound to the same private endpoint rules
  • Object tagging — data classified as ITAR, PII, or CUI is governed by tag-based policies regardless of whether a human or an agent accesses it

Snowflake Managed MCP extends this to external AI clients. When an external agent (Claude Code, Cursor, a custom LangGraph agent) connects to Snowflake via a managed MCP server, it authenticates through the same identity stack (OAuth, key-pair, SSO) and operates under the same role-based access control. The MCP server doesn’t bypass the ZTA layers — it’s another entry point into the same enforcement stack.

This is architecturally significant: MCP is becoming the standard protocol for AI agents to access data sources and tools. In most implementations, MCP servers run outside the data platform with their own credentials and no governance. Snowflake’s managed MCP servers run inside the platform boundary, which means:

  • The MCP connection authenticates through Snowflake’s identity layer (OAuth tokens, key-pair auth)
  • Network policies restrict which endpoints can reach the MCP server
  • The MCP server executes queries as the authenticated role — all RBAC, masking, and row access policies apply
  • Every MCP tool call is logged in QUERY_HISTORY with full lineage via ACCESS_HISTORY
  • No data is extracted outside the platform boundary without passing through the policy stack

-- Grant a Cortex agent / MCP service role access to specific data only
CREATE ROLE cortex_agent_role;
GRANT USAGE ON DATABASE analytics TO ROLE cortex_agent_role;
GRANT USAGE ON SCHEMA analytics.public TO ROLE cortex_agent_role;
GRANT SELECT ON TABLE analytics.public.incidents TO ROLE cortex_agent_role;

-- Apply a network policy to the service account used by MCP
ALTER USER svc_mcp_agent SET NETWORK_POLICY = private_only_policy;

-- The agent inherits all row access and masking policies on that table
-- No additional configuration needed — Horizon enforces uniformly
-- Whether accessed via SQL, Cortex, or MCP — same policies apply

The AI ZTA Mapping

| ZTA Concern for AI | Snowflake Approach |
| --- | --- |
| Who can invoke AI models? | RBAC — grant USAGE on Cortex functions to specific roles |
| What data can the model see? | Row access + masking policies apply before data reaches the model |
| Where does inference run? | Inside Snowflake’s boundary — no data leaves to external APIs |
| Is AI usage audited? | QUERY_HISTORY captures every Cortex and MCP call with full lineage |
| Can AI access classified data? | Tag-based policies (ITAR, CUI) enforce classification-driven access |
| Network path for AI traffic? | Same PrivateLink / network rules — no separate AI or MCP endpoint |
| How do external AI agents connect? | Managed MCP servers authenticate via OAuth/key-pair, operate under RBAC |
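The first concern — who can invoke models at all — comes down to ordinary grants. One common pattern (the custom role name is illustrative) is to revoke the default and grant selectively:

```sql
-- Cortex functions are gated by the SNOWFLAKE.CORTEX_USER database role.
-- Revoke the broad default, then grant only to approved roles.
REVOKE DATABASE ROLE SNOWFLAKE.CORTEX_USER FROM ROLE PUBLIC;
GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE ai_analyst_role;
```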

The Federal ZTA Foundation

The AI control plane above works because Snowflake’s underlying security architecture aligns with federal Zero Trust requirements. Three documents define the mandate:

| Document | Role |
| --- | --- |
| NIST SP 800-207 | Defines the architecture — Policy Engine, Policy Administrator, Policy Enforcement Point |
| CISA Zero Trust Maturity Model v2.0 | Defines the five pillars and four maturity levels (Traditional → Optimal) |
| OMB M-22-09 | Operationalizes EO 14028 — specific requirements with deadlines |

The following sections map Snowflake’s capabilities to each CISA pillar — these are the foundation controls that enable ZTA-governed AI.

Pillar 1: Identity

Validate every user and service identity, continuously.

| Maturity Level | Snowflake Capability |
| --- | --- |
| Traditional | Username/password authentication |
| Initial | SSO via SAML 2.0 (Okta, Entra ID, Ping); MFA enforcement |
| Advanced | SCIM provisioning for automated lifecycle; key-pair auth for service accounts; OAuth with scoped tokens |
| Optimal | Phishing-resistant MFA via IdP (PIV/CAC → SAML); session policies enforcing re-auth; zero standing privileges |

-- Enforce MFA for all users (requires SECURITYADMIN or higher)
CREATE AUTHENTICATION POLICY require_mfa
  MFA_AUTHENTICATION_METHODS = ('TOTP')
  CLIENT_TYPES = ('SNOWFLAKE_UI', 'DRIVERS', 'SNOWSQL');

ALTER ACCOUNT SET AUTHENTICATION POLICY require_mfa;
-- SCIM provisioning (Okta example) — automates user lifecycle
CREATE SECURITY INTEGRATION okta_scim
  TYPE = SCIM
  SCIM_CLIENT = 'OKTA'
  RUN_AS_ROLE = 'GENERIC_SCIM_PROVISIONER';
-- Key-pair auth for service accounts — eliminates passwords entirely
ALTER USER svc_etl_pipeline SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqhki...';
-- Session policies for continuous re-validation
CREATE SESSION POLICY strict_session
  SESSION_IDLE_TIMEOUT_MINS = 15
  SESSION_UI_IDLE_TIMEOUT_MINS = 10;

ALTER ACCOUNT SET SESSION POLICY strict_session;

Pillar 2: Devices

Validate device health and compliance before granting access.

Snowflake does not directly inspect device state — this is handled at the IdP layer. Configure your IdP’s conditional access policies to evaluate device compliance (managed device, EDR active, OS patched) before issuing the SAML assertion to Snowflake. Non-compliant devices never receive a token.

Network policies provide an indirect device posture signal by restricting access to known corporate VPN egress IPs:

CREATE NETWORK RULE corporate_egress
  TYPE = IPV4
  VALUE_LIST = ('10.0.0.0/8', '203.0.113.0/24')
  MODE = INGRESS;

CREATE NETWORK POLICY corp_only
  ALLOWED_NETWORK_RULE_LIST = ('corporate_egress');

ALTER ACCOUNT SET NETWORK_POLICY = corp_only;

Pillar 3: Networks

Encrypt all traffic. Segment access. Eliminate implicit trust from network location.

| Maturity Level | Snowflake Capability |
| --- | --- |
| Traditional | Public endpoint, no IP restrictions |
| Initial | Network policies with IP allow-lists |
| Advanced | Network rules (VPC/subnet-level); private connectivity (PrivateLink) |
| Optimal | Private-only access (public blocked); per-user/per-integration network policies |

This pillar is critical for AI workloads: the same PrivateLink and network rules that govern data queries also govern Cortex inference calls. There is no separate AI network path.

-- Private-only access — the Optimal configuration
CREATE NETWORK RULE private_endpoint_only
  TYPE = AWSVPCEID
  VALUE_LIST = ('vpce-0a1b2c3d4e5f67890')
  MODE = INGRESS;

CREATE NETWORK RULE admin_breakglass
  TYPE = IPV4
  VALUE_LIST = ('198.51.100.10/32')
  MODE = INGRESS;

CREATE NETWORK POLICY zero_trust_network
  ALLOWED_NETWORK_RULE_LIST = ('private_endpoint_only', 'admin_breakglass');

ALTER ACCOUNT SET NETWORK_POLICY = zero_trust_network;
-- Per-user network policies for granular segmentation
ALTER USER svc_etl_pipeline SET NETWORK_POLICY = private_only_policy;
ALTER USER svc_cortex_agent SET NETWORK_POLICY = private_only_policy;
ALTER USER jsmith SET NETWORK_POLICY = vpn_and_private_policy;
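To confirm a per-user policy actually took effect (user name follows the earlier example), the applied parameter can be inspected directly:

```sql
-- Shows the network policy in force for this user, including whether it
-- was set at the user level or inherited from the account default.
SHOW PARAMETERS LIKE 'network_policy' IN USER svc_cortex_agent;
```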

Pillar 4: Applications & Workloads

Authorize each application individually. Control what workloads can do.

| Maturity Level | Snowflake Capability |
| --- | --- |
| Traditional | Shared service accounts with broad access |
| Initial | Dedicated roles per application; OAuth integrations |
| Advanced | Scoped OAuth tokens per workload; external function controls; Trust Center scanning |
| Optimal | Per-workload identity + network policy; query guardrails via views; session policies per integration |

For AI workloads, this means each Cortex-consuming application gets its own scoped OAuth integration, its own role, and its own network policy:

-- Scoped OAuth for an AI application
CREATE SECURITY INTEGRATION ai_app_oauth
  TYPE = EXTERNAL_OAUTH
  EXTERNAL_OAUTH_TYPE = AZURE
  EXTERNAL_OAUTH_ISSUER = 'https://login.microsoftonline.com/<tenant>/v2.0'
  EXTERNAL_OAUTH_TOKEN_USER_MAPPING_CLAIM = 'upn'
  EXTERNAL_OAUTH_SNOWFLAKE_USER_MAPPING_ATTRIBUTE = 'LOGIN_NAME'
  EXTERNAL_OAUTH_AUDIENCE_LIST = ('https://analysis.usgovcloudapi.net/...')
  EXTERNAL_OAUTH_ANY_ROLE_MODE = 'DISABLE';

-- Dedicated network policy for this AI integration
ALTER SECURITY INTEGRATION ai_app_oauth
  SET NETWORK_POLICY = ai_private_only;

Trust Center (Business Critical+) continuously scans your account for security misconfigurations — users without MFA, overly broad network policies, ACCOUNTADMIN overuse, stale credentials. These scans cover AI-related configurations the same way they cover data configurations.

Pillar 5: Data

Classify, protect, and monitor all data access — including data consumed by AI models.

This is where the unified control plane matters most. The same policies that protect data from direct SQL access also protect it from AI inference access:

  ┌───────────────────────────────────────────────────┐
  │ Request (SQL Query or Cortex AI Inference)        │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 1: RBAC — role has privilege on object?     │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 2: Row Access Policy — row visible?         │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 3: Dynamic Masking — column value masked?   │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Layer 4: Encryption — AES-256 at rest, TLS 1.2+   │
  │          Tri-Secret Secure (customer-managed key) │
  └─────────────────────────┬─────────────────────────┘
                            ▼
  ┌───────────────────────────────────────────────────┐
  │ Audit: ACCESS_HISTORY — column-level read/write   │
  └───────────────────────────────────────────────────┘

Classification-driven masking with object tags — tag once, enforce everywhere (data queries and AI inference):

-- Tag sensitive columns
ALTER TABLE customers MODIFY COLUMN ssn SET TAG pii_classification = 'SSN';
ALTER TABLE customers MODIFY COLUMN email SET TAG pii_classification = 'EMAIL';

-- Create masking policy driven by tag
CREATE MASKING POLICY pii_mask AS (val STRING)
RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('DATA_ENGINEER', 'COMPLIANCE_OFFICER') THEN val
    WHEN SYSTEM$GET_TAG_ON_CURRENT_COLUMN('pii_classification') = 'SSN'
      THEN 'XXX-XX-' || RIGHT(val, 4)
    WHEN SYSTEM$GET_TAG_ON_CURRENT_COLUMN('pii_classification') = 'EMAIL'
      THEN REGEXP_REPLACE(val, '.+@', '****@')
    ELSE '********'
  END;

-- Attach policy to the tag — auto-applies to all tagged columns
-- This protects data from both direct queries AND Cortex AI calls
ALTER TAG pii_classification SET MASKING POLICY pii_mask;

Row access policy for multi-tenant isolation — applies to human analysts and AI agents equally:

-- NOTE: agency_role_map is an illustrative mapping table (role → agency);
-- a lookup table keeps the authorization decision out of user-settable state
CREATE ROW ACCESS POLICY agency_isolation AS (agency_code VARCHAR)
RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'CROSS_AGENCY_ANALYST'
  OR EXISTS (
    SELECT 1 FROM governance.agency_role_map m
    WHERE m.role_name = CURRENT_ROLE()
      AND m.agency_code = agency_code
  );

ALTER TABLE shared_incidents ADD ROW ACCESS POLICY agency_isolation
  ON (agency_code);

Tri-Secret Secure — customer-managed key wrapping (Business Critical+). Snowflake’s encryption hierarchy includes a composite master key derived from both Snowflake’s key and the customer’s key in AWS KMS / Azure Key Vault / GCP Cloud KMS. Revoking your key renders all data — and all AI inference on that data — inaccessible.


Cross-Cutting Capabilities

Visibility & Analytics

Zero Trust demands continuous monitoring. Snowflake provides this through ACCOUNT_USAGE views — a built-in security telemetry layer that captures both data and AI workloads.

Authentication anomaly detection:

SELECT USER_NAME, CLIENT_IP, ERROR_CODE, ERROR_MESSAGE,
       COUNT(*) AS failure_count
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE IS_SUCCESS = 'NO'
  AND EVENT_TIMESTAMP > DATEADD('hour', -24, CURRENT_TIMESTAMP())
GROUP BY 1, 2, 3, 4
HAVING failure_count >= 5
ORDER BY failure_count DESC;

AI inference audit — who called Cortex, with what data:

-- All Cortex function invocations in the last 7 days
SELECT USER_NAME, ROLE_NAME, QUERY_TEXT, START_TIME,
       TOTAL_ELAPSED_TIME, ROWS_PRODUCED
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE QUERY_TEXT ILIKE '%SNOWFLAKE.CORTEX.%'
  AND START_TIME > DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY START_TIME DESC;

Cortex data lineage — which columns did the AI model read:

SELECT qh.USER_NAME, qh.ROLE_NAME,
       ah.DIRECT_OBJECTS_ACCESSED,
       ah.BASE_OBJECTS_ACCESSED,
       qh.START_TIME
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY ah
JOIN SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY qh
  ON ah.QUERY_ID = qh.QUERY_ID
WHERE qh.QUERY_TEXT ILIKE '%SNOWFLAKE.CORTEX.%'
  AND qh.START_TIME > DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY qh.START_TIME DESC;

Privilege escalation detection:

SELECT USER_NAME, QUERY_TEXT, START_TIME, CLIENT_APPLICATION_ID
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE ROLE_NAME = 'ACCOUNTADMIN'
  AND START_TIME > DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND USER_NAME NOT IN ('BREAK_GLASS_ADMIN')
ORDER BY START_TIME DESC;

Automation & Orchestration
#

  • SCIM automates identity lifecycle — user deprovisioned in IdP → deactivated in Snowflake (loses access to both data and AI).
  • Tasks + Alerts can automate security response (e.g., detect anomalous Cortex usage → call external function → create SIEM ticket).
  • Trust Center runs scheduled security scans with automated findings.
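The Tasks + Alerts pattern can be sketched as follows (the warehouse, threshold, notification integration, and recipient address are all illustrative; note that ACCOUNT_USAGE views can lag by up to ~45 minutes):

```sql
-- Fire when Cortex call volume spikes in the trailing hour
CREATE ALERT cortex_usage_spike
  WAREHOUSE = monitoring_wh          -- illustrative warehouse
  SCHEDULE  = '60 MINUTE'
  IF (EXISTS (
    SELECT 1
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
    WHERE QUERY_TEXT ILIKE '%SNOWFLAKE.CORTEX.%'
      AND START_TIME > DATEADD('hour', -1, CURRENT_TIMESTAMP())
    HAVING COUNT(*) > 100            -- illustrative threshold
  ))
  THEN CALL SYSTEM$SEND_EMAIL(
    'security_notifications',        -- illustrative notification integration
    'soc@agency.gov',
    'Cortex usage spike',
    'More than 100 Cortex calls in the last hour — review QUERY_HISTORY.');

ALTER ALERT cortex_usage_spike RESUME;  -- alerts are created suspended
```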

Governance — Horizon as the Unified Layer

This is the architectural keystone. Snowflake Horizon provides unified governance for data and AI:

  • Automatic data classification — detects PII, PHI, and sensitive data across tables that Cortex consumes
  • Column-level lineage — tracks which columns were read by which Cortex calls (via ACCESS_HISTORY)
  • Tag propagation — tag a schema as ITAR and all downstream objects inherit the classification; AI workloads reading those objects are governed by the same tags
  • Policy management — masking and row access policies apply uniformly to SQL and Cortex

The result: one governance layer for both data and AI. No policy divergence, no shadow AI, no governance blind spots.
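One way to see the unified layer at work is to inventory tagged columns account-wide (the tag name follows the earlier pii_classification example); every column listed is governed identically for SQL and Cortex access:

```sql
-- All columns carrying the pii_classification tag, account-wide
SELECT object_database, object_schema, object_name,
       column_name, tag_value
FROM SNOWFLAKE.ACCOUNT_USAGE.TAG_REFERENCES
WHERE tag_name = 'PII_CLASSIFICATION'
ORDER BY object_database, object_name;
```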


OMB M-22-09 Compliance Matrix

OMB M-22-09 operationalizes the ZTA mandate with specific requirements. This mapping includes both data and AI workloads:

| M-22-09 Requirement | Snowflake Implementation (Data + AI) | Maturity |
| --- | --- | --- |
| Phishing-resistant MFA | SSO (SAML 2.0) → PIV/CAC-capable IdP; MFA authentication policy | Advanced |
| Centralized identity management | SCIM provisioning — governs access to both data and Cortex | Advanced |
| Device-level signal in authorization | IdP conditional access (device compliance → SAML claim) | Advanced |
| Encrypted network traffic | TLS 1.2+ enforced; PrivateLink for data and AI traffic | Optimal |
| Network micro-segmentation | Per-user/per-integration network policies; AI integrations isolated | Optimal |
| Data categorization and tagging | Horizon classification + tagging; applies to AI-consumed data | Advanced |
| Comprehensive audit logging | QUERY_HISTORY captures SQL + Cortex calls; ACCESS_HISTORY for lineage | Advanced |
| Least-privilege access | RBAC for Cortex functions + data; masking + row access policies | Optimal |
| Encryption at rest and in transit | AES-256 + TLS; Tri-Secret Secure covers data read by AI models | Optimal |
| Application-level security testing | Trust Center scanning covers AI-related configurations | Advanced |

ZTA Maturity Assessment

Run this against your Snowflake account to assess your current Zero Trust maturity across both data and AI:

-- =============================================================================
-- Zero Trust Maturity Self-Assessment (Data + AI)
-- Run as SECURITYADMIN or ACCOUNTADMIN
-- =============================================================================

-- 1. IDENTITY: Are all users on SSO with MFA?
SELECT 'IDENTITY - Users without SSO' AS check_name,
       COUNT(*) AS finding_count,
       CASE WHEN COUNT(*) = 0 THEN 'PASS' ELSE 'FAIL' END AS status
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE DELETED_ON IS NULL
  AND HAS_PASSWORD = 'true'
  AND NAME NOT IN ('SNOWFLAKE')

UNION ALL

-- 2. IDENTITY: Users without MFA?
SELECT 'IDENTITY - Users without MFA',
       COUNT(*),
       CASE WHEN COUNT(*) = 0 THEN 'PASS' ELSE 'FAIL' END
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE DELETED_ON IS NULL
  AND EXT_AUTHN_DUO = 'false'
  AND HAS_PASSWORD = 'true'

UNION ALL

-- 3. NETWORK: Account-level network policy active?
SELECT 'NETWORK - Account-level network policy',
       CASE WHEN (SELECT COUNT(*) FROM TABLE(INFORMATION_SCHEMA.POLICY_REFERENCES(
         REF_ENTITY_DOMAIN => 'ACCOUNT', REF_ENTITY_NAME => CURRENT_ACCOUNT()))
         WHERE POLICY_KIND = 'NETWORK_POLICY') > 0
       THEN 0 ELSE 1 END,
       CASE WHEN (SELECT COUNT(*) FROM TABLE(INFORMATION_SCHEMA.POLICY_REFERENCES(
         REF_ENTITY_DOMAIN => 'ACCOUNT', REF_ENTITY_NAME => CURRENT_ACCOUNT()))
         WHERE POLICY_KIND = 'NETWORK_POLICY') > 0
       THEN 'PASS' ELSE 'FAIL' END

UNION ALL

-- 4. DATA: Masking policies deployed?
SELECT 'DATA - Masking policies in use',
       CASE WHEN (SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
         WHERE POLICY_KIND = 'MASKING_POLICY') > 0
       THEN 0 ELSE 1 END,
       CASE WHEN (SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
         WHERE POLICY_KIND = 'MASKING_POLICY') > 0
       THEN 'PASS' ELSE 'FAIL' END

UNION ALL

-- 5. DATA: Row access policies deployed?
SELECT 'DATA - Row access policies in use',
       CASE WHEN (SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
         WHERE POLICY_KIND = 'ROW_ACCESS_POLICY') > 0
       THEN 0 ELSE 1 END,
       CASE WHEN (SELECT COUNT(*) FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
         WHERE POLICY_KIND = 'ROW_ACCESS_POLICY') > 0
       THEN 'PASS' ELSE 'FAIL' END

UNION ALL

-- 6. VISIBILITY: ACCOUNTADMIN usage (should be minimal)
SELECT 'VISIBILITY - ACCOUNTADMIN queries (7d)',
       COUNT(*),
       CASE WHEN COUNT(*) < 10 THEN 'PASS' ELSE 'REVIEW' END
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE ROLE_NAME = 'ACCOUNTADMIN'
  AND START_TIME > DATEADD('day', -7, CURRENT_TIMESTAMP())

UNION ALL

-- 7. AI: Cortex usage audit (who is calling AI functions?)
SELECT 'AI - Cortex invocations (7d)',
       COUNT(*),
       CASE WHEN COUNT(*) > 0 THEN 'REVIEW' ELSE 'NONE' END
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE QUERY_TEXT ILIKE '%SNOWFLAKE.CORTEX.%'
  AND START_TIME > DATEADD('day', -7, CURRENT_TIMESTAMP())

ORDER BY check_name;

Federal Certification Context

For government workloads, Snowflake provides dedicated infrastructure:

| Certification | Scope |
| --- | --- |
| FedRAMP High | Government regions (AWS GovCloud US-East/West) |
| DISA IL-5 | Provisional authorization for DoD workloads — CUI and mission-critical data |
| FIPS 140-2 | Validated cryptographic modules in government regions |
| SOC 2 Type II | Annual audit (available under NDA) |
| HIPAA | BAA available |
| ITAR | Supported in government regions — U.S. data residency, U.S. person access controls |
| StateRAMP | State and local government compliance |

Government accounts operate on isolated infrastructure (separate control plane, U.S.-only data residency, U.S. person access), with a distinct URL format (*.snowflakecomputing.us). Private connectivity within GovCloud ensures traffic — both data queries and AI inference — never traverses the public internet.


Getting Started

  1. Assess current state. Run the maturity assessment SQL above — it now includes AI usage checks.
  2. Identity first. Enforce SSO + MFA, deploy SCIM, eliminate password-based service accounts. This governs access to both data and Cortex.
  3. Lock the network. Deploy private connectivity, create network rules referencing VPC endpoints, block public access. AI traffic is covered automatically.
  4. Classify and protect data. Use Horizon automatic classification, tag sensitive columns, deploy masking and row access policies. These policies protect data from both SQL queries and AI inference.
  5. Govern AI workloads. Create dedicated roles for Cortex-consuming applications. Apply per-integration network policies. Audit Cortex usage via QUERY_HISTORY.
  6. Monitor everything. Export ACCOUNT_USAGE views to your SIEM. Build alerting on failed logins, ACCOUNTADMIN usage, and Cortex invocation patterns.
  7. Validate continuously. Enable Trust Center (Business Critical+) for automated security posture scanning.

Author: Kevin Keller. Personal blog about AI, Observability & Data Sovereignty. Snowflake-related articles explore the art of the possible and are not official Snowflake solutions or endorsed by Snowflake unless explicitly stated. Opinions are my own. Content is meant as educational inspiration, not production guidance.