Contents#
- Introduction
- Snowpark Container Services Security Overview
- Data Governance and Security in One Platform
- Snowpark Container Services Security Architecture
- SPCS Controllers Security
- SPCS Compute Pools Security
- SPCS Service Security
- SPCS Image Registry Security
- SPCS Auditing and Monitoring (event tables, structured logging, metrics, alerting)
- SPCS Communication Security
- Tunneling Into and Out of SPCS
- Conclusion
Introduction#
Snowpark Container Services (SPCS) is a fully managed container runtime that enables developers to deploy, manage, and scale containerized workloads – jobs, services, service functions – on secure Snowflake-managed infrastructure with configurable hardware options including GPUs. This article focuses on how Snowflake hardens container deployments: compute isolation, storage encryption, communication security, and the governance model that keeps data within Snowflake’s security boundary. For a comprehensive overview of the capability, see the SPCS documentation and tutorials.
Snowpark Container Services was built to simplify the overall deployment process. It introduces and reuses vocabulary that maps to standard Kubernetes concepts.
NOTE: Most simple Kubernetes workloads that do not deal with the orchestration layer will run as-is on top of Snowpark Container Services.
Table 1 – Snowpark Vocabulary Mapping with Kubernetes#
| Snowpark | Kubernetes | Definition |
|---|---|---|
| Node | Node / Worker | A single machine in an SPCS compute pool; a compute pool is a group of similar nodes |
| Container | Container | Same definition as standard containers |
| Controller | Controller | Same as Kubernetes, with one exception: fully managed by Snowflake – customers have no access to the controllers |
| Cluster | Cluster | Same as Kubernetes: compute pools + controllers |
| Image Registry | Image Registry | OCI v2 compliant service. Fully provisioned and managed by Snowflake. Customers interact using standard Docker client or SnowCLI |
| Service | Deployment + network spec | Represents overall specifications: containers list, mounted volumes, endpoints, images, resources, and the compute pool to use. Long-running workloads that do not terminate automatically |
| Service Instance | Pod | An instance that runs on a single node. Min/max instances control autoscaling |
| Job / Job Service | N/A | A special service that runs one time only, similar to stored procedures |
| Batch Job | N/A | A new workload type for parallelized, batch-oriented workloads across multiple nodes |
| Service Function | N/A | A UDF linked to a service, used to call a service from inside your Snowflake account |
Snowpark Container Services Security Overview#
This Snowpark runtime is built with two main pillars:
- Security is Paramount: The out-of-the-box security model meets Snowflake's high security standards.
- Ease of Use: Most modern container service infrastructure can be very complicated to build and maintain. Snowpark Container Services hides this complexity by providing a unique user experience where developers can focus on building applications and enabling their critical business functions without the hassle of managing the infrastructure.
By building and offering a fully managed container service natively integrated with Snowflake governance and security, Snowflake delivers ease of use and security together. The following layers are provided with the necessary security controls:
- Container Registry: Snowflake provisions and maintains the image registry infrastructure including networking, load balancing, storage, OS updates, and security patches transparently.
- Storage: Any related storage such as image registry storage, container mounted volumes, and block storage volumes are provisioned and secured by Snowflake. Any data stored in the Snowflake account is protected by Snowflake data security and the data governance capabilities built into the platform. Learn more about integrated security and governance with Snowflake Horizon.
- Compute: Snowflake provisions and maintains the compute infrastructure including networking, load balancing, storage, OS updates, and security patches transparently for the host OS and the container OS. In the event of a mandatory system reboot, Snowflake provides additional nodes with new hardened images and transparently redeploys containers without impacting customer workloads. Snowflake absorbs the resourcing costs of the additional nodes to meet service and security objectives.
- Container Orchestration Services: Snowflake maintains the Kubernetes infrastructure including networking, load balancing, storage, OS updates, and security patches transparently.
- Container GPU Run Time: Snowflake transparently maintains and updates Nvidia drivers.
- Metrics/Logs: Snowflake automatically provisions and manages a log collection capability using containers on all nodes, which provides customers with audit logs out of the box. Compute Pool metrics (CPU and memory per node and container) are generally available and integrate with Datadog and Prometheus.
- Controllers: Snowflake provisions and maintains the controller infrastructure including networking, load balancing, storage, OS updates, and security patches transparently.
- Networking and Communications: Snowflake provisions and orchestrates all container infrastructure networking components including routing, proxies, firewalling, DNS, mutual TLS (mTLS), and certificate management.
- Load Balancing / Application Gateways: Snowflake provisions and maintains all load balancing and application gateway infrastructure to scale and provide secure ingress and egress communications within the Snowflake account.
- Authentication: Customers can leverage Snowflake user identities to interact with Snowpark Container Services. Users can be provisioned manually or via SCIM. Authentication and authorization are handled automatically among the various containers. When a service or function needs to access data in your Snowflake account, Snowflake authentication and access control apply automatically – including Programmatic Access Tokens (PAT), OAuth, SAML, and Keypair authentication. More on this in the Data Governance and Security in One Platform section.
Data Governance and Security in One Platform#
SPCS is built on top of the Snowflake platform and leverages Snowflake’s high-standard security and data governance.
By deploying containers with Snowpark, data stays within the security and governance boundary of Snowflake. Any access to the data must go through Snowflake’s defense-in-depth layers:
- Network Access Controls: Network policies, data encryption in transit via TLS 1.2+.
- Identity and Access Management: Snowflake supports OAuth, SAML, Keypair, Programmatic Access Tokens (PAT), Workload Identity Federation, and username/password authentication methods.
- Data Governance: Any role-based access control, dynamic data masking policies, and row access policies are automatically applied. Customers do not need to maintain different copies of data or separate access control functions as data moves around – in contrast to traditional container infrastructure on other platforms.
- Encryption: Customer data in the Snowflake service is encrypted at rest using AES-256 by default, in transit using TLS 1.2+, and for inter-service communication Snowflake provisions and maintains mTLS. Block storage volumes now support Tri-Secret Secure (TSS) for customers requiring customer-managed keys.
- High Availability and Data Protection: Customer data stored in Snowflake leverages Snowflake built-in high availability and protection such as time travel and fail safe. In addition, cross-regional cross-cloud high availability is available with Snowflake business continuity and disaster recovery.
- End-to-End Observability: Customers have full visibility of their data processing and SPCS logs, including lineage, access history, account usage, and service monitoring and logs.
- Snowflake Horizon: The unifying framework across all the above, including the expanding Snowflake portfolio of security and compliance reports.
Snowpark Container Services Security Architecture#
Components listed in Table 1 are deployed inside Snowflake’s cloud infrastructure environment. All communication among those components is private over Snowflake’s internal networking infrastructure and leverages TLS to encrypt data flows.
Each customer has a dedicated cluster which includes controllers and compute pools – their security boundary – isolated from other customers using security groups (Network Security Groups). Customer A containers cannot communicate directly with Customer B’s containers. The only possible way of cross-customer communication is for Customer B to publish a publicly accessible endpoint, and then Customer A goes via the egress proxy and enters Customer B’s public endpoint via the ingress proxy. Within the same customer account, containers can communicate using the Snowflake internal network.
SPCS is generally available across AWS, Microsoft Azure, and Google Cloud Platform commercial regions.
SPCS Controllers Security#
Snowflake automatically provisions and deprovisions the controllers. Customers have no access to controllers. This helps developers focus on their applications instead of building and maintaining controller infrastructure. Snowflake engineers maintain the high availability of those controllers; in case of failure, any failed controllers are automatically rebuilt.
Controller nodes are not permanent. Snowflake routinely upgrades operating systems, drivers, and patches any security vulnerabilities found on those nodes as part of maintenance. A particular node may stay up for a maximum of one month, after which Snowflake retires it and replaces it with a newly updated one. Failure in controller nodes is transparent to customers, as Snowflake automatically rebuilds those nodes. Configurations are stored in the Snowflake cloud service layer that leverages built-in Snowflake high availability.
Finally, controller nodes run in their own network security groups that control all communications with the compute pools.
SPCS Compute Pools Security#
Customers create compute pools using SQL:
CREATE COMPUTE POOL tutorial_compute_pool
MIN_NODES = 2
MAX_NODES = 3
INSTANCE_FAMILY = GPU_NV_M;

Or using the Snowflake Python API:
new_compute_pool_def = ComputePool(
    name="MyComputePool",
    instance_family="CPU_X64_XS",
    min_nodes=1,
    max_nodes=2,
)
new_compute_pool = api_root.compute_pools.create(new_compute_pool_def)

Compute pools are the trust boundary where services and jobs run. They are a group of similar nodes. Customers choose the instance family and Snowflake automatically provisions the nodes. As part of maintenance, Snowflake manages upgrades to operating systems and drivers and patches any security vulnerabilities. A particular node may stay up for a maximum of one month; then SPCS retires it and replaces it with a newly updated one. The expected maintenance window is 30 minutes.
Snowflake controls which node a service instance runs on, based on customer specifications. A node could run one or more service instances/jobs. Multiple services can share the same compute pool, or a customer can have many compute pools and dedicate one per service or job, depending on functional and security requirements.
NOTE: When the customer drops a compute pool, the underlying nodes are terminated at the cloud service provider layer.
Compute Pool Metrics#
Compute Pool metrics are generally available for all SPCS customers. Customers can observe the performance of nodes, services, and jobs, including free memory, memory used by a specific container, and CPU utilization. Metrics can be integrated with enterprise monitoring solutions such as Datadog and Prometheus.
Block Storage#
Block storage is generally available for all SPCS customers on AWS, with additional cloud platforms in expansion. Block storage enables customers to provision and attach volumes with:
- Up to 16TiB of capacity
- 3,000–16,000 IOPS
- 125–1,000 MiB/s throughput
- Point-in-time snapshot backups
Block storage volumes now support Tri-Secret Secure (TSS) (GA November 2025), allowing customers requiring customer-managed encryption keys to satisfy their compliance requirements. When a customer drops a compute pool or service, attached block storage volumes can be preserved or explicitly deleted.
SPCS Service Security#
The service definition is the main component that defines the service or job running in SPCS. You define a service by specifying the specs, compute pool, and min/max instances:
CREATE SERVICE echo_service
IN COMPUTE POOL tutorial_compute_pool
FROM @tutorial_stage
SPEC = 'echo_spec.yaml'
MIN_INSTANCES = 1
MAX_INSTANCES = 3;

The service has the following security properties:
- Single-node instances: Service instances run on a single node. Snowflake orchestrates additional instances on other available nodes.
- Intra-service container communication: Containers on the same service instance communicate over localhost.
- Cross-service communication: Containers across services and service instances use the automatically provisioned FQDN: <service-name>.<schema-name>.<db-name>.snowflakecomputing.internal
- Load balancing: Snowflake manages all load balancing aspects of Kubernetes. Customers have no access to the underlying infrastructure.
- DNS: Snowflake manages all DNS naming resolutions.
- Role-based trust boundary: The role that creates the service defines the security context of that service. For services to communicate with each other, they must be created by the same role. As a result of this security control, services cannot be created under the default ACCOUNTADMIN, SECURITYADMIN, or ORGADMIN roles.
- User-mode containers: Containers in services run as user mode and cannot change hardware configurations of the host.
- Networking controls:
  - Egress: Controlled by External Access Integration (EAI) objects.
  - Ingress: Controlled by BIND SERVICE ENDPOINT and network policies. Ingress endpoints support authenticated access – end users must have a valid Snowflake account. Programmatic access using OAuth is fully supported. Programmatic Access Tokens (PAT) can be used in the Authorization header for API and CORS requests.
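For illustration, the internal DNS pattern above can be assembled with a small helper. This is a sketch: the function name and the lowercase/hyphen normalization (DNS labels permit neither case distinctions nor underscores, while Snowflake identifiers often contain underscores) are assumptions, not an official API.

```python
def internal_service_dns(service: str, schema: str, db: str) -> str:
    """Assemble the internal FQDN pattern from the text:
    <service-name>.<schema-name>.<db-name>.snowflakecomputing.internal

    Assumption: identifiers are lowercased and underscores become
    hyphens, since DNS labels allow neither.
    """
    def label(name: str) -> str:
        return name.lower().replace("_", "-")
    return f"{label(service)}.{label(schema)}.{label(db)}.snowflakecomputing.internal"

# A container in one service could then reach another service created
# by the same role, e.g.:
# url = f"http://{internal_service_dns('ECHO_SERVICE', 'PUBLIC', 'TUTORIAL_DB')}:8080/healthz"
```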
CORS Configuration#
CORS (Cross-Origin Resource Sharing) support is generally available (August 2025). Without CORS, browsers block JavaScript from making requests to a different origin than the page was loaded from. Since your frontend app (https://myapp.example.com) and your SPCS endpoint (https://<hash>.snowflakecomputing.app) are different origins, the browser will reject the request unless the SPCS endpoint returns the correct CORS headers.
Snowflake’s ingress proxy handles all CORS headers automatically based on corsSettings in your service specification – your service code does not need to handle CORS itself. The proxy intercepts the browser’s preflight OPTIONS request, returns the configured headers, and adds them to actual responses.
Service spec configuration:
endpoints:
  - name: myendpoint
    port: 8080
    public: true
    corsSettings:
      Access-Control-Allow-Origin:
        - "https://myapp.example.com"
        - "https://staging.example.com"
      Access-Control-Allow-Methods:
        - GET
        - POST
        - PUT
        - DELETE
      Access-Control-Allow-Headers:
        - Authorization
        - Content-Type
        - X-Request-ID

How the browser flow works:
- Your frontend JavaScript makes a fetch() call to the SPCS endpoint
- The browser sends a preflight OPTIONS request to check if the origin is allowed
- Snowflake’s ingress proxy responds with the CORS headers from your corsSettings
- If the origin matches, the browser proceeds with the actual request
- The Authorization header carries the PAT – this is why Authorization must be listed in Access-Control-Allow-Headers
Frontend example (JavaScript):
const PAT = "<your-pat-token>";
const SPCS_ENDPOINT = "https://<ingress-hostname>/echo";

// The browser automatically sends a preflight OPTIONS request first.
// Snowflake's proxy handles it — your service never sees it.
const response = await fetch(SPCS_ENDPOINT, {
  method: "POST",
  headers: {
    "Authorization": `Snowflake Token="${PAT}"`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    data: [[0, "hello world"]]
  }),
});
const result = await response.json();
console.log(result);

Common pitfalls:
- Missing Authorization in allowed headers: If you don’t list Authorization in Access-Control-Allow-Headers, the browser will block the preflight and your request never reaches the service.
- Wildcard origins: corsSettings requires explicit origins – * is not supported. This is intentional: Snowflake enforces that you know which origins should access your service.
- Forgetting the PAT format: The header must be Snowflake Token="<pat>" (with quotes around the token inside the header value), not Bearer <pat>.
- CSP interaction: Snowflake’s ingress proxy enforces a baseline Content-Security-Policy. If your service also serves a frontend that needs to call external APIs, configure an EAI – the CSP is automatically extended to allow browser-side requests to the same egress destinations.
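Outside the browser, the same PAT header format applies to direct API calls. Below is a minimal sketch using only the standard library – the endpoint and token values are placeholders, and build_request is a hypothetical helper, not part of any Snowflake SDK:

```python
import json
import urllib.request

def build_request(endpoint: str, pat: str, payload: dict) -> urllib.request.Request:
    # Note the header format: Snowflake Token="<pat>" -- not Bearer <pat>.
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f'Snowflake Token="{pat}"',
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_request("https://<ingress-hostname>/echo", "<your-pat-token>",
#                     {"data": [[0, "hello world"]]})
# response = urllib.request.urlopen(req)  # sends the POST
```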
Gateways#
Gateways allow routing ingress requests to multiple service endpoints behind a single hostname. This simplifies multi-service architectures and reduces the number of public endpoints exposed. For more information, see Use Gateways to route ingress requests to multiple endpoints.
SPCS Image Registry Security#
Snowflake provisions and orchestrates the underlying OCI v2 compliant image registry infrastructure and security. The image registry is accessed using a per-customer fully qualified domain name:
<orgname>-<acctname>.registry.snowflakecomputing.com

The Image Registry has the following attributes:
- Built on internal stage: The image registry is built on top of the Snowflake account internal stage.
- Multiple repositories: Repositories can be created with the CREATE IMAGE REPOSITORY command. A role can have read, write, or ownership permissions on the repository, as detailed in the documentation.
- Image metadata: Customers can query image metadata using SHOW IMAGES IN IMAGE REPOSITORY <name>, including image name, creation date, tags, digest, and image path.
- Docker CLI: Customers can use the Docker CLI to push/pull images.
- Snowflake REST API / SnowCLI: For listing, creating, and dropping repositories and managing access controls.
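To illustrate how the per-customer hostname composes into a full image reference, here is a small sketch. The image_ref helper and the host/db/schema/repo/image:tag layout it assumes are illustrative – verify the exact path format against the repository URL returned by SHOW IMAGE REPOSITORIES in your account:

```python
def registry_host(org: str, account: str) -> str:
    # Per-customer registry FQDN from the text:
    # <orgname>-<acctname>.registry.snowflakecomputing.com
    return f"{org}-{account}.registry.snowflakecomputing.com".lower()

def image_ref(org: str, account: str, db: str, schema: str,
              repo: str, image: str, tag: str = "latest") -> str:
    # Assumed layout: host/db/schema/repo/image:tag
    path = "/".join(p.lower() for p in (db, schema, repo, image))
    return f"{registry_host(org, account)}/{path}:{tag}"

# Then: docker tag my_app:latest <image_ref(...)> followed by docker push
```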
Authentication#
Snowflake accounts enforce MFA by default, which is incompatible with the docker login command. As a result, username/password authentication is not supported for Docker CLI unless the account administrator explicitly enables password-only login (not recommended). The recommended authentication methods are:
| Method | Docker CLI | REST API / SnowCLI |
|---|---|---|
| PAT (Programmatic Access Token) – Recommended | Yes | Yes |
| Keypair Authentication | via SnowCLI token | Yes |
| OAuth | via SnowCLI token | Yes |
| Workload Identity Federation (AWS, Azure, GCP, OIDC) | via SnowCLI token | Yes |
| Username/Password (only if MFA disabled – not recommended) | Yes | Yes |
Option 1 – SnowCLI (recommended): The Snowflake CLI handles all authentication methods transparently and calls docker login on your behalf:
snow spcs image-registry login
# Supports keypair, OAuth, PAT, Workload Identity -- no manual token management
# For private link environments:
snow spcs image-registry login --private-link

Option 2 – PAT with Docker CLI: Generate a PAT, then use it directly with Docker, using the literal string USER as the username and the PAT value as the password:
docker login <orgname>-<acctname>.registry.snowflakecomputing.com \
--username USER \
  --password "<your-pat-token>"

Option 3 – Short-lived token (CI/CD pipelines): Use snow spcs image-registry token to generate a short-lived OAuth token and pipe it into any OCI-compatible client:
snow spcs image-registry token --format=JSON | \
jq -r '.token' | \
docker login <orgname>-<acctname>.registry.snowflakecomputing.com \
  --username 0sessiontoken --password-stdin

This approach is suitable for automated pipelines where storing a long-lived PAT is undesirable.
Encryption#
The image registry is built on top of the internal stage. Images are encrypted in transit using TLS 1.2+ and at rest using the internal stage encryption (AES-256).
Note on Tri-Secret Secure for Image Registry: Image repositories built on internal stages follow Snowflake’s standard internal stage encryption model. For customers requiring Tri-Secret Secure encryption for container workload data, SPCS block storage volumes support TSS (GA November 2025). Customers should validate their compliance and security policy requirements for image storage.
Networking#
- Network Policies: Customers can leverage Snowflake network policies to control access to the image registry from authorized IP addresses.
- Private Link: The image registry now supports private link connectivity. Use
SYSTEM$GET_PRIVATELINK_CONFIGand create a CNAME record pointing to thespcs-registry-privatelink-urlvalue returned. This eliminates the need to route image push/pull traffic over the public internet. See Configuring private connectivity.
SPCS Auditing and Monitoring#
Snowflake provides and maintains logging containers to allow customers to monitor and access container logs. Snowflake manages the updating and patching of those logging containers. Three mechanisms are available, each suited to different use cases:
- SYSTEM$GET_SERVICE_LOGS: Real-time log tail for development and debugging. Limited to 100 KB.
- SPCS_GET_LOGS table function: Scoped to a single service, queries historical logs with full SQL.
- Event Tables: Full-featured log and metric persistence across all services – the foundation for production auditing, alerting, and monitoring.
Setting Up an Event Table#
Event tables capture everything your containers emit to stdout and stderr, plus platform metrics. Create one and associate it with your account:
CREATE EVENT TABLE my_db.my_schema.spcs_events
DATA_RETENTION_TIME_IN_DAYS = 90;
-- Associate with your account (requires ACCOUNTADMIN)
ALTER ACCOUNT SET EVENT_TABLE = my_db.my_schema.spcs_events;
-- Verify
SHOW PARAMETERS LIKE 'event_table' IN ACCOUNT;

Control what gets logged per service in the service spec:
spec:
  logExporters:
    eventTableConfig:
      logLevel: INFO   # INFO = stdout + stderr, ERROR = stderr only, NONE = disabled

Event Table Schema#
Every row in the event table follows a fixed schema. The key columns:
| Column | What’s in it |
|---|---|
| TIMESTAMP | UTC time of the event |
| RECORD_TYPE | LOG, METRIC, SPAN, or EVENT |
| RESOURCE_ATTRIBUTES | Source identification: service name, container name, compute pool, instance ID |
| RECORD | Metadata: severity_text for logs, metric.name for metrics |
| RECORD_ATTRIBUTES | Extra context: log.iostream (stdout/stderr) |
| VALUE | The actual log message or metric value |
| SCOPE | Scoping info (e.g., snow.spcs.platform for platform events) |
Distinguish between services, containers, and instances using RESOURCE_ATTRIBUTES:
{
  "snow.service.name": "MY_SERVICE",
  "snow.service.container.name": "main",
  "snow.container.instance": "0",
  "snow.compute_pool.name": "MY_POOL"
}

Querying Logs#
Recent logs for a service:
SELECT TIMESTAMP, VALUE::STRING AS log_message
FROM my_db.my_schema.spcs_events
WHERE TIMESTAMP > DATEADD(hour, -1, CURRENT_TIMESTAMP())
AND RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND RECORD_TYPE = 'LOG'
ORDER BY TIMESTAMP DESC
LIMIT 50;

Errors only (stderr):
SELECT TIMESTAMP, VALUE::STRING AS log_message
FROM my_db.my_schema.spcs_events
WHERE TIMESTAMP > DATEADD(hour, -6, CURRENT_TIMESTAMP())
AND RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND RECORD_TYPE = 'LOG'
AND RECORD_ATTRIBUTES:"log.iostream" = 'stderr'
ORDER BY TIMESTAMP DESC;

Filter by severity (when using structured logging):
SELECT TIMESTAMP, VALUE::STRING AS log_message
FROM my_db.my_schema.spcs_events
WHERE TIMESTAMP > DATEADD(day, -1, CURRENT_TIMESTAMP())
AND RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND RECORD_TYPE = 'LOG'
AND RECORD:"severity_text" IN ('ERROR', 'FATAL')
ORDER BY TIMESTAMP DESC;

Using SPCS_GET_LOGS (simpler, scoped to one service):
-- Last 24 hours (default)
SELECT * FROM TABLE(my_db.my_schema.my_service!SPCS_GET_LOGS());
-- Custom time range, filtered by container
SELECT * FROM TABLE(my_db.my_schema.my_service!SPCS_GET_LOGS(
START_TIME => DATEADD('day', -3, CURRENT_TIMESTAMP())
))
WHERE container_name = 'main';

Real-time debugging with SYSTEM$GET_SERVICE_LOGS:
-- Last 100 lines from container 'main' in instance 0
SELECT SYSTEM$GET_SERVICE_LOGS('my_service', 0, 'main', 100);
-- Logs from the PREVIOUS container run (essential for debugging crashes)
SELECT SYSTEM$GET_SERVICE_LOGS('my_service', 0, 'main', 100, true);

Structured Logging from Python#
If your container emits JSON to stdout, Snowflake parses it into the event table’s structured columns. This is far more useful than flat text – you can filter by severity, search by attributes, and correlate across services:
import json
import logging
from datetime import datetime, timezone
class SnowflakeStructuredHandler(logging.Handler):
    """Emit JSON logs that Snowflake parses into event table columns."""
    def emit(self, record):
        log_entry = {
            "severity_text": record.levelname,    # → RECORD.severity_text
            "body": record.getMessage(),          # → VALUE
            "timestamp": datetime.fromtimestamp(  # → TIMESTAMP
                record.created, tz=timezone.utc
            ).strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
            "attributes": {                       # → RECORD_ATTRIBUTES
                "module": record.module,
                "function": record.funcName,
                "line": record.lineno,
            }
        }
        print(json.dumps(log_entry), flush=True)

logger = logging.getLogger("my_service")
logger.setLevel(logging.DEBUG)
logger.addHandler(SnowflakeStructuredHandler())

# Usage
logger.info("Service started on port 8080")
logger.error("Connection to database failed after 3 retries")

The JSON field severity_text maps to RECORD:"severity_text", body maps to VALUE, and attributes maps to RECORD_ATTRIBUTES. Unrecognized JSON fields are silently ignored. Non-JSON log lines land in VALUE as plain strings.
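To make that mapping concrete, the sketch below mimics, in simplified form, how a stdout line could be classified on its way into the event table. The classify_log_line function is illustrative only – it is not Snowflake's actual ingestion code:

```python
import json

def classify_log_line(raw: str) -> dict:
    """Simplified model of event-table ingestion: JSON object lines are
    split into structured columns; anything else lands in VALUE as a
    plain string."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        parsed = None
    if not isinstance(parsed, dict):
        return {"VALUE": raw, "RECORD": {}, "RECORD_ATTRIBUTES": {}}
    return {
        "VALUE": parsed.get("body"),
        "RECORD": {"severity_text": parsed.get("severity_text")},
        "RECORD_ATTRIBUTES": parsed.get("attributes", {}),
    }

# classify_log_line('{"severity_text": "ERROR", "body": "boom"}')
# classify_log_line("Service started on port 8080")  # plain-string VALUE
```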
Platform Metrics#
Platform metrics (CPU, memory, GPU, network, storage) land in the same event table as logs, distinguished by RECORD_TYPE = 'METRIC'. Enable them in the service spec:
spec:
  platformMonitor:
    metricConfig:
      groups:
        - system         # CPU, memory, GPU usage
        - system_limits  # resource limits and requests
        - network        # egress/ingress packet and byte counters
        - storage        # volume IOPS, throughput, capacity

Query CPU and memory usage over the last hour:
SELECT
DATE_TRUNC('minute', TIMESTAMP) AS minute,
RESOURCE_ATTRIBUTES:"snow.service.container.name"::STRING AS container,
AVG(CASE WHEN RECORD:"metric.name" = 'container.cpu.usage'
THEN CAST(VALUE AS FLOAT) END) AS avg_cpu_cores,
AVG(CASE WHEN RECORD:"metric.name" = 'container.memory.usage'
THEN CAST(VALUE AS FLOAT) / (1024*1024*1024) END) AS avg_memory_gb
FROM my_db.my_schema.spcs_events
WHERE TIMESTAMP > DATEADD(hour, -1, CURRENT_TIMESTAMP())
AND RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND RECORD_TYPE = 'METRIC'
AND RECORD:"metric.name" IN ('container.cpu.usage', 'container.memory.usage')
GROUP BY 1, 2
ORDER BY 1 DESC;

Check container restarts and exit codes (debugging crashes):
SELECT
TIMESTAMP,
RESOURCE_ATTRIBUTES:"snow.service.container.name"::STRING AS container,
RECORD:"metric.name"::STRING AS metric,
VALUE
FROM my_db.my_schema.spcs_events
WHERE RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND RECORD_TYPE = 'METRIC'
AND RECORD:"metric.name" IN (
'container.restarts',
'container.state.last.finished.exitcode',
'container.state.last.finished.reason'
)
AND TIMESTAMP > DATEADD(hour, -6, CURRENT_TIMESTAMP())
ORDER BY TIMESTAMP DESC;

Using SPCS_GET_METRICS (scoped to one service):
SELECT * FROM TABLE(my_db.my_schema.my_service!SPCS_GET_METRICS(
START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())
));

Alerting on Container Errors#
Combine event tables with Snowflake Alerts to get notified when things go wrong:
CREATE OR REPLACE ALERT spcs_error_alert
WAREHOUSE = my_wh
SCHEDULE = '5 minute'
IF (EXISTS (
SELECT 1
FROM my_db.my_schema.spcs_events
WHERE TIMESTAMP > DATEADD(minute, -5, CURRENT_TIMESTAMP())
AND RECORD_TYPE = 'LOG'
AND RESOURCE_ATTRIBUTES:"snow.service.name" = 'MY_SERVICE'
AND (RECORD:"severity_text" IN ('ERROR', 'FATAL')
OR RECORD_ATTRIBUTES:"log.iostream" = 'stderr')
))
THEN
CALL SYSTEM$SEND_SNOWFLAKE_NOTIFICATION(
SNOWFLAKE.NOTIFICATION.TEXT_PLAIN(
'SPCS service MY_SERVICE has errors - check event table'
),
'{"my_email_integration": {}}'
);
ALTER ALERT spcs_error_alert RESUME;

Creating Views for Easier Access#
Raw event table queries are verbose. Create views to simplify daily use:
-- Clean log view
CREATE OR REPLACE VIEW my_db.my_schema.service_logs AS
SELECT
TIMESTAMP,
RESOURCE_ATTRIBUTES:"snow.service.name"::STRING AS service_name,
RESOURCE_ATTRIBUTES:"snow.service.container.name"::STRING AS container_name,
RESOURCE_ATTRIBUTES:"snow.container.instance"::STRING AS instance_id,
RECORD:"severity_text"::STRING AS severity,
RECORD_ATTRIBUTES:"log.iostream"::STRING AS stream,
VALUE::STRING AS log_message
FROM my_db.my_schema.spcs_events
WHERE RECORD_TYPE = 'LOG';
-- Now querying is simple
SELECT * FROM service_logs
WHERE service_name = 'MY_SERVICE'
AND severity = 'ERROR'
AND TIMESTAMP > DATEADD(hour, -1, CURRENT_TIMESTAMP());

Prometheus and OpenTelemetry Integration#
For application-level custom metrics, Snowflake provides a Prometheus sidecar container that scrapes your app’s /metrics endpoint and writes to the event table:
spec:
  containers:
    - name: main
      image: /my_db/my_schema/my_repo/my_app:latest
    - name: prometheus
      image: /snowflake/images/snowflake_images/monitoring-prometheus-sidecar:0.0.1
      args:
        - "-e"
        - "localhost:8000/metrics,1m"   # scrape /metrics every 1 minute

For more control, use OpenTelemetry directly from your application code to emit custom counters, histograms, and traces – these land in the same event table and can be queried alongside platform metrics and logs.
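For the sidecar to have something to scrape, your application must expose a /metrics endpoint in the Prometheus text exposition format. Here is a minimal standard-library sketch – the metric name app_requests_total and port 8000 (matching the scrape target in the spec above) are illustrative choices:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS_TOTAL = 0  # incremented by your application code

def render_metrics() -> str:
    # Prometheus text exposition format: HELP/TYPE comments plus one sample per line.
    return (
        "# HELP app_requests_total Requests handled by this container.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUESTS_TOTAL}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on the port the sidecar scrapes:
# HTTPServer(("0.0.0.0", 8000), MetricsHandler).serve_forever()
```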
Compute pool node-level metrics are also exposed via a Prometheus endpoint on TCP port 9001 on each node, accessible from within the compute pool. Enterprise monitoring tools (Datadog, Grafana, etc.) can scrape these endpoints or query the event table via Snowflake connectors.
SPCS-specific auditing and monitoring should be used in conjunction with overall Snowflake security and observability data to provide end-to-end monitoring and auditing. For more details, see Snowflake Horizon.
SPCS Communication Security#
This section covers service-to-service, Snowflake-to-service, service-to-Snowflake, ingress, egress, and private connectivity patterns.
SPCS Service to Service Communication#
Services communicate using the Snowflake internal network, leveraging Snowflake-provisioned load balancing, DNS naming, and resolution.
Service-to-Service communication has two patterns:
- Same service instance: Containers on the same service instance use localhost and defined port numbers.
- Cross-service / cross-instance: Containers use the automatically provisioned FQDN:
<service-name>.<schema-name>.<db-name>.snowflakecomputing.internal
The Snowflake role that creates the service defines the service trust boundary. For services to communicate directly, they must be created by the same role. Snowflake converts these roles into Kubernetes network policies. Services created by different roles must communicate via public endpoints (egress – ingress).
NOTE: As a result of this security boundary, services cannot be created under the default ACCOUNTADMIN, SECURITYADMIN, or ORGADMIN roles.
SPCS Snowflake to Service Communication#
Customers can build services and call those services from inside their Snowflake accounts (for example, for ML inference). Snowflake enables this via a UDF type called a Service Function.
Once an endpoint is created, the service function uses that endpoint to call the service internally using mTLS. The function uses the current user’s identity; the user must have USAGE rights on the function, and the function owner must have USAGE rights on the service being called.
If you run multiple service instances, Snowflake automatically load-balances requests across the nodes; the `MAX_BATCH_ROWS` parameter limits how many rows Snowflake sends to the service in each batch:

```sql
CREATE FUNCTION my_echo_udf (text VARCHAR)
RETURNS VARCHAR
SERVICE = Service_C
ENDPOINT = echoendpoint
MAX_BATCH_ROWS = 100
AS '/echo';
```

SPCS Service to Snowflake Communication#
A service can call and access data in your Snowflake account. The service automatically uses the service owner role to connect to the underlying Snowflake account using temporary credentials automatically provisioned by Snowflake OAuth.
A temporary user is created (monitorable via LOGIN_HISTORY and QUERY_HISTORY), and a temporary access token valid for 1 hour is written to /snowflake/session/token for every container. This access token is:
- Valid only to call Snowflake internally via `SNOWFLAKE_HOST`
- Restricted to that specific account, role, and user
- Cannot be used outside your Snowflake SPCS cluster
Once the service establishes a session using any Snowflake driver/connector, Snowflake session policies apply for session duration.
Working with the Session Token#
Snowflake writes the OAuth token to /snowflake/session/token inside every container. The file is refreshed automatically by the SPCS runtime – the token is valid for up to 1 hour, and Snowflake updates the file in-place before the current token expires. Your service should re-read the file periodically (every few minutes is sufficient) rather than caching the token value at startup.
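A minimal sketch of this re-read pattern — caching on the file's modification time so the token is only re-parsed when the SPCS runtime rewrites it:

```python
import os

TOKEN_PATH = '/snowflake/session/token'
_cache = {'mtime': None, 'token': None}

def read_token(path: str = TOKEN_PATH) -> str:
    """Return the current OAuth token, re-reading the file only when
    the runtime has rewritten it (detected via the file's mtime)."""
    mtime = os.stat(path).st_mtime
    if _cache['mtime'] != mtime:
        with open(path) as f:
            _cache['token'] = f.read().strip()
        _cache['mtime'] = mtime
    return _cache['token']
```

Call `read_token()` just before opening each new connection rather than holding the token in a long-lived variable.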
Environment variables SNOWFLAKE_HOST, SNOWFLAKE_ACCOUNT, SNOWFLAKE_DATABASE, and SNOWFLAKE_SCHEMA are auto-injected into every container, so connecting to Snowflake requires no hardcoded configuration:
```python
import snowflake.connector
import os

def get_snowflake_connection():
    # Read the OAuth token provisioned by SPCS for this container
    with open('/snowflake/session/token', 'r') as f:
        token = f.read().strip()
    return snowflake.connector.connect(
        host=os.environ['SNOWFLAKE_HOST'],
        account=os.environ['SNOWFLAKE_ACCOUNT'],
        authenticator='oauth',
        token=token,
        database=os.environ.get('SNOWFLAKE_DATABASE'),
        schema=os.environ.get('SNOWFLAKE_SCHEMA'),
    )
```

For JDBC drivers, the connection string follows the same pattern: `jdbc:snowflake://{SNOWFLAKE_HOST}/?authenticator=oauth&token={token}`.
Caller Identity Passthrough#
By default, the session token runs as the service owner’s role – the role that created the service. But many applications need to know who is calling and execute queries as that user. SPCS supports this through identity headers and caller’s rights.
Identity headers on ingress requests:
When a user accesses a public endpoint, Snowflake’s ingress proxy injects identity headers after authenticating the user. These headers are trustworthy – the proxy strips any client-supplied headers with these names, so they cannot be spoofed:
| Header | Content |
|---|---|
| `Sf-Context-Current-User` | Username of the authenticated caller |
| `Sf-Context-Current-User-Email` | Caller’s email (if enabled) |
Your service can read Sf-Context-Current-User to implement user-specific logic, audit logging, or authorization decisions – without the caller needing to send any credentials beyond their Snowflake authentication.
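One way a service might consume the header — the allowlist here is an illustrative application-level check, not an SPCS feature:

```python
def caller_from_headers(headers, allowed_users=None):
    """Return the authenticated Snowflake username from the
    proxy-injected header, optionally enforcing an app-level allowlist.

    The header is safe to trust: the ingress proxy strips any
    client-supplied copies before injecting its own.
    """
    user = headers.get('Sf-Context-Current-User')
    if user is None:
        raise PermissionError('request did not arrive via the SPCS ingress proxy')
    if allowed_users is not None and user not in allowed_users:
        raise PermissionError(f'user {user} is not authorized')
    return user
```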
Caller’s rights – executing queries as the calling user:
To go further and run Snowflake queries as the calling user (not the service owner), enable executeAsCaller in the service spec:
```yaml
spec:
  containers:
  - name: my-app
    image: ...
capabilities:
  securityContext:
    executeAsCaller: true
```

With this enabled, the ingress proxy also injects `Sf-Context-Current-User-Token` – a JWT for the calling user. Combine it with the service’s own session token (dot-separated) to connect as that user:
```python
import os
import snowflake.connector

def get_caller_connection(request):
    """Connect to Snowflake as the calling user, not the service owner."""
    with open('/snowflake/session/token') as f:
        service_token = f.read().strip()
    user_token = request.headers.get('Sf-Context-Current-User-Token')
    combined_token = f"{service_token}.{user_token}"
    return snowflake.connector.connect(
        host=os.environ['SNOWFLAKE_HOST'],
        account=os.environ['SNOWFLAKE_ACCOUNT'],
        authenticator='oauth',
        token=combined_token,
    )
```

This is powerful: the service can maintain two connections simultaneously – one as the service owner (for shared config tables, logging, service-level operations) and one as the calling user (for user-specific queries where Snowflake’s RBAC applies to that user’s role). The caller’s data governance policies – masking, row access policies, RBAC grants – apply automatically.
Note: Caller’s rights is supported for ingress endpoints only. Service functions do not receive caller identity headers.
For a working implementation that decodes both tokens, tracks refresh intervals, and generates ready-to-use JDBC connection strings, see the spcs-token-inspector tool.
SPCS Service to Other Snowflake Accounts#
When a service needs to connect to a different Snowflake account, the customer must:
- Enable egress access via an External Access Integration.
- Allow-list the other Snowflake account in the EAI.
- Provide user identity and credentials via secrets in the service specification.
The connection is established over TLS 1.2+ via the Snowflake-managed egress proxy cluster.
SPCS Ingress Access#
For customers building long-running applications that need to be accessed publicly, the Snowflake ingress endpoint is defined in service specs by setting public: true on an endpoint. Snowflake automatically provisions and protects this endpoint in the background.
Snowflake automatically protects public endpoints with the following controls:
- Public endpoints are behind Snowflake load balancers and the ingress proxy. No additional infrastructure needed.
- All communications are carried over TLS 1.2+.
- Users and roles can be automatically provisioned via SCIM.
- Authenticated access only: Snowflake requires valid credentials to access public endpoints. Username/password, OAuth, SAML/SSO, and Keypair authentication are all supported.
- Snowflake access controls automatically apply.
- Customers can restrict access to the ingress endpoint using Snowflake network policies.
- Ingress timeout: Connections to ingress endpoints have a 90-second idle timeout. Use polling or WebSockets for long-lived connections.
Programmatic Access with PAT#
Programmatic Access Tokens (PAT) provide the simplest way to access SPCS public endpoints programmatically. PATs can be used directly as a bearer token in the Authorization header – no intermediate token exchange step required:
```shell
curl -X POST https://<ingress-hostname>/endpoint \
  -H 'Authorization: Snowflake Token="<your-pat-token>"' \
  -H 'Content-Type: application/json' \
  -d '{"data": [[0, "hello"]]}'
```

This replaces the earlier flow where a PAT had to be exchanged for a scoped SPCS access token via the /oauth/token endpoint (grant_type=urn:ietf:params:oauth:grant-type:token-exchange). That token exchange flow still works and remains useful if you need to scope down to a specific role or obtain a short-lived token, but it is no longer required for standard SPCS endpoint access.
For keypair JWT-based authentication, the token exchange step is still required. PAT is the recommended approach for new integrations due to its simplicity.
Setup requirements:
```sql
-- Create a PAT for the user
ALTER USER <user_name> ADD PROGRAMMATIC ACCESS TOKEN <token_name>;

-- If an authentication policy applies, ensure PAT is allowed
ALTER AUTHENTICATION POLICY <policy_name>
  SET AUTHENTICATION_METHODS = ('PASSWORD', 'PROGRAMMATIC_ACCESS_TOKEN');
```

The user’s role must have USAGE on the service endpoint. Snowflake’s RBAC model applies to PAT-authenticated requests identically to browser-authenticated sessions.
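The same call can be made from Python using only the standard library. A sketch — the hostname and endpoint path are placeholders for your own service:

```python
import json
import urllib.request

def build_pat_request(hostname: str, path: str, pat: str, rows: list) -> urllib.request.Request:
    """Build a POST to an SPCS public endpoint authenticated with a PAT.

    The PAT goes directly in the Authorization header using the
    'Snowflake Token="..."' scheme - no token exchange step needed.
    """
    return urllib.request.Request(
        f'https://{hostname}{path}',
        data=json.dumps({'data': rows}).encode(),
        headers={
            'Authorization': f'Snowflake Token="{pat}"',
            'Content-Type': 'application/json',
        },
        method='POST',
    )

# To send: urllib.request.urlopen(build_pat_request(...), timeout=30)
```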
Ingress Access Request Security Control#
The ingress proxy filters and hardens all traffic entering your service:
Incoming requests:
- Banned HTTP methods are blocked: `TRACE`, `CONNECT`
- `X-SF-SPCS-Authorization` and Snowflake bearer tokens in the `Authorization` header are scrubbed before forwarding to your service

Outgoing responses:
- Sensitive server headers are scrubbed (`X-XSS-Protection`, `Server`, `X-Powered-By`, `Public-Key-Pins`)
- Executable MIME types return a `403 Forbidden`
- Security headers are automatically injected: `X-Frame-Options: DENY`, `Cross-Origin-Opener-Policy: same-origin`, `Cross-Origin-Resource-Policy: same-origin`, `X-Content-Type-Options: nosniff`
- A baseline `Content-Security-Policy` is enforced by default: `default-src 'self' 'unsafe-inline' 'unsafe-eval' blob: data:; object-src 'none'; connect-src 'self'; frame-ancestors 'self';`
- When an EAI is configured, the CSP is automatically extended to allow the web page in the browser to access the same egress destinations as the service.
CORS for Ingress Endpoints#
CORS headers on ingress responses are fully managed by the Snowflake proxy – your service code does not need to handle preflight requests or inject CORS headers. Configure allowed origins, methods, and headers per endpoint via corsSettings in the service spec. For the full configuration walkthrough, frontend code example, and common pitfalls, see the CORS Configuration section above.
SPCS Egress Access#
Services that need to access external services and APIs use External Access Integrations. A customer creates egress network rules specifying the external destination by hostname:port or IP address and port. Those rules are cryptographically signed by the cloud service layer using the customer context and installed in Snowflake-managed egress proxies – they cannot be spoofed by workers.
```sql
CREATE NETWORK RULE my_network_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('api.example.com:443');

CREATE EXTERNAL ACCESS INTEGRATION my_eai
  ALLOWED_NETWORK_RULES = (my_network_rule)
  ENABLED = TRUE;
```

SPCS supports egress to ports 22, 80, 443, and 1024+.
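To make the rule semantics concrete, here is an illustrative model of how a `HOST_PORT` egress rule constrains destinations. This only sketches the behavior for reasoning purposes — actual enforcement happens in the Snowflake-managed egress proxies, never in your container:

```python
ALLOWED_EGRESS_PORTS = {22, 80, 443}

def port_supported(port: int) -> bool:
    """SPCS supports egress only to ports 22, 80, 443, and 1024+."""
    return port in ALLOWED_EGRESS_PORTS or port >= 1024

def egress_allowed(rules: list, host: str, port: int) -> bool:
    """Model of HOST_PORT network-rule matching: the destination must be
    on a supported port AND match a rule in the attached EAI."""
    return port_supported(port) and f'{host}:{port}' in rules
```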
Stable Egress IP Addresses#
By default, outbound traffic from SPCS uses dynamic IP addresses that can change. For customers whose external endpoints require firewall allowlisting by source IP, Snowflake supports stable egress IPs. When enabled, SPCS routes outbound traffic through a fixed set of IP addresses that remain consistent across service restarts and node replacements.
Customers can retrieve their allocated stable egress IPs using:
```sql
SELECT SYSTEM$GET_SNOWFLAKE_PLATFORM_INFO();
```

This is essential for connecting to on-premises systems, SaaS vendors, or partner APIs that enforce IP-based access controls. Rather than allowlisting broad CIDR ranges that change without notice, customers provide a small, stable set of IPs to their network or security team.
Customer-Configured mTLS for Egress#
Beyond Snowflake’s automatic mTLS for internal service-to-service communication, customers can optionally configure mutual TLS (mTLS) for egress connections to their own external endpoints. This allows both the SPCS service and the receiving endpoint to authenticate each other using certificates – the service presents a client certificate to the external endpoint, and the external endpoint presents its server certificate back.
This is particularly valuable for enterprise integrations where the external API or on-premises service requires client certificate authentication in addition to (or instead of) API keys or tokens. Customers store their client certificates and private keys as Snowflake Secrets and reference them in the External Access Integration.
Outbound Egress via Private Connectivity#
Instead of routing egress through the public internet, customers can now direct service egress traffic through a private connectivity endpoint. This keeps traffic on the cloud provider’s private network:
```sql
SELECT SYSTEM$PROVISION_PRIVATELINK_ENDPOINT(
  'com.amazonaws.us-west-2.s3',
  '*.s3.us-west-2.amazonaws.com'
);

CREATE NETWORK RULE private_link_rule
  MODE = EGRESS
  TYPE = PRIVATE_HOST_PORT
  VALUE_LIST = ('mybucket.s3.us-west-2.amazonaws.com');
```

Private egress connectivity is available for AWS, Azure, and Google Cloud. Private communication requires that both Snowflake and the customer’s cloud account use the same cloud provider and region.
SPCS Private Connectivity#
Snowflake now supports inbound private connectivity for all three SPCS network endpoints, eliminating the requirement to route SPCS traffic over the public internet.
Configuring Private Connectivity#
- First, configure private connectivity to your Snowflake account.
- Call `SYSTEM$GET_PRIVATELINK_CONFIG` to retrieve SPCS-specific hostnames:
| Key | Purpose |
|---|---|
| `spcs-registry-privatelink-url` | Private routing to the image registry |
| `app-service-privatelink-url` | Wildcard hostname for service public endpoints |
| `spcs-auth-privatelink-url` | Hostname for SPCS authentication routing |
- Create CNAME records in your DNS so these hostnames resolve over your private network.
This enables:
- Image pushes/pulls to flow over private connectivity instead of the public internet
- Service public endpoint access to flow over private connectivity
- SPCS authentication to flow over private connectivity
For complete setup instructions, see Configuring private connectivity.
Tunneling Into and Out of SPCS#
Private connectivity and EAIs cover the official networking paths. But sometimes you need to reach infrastructure that isn’t exposed through PrivateLink or a public API – an on-prem database, a home lab, a development machine, or a service behind a corporate firewall. SPCS allows egress on ports 22, 80, 443, and 1024+, which is all you need to establish a tunnel.
Several tunneling approaches have been proven to work with SPCS:
SSH Tunnels#
The most straightforward approach. Your SPCS container initiates an outbound SSH connection to a server you control, establishing SOCKS proxies (-D) for outbound traffic, local port forwards (-L) to reach specific services like databases, and reverse port forwards (-R) to expose container services back on your server. Use autossh for automatic reconnection. Store the SSH private key as a base64-encoded Snowflake Secret, decode it at runtime, and clean up the environment variables after the tunnels are established.
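A sketch of the key-handling step described above — decoding a base64-encoded private key from an environment variable backed by a Snowflake Secret, writing it with tight permissions, and scrubbing the variable afterwards. The variable name, key path, and autossh invocation are illustrative:

```python
import base64
import os
import stat

def prepare_ssh_key(env: dict, var: str = 'SSH_KEY_B64',
                    dest: str = '/tmp/tunnel_key') -> str:
    """Decode a base64-encoded private key from the container environment,
    write it 0600 (ssh refuses group/world-readable keys), and remove the
    variable so it does not leak to child processes."""
    key = base64.b64decode(env.pop(var))
    with open(dest, 'wb') as f:
        f.write(key)
    os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR)  # 0600
    return dest

# Then hand the key to autossh, e.g.:
# subprocess.Popen(['autossh', '-M', '0', '-N', '-i', key_path,
#                   '-D', '1080', 'tunnel@your-server'])
```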
For a full walkthrough with Dockerfile, tunnel scripts, nginx reverse proxy, service function examples, and security hardening, see my article: SSH Tunnels from Snowflake Container Services. For the original SSH tunnel architecture used for querying on-prem data lakes, see Query Your On-Premise DataLake Through a Private Tunnel with Snowflake.
WebSocket Tunnels#
An alternative that works entirely over HTTPS (port 443). A lightweight Python agent running in your data centre opens an outbound WebSocket connection to an SPCS endpoint – the same way a browser connects to any website. No inbound firewall rules needed. The agent multiplexes TCP connections over the WebSocket, allowing SPCS services to reach on-prem databases, REST APIs, or any TCP service through the tunnel.
This approach is particularly interesting for data sovereignty scenarios where you need to query on-prem Apache Iceberg catalogs or databases without the data leaving your infrastructure. The WebSocket tunnel carries only queries and results – the data stays on-prem.
For the full architecture including multi-catalog support and PAT-based authentication, see: Data Sovereignty First: Integrating an On-Premise Apache Iceberg into Snowflake Through a Direct Outbound Tunnel.
Tailscale#
Tailscale creates a WireGuard-based mesh VPN. Run the Tailscale client inside your SPCS container and it joins your tailnet – giving the container a stable IP address and direct access to any other machine on your Tailscale network.
One important detail: SPCS containers run as unprivileged user-mode processes with minimal capabilities. The NET_ADMIN capability needed to create kernel-level network interfaces (tun/tap devices) is not available. This means standard WireGuard and traditional VPN solutions that require kernel-level networking will not work. Tailscale solves this with its userspace networking mode (--tun=userspace-networking), which operates entirely in user space without needing NET_ADMIN or root. Any WireGuard-based solution deployed in SPCS must use a similar userspace approach.
Vladimir Timofeenko wrote a detailed guide on this approach: Connecting to Tailscale on Snowpark Container Services.
ngrok#
ngrok creates secure tunnels to localhost. Run the ngrok agent inside your SPCS container and it exposes container services on a public ngrok URL – bypassing Snowflake’s ingress authentication if you need unauthenticated access or a stable public URL for webhooks and integrations.
Brad Culberson explored this approach: Public Endpoints in Snowpark Container Services.
Choosing an Approach#
| Approach | Egress Port | Direction | Auth Required On Remote Side | Best For |
|---|---|---|---|---|
| SSH tunnel | 22 | Bidirectional (SOCKS, -L, -R) | SSH key | Full control, on-prem databases, reverse access |
| WebSocket tunnel | 443 | Inbound to SPCS (agent connects out) | PAT | Data sovereignty, restrictive firewalls, HTTPS-only |
| Tailscale | 443 (DERP relay) | Mesh (any direction) | Tailscale auth | Existing tailnet, multi-service mesh |
| ngrok | 443 | Outbound (expose container services) | ngrok account | Webhooks, public URLs, quick dev access |
All four approaches rely on the container initiating the outbound connection – SPCS has no inbound networking. The choice depends on your network constraints, existing infrastructure, and whether you need bidirectional access or just one-way connectivity.
Conclusion#
Snowpark Container Services simplifies and hardens the traditional container service model. Customers focus on their applications while Snowflake provisions and maintains secure-by-default container infrastructure across AWS, Microsoft Azure, and Google Cloud Platform.
Key security capabilities delivered by Snowflake:
| Capability | Status |
|---|---|
| Automated OS and driver patching | Generally available |
| mTLS for inter-service communication | Generally available |
| Role-based network isolation (Kubernetes network policies) | Generally available |
| Ingress proxy with CSP, header scrubbing, banned method filtering | Generally available |
| Egress control via External Access Integrations | Generally available |
| Stable egress IP addresses for firewall allowlisting | Generally available |
| Customer-configured mTLS for egress to external endpoints | Generally available |
| Private connectivity for image registry and service endpoints | Generally available |
| Private egress connectivity (AWS, Azure, GCP) | Generally available |
| CORS configuration per service endpoint | Generally available |
| Block storage with Tri-Secret Secure (TSS) support | Generally available |
| Programmatic Access Tokens (PAT) for Snowflake API calls | Generally available |
| Compute Pool Metrics with enterprise monitoring integration | Generally available |
| Workload Identity Federation | Generally available |
| Native Apps + SPCS (AWS, Azure, GCP) | Generally available |
| FedRAMP on AWS for Native Apps with containers | Generally available |
Snowflake has kept security at the core of its container offering to more efficiently deploy and scale full-stack applications, LLMs, and other containerized workloads in a business and data owner trusted environment.
Parts of the security architecture sections in this article were inspired by a 2024 SPCS security white paper by Seth Youssef, Security Field CTO at Snowflake.
