
SSH Tunnels from Snowflake Container Services — Bidirectional Access to Your DMZ, Home Lab, or PC


Snowflake Container Services (SPCS) runs your containers inside Snowflake’s managed infrastructure. Networking is locked down — no inbound connections, and egress goes through Snowflake’s network. But SSH doesn’t care. If your container can reach port 22 on a machine you control, you can tunnel anything back.

This article shows how to set up a persistent, bidirectional SSH tunnel from an SPCS container to your DMZ, home server, or workstation. Once the tunnel is up, traffic flows both ways — SOCKS proxy out, reverse port forwards back in. Add nginx on the receiving end and you can expose SPCS services on your own domain with SSL, as if they were running locally.


Why This Works
#

SSH tunnels are established outbound from the container. SPCS allows egress to external hosts (if you configure the external access integration), so the container initiates the connection to your SSH server. Once the tunnel is established:

  • Forward direction (container → your server): A SOCKS proxy (-D 1080) lets anything inside the container route traffic through your server’s network. The container can now reach resources on your LAN, your home network, or anywhere your server can reach.
  • Reverse direction (your server → container): Reverse port forwards (-R) expose container ports on your server. SSH into the container, hit a Streamlit app, access a REST API, open a web terminal — all through the tunnel.

You’re SSH-ing over SSH. The initial tunnel is just a transport layer. Once it’s up, the full bidirectional capability of SSH is available.


Architecture
#

┌──────────────────────────────────────────────────────────────┐
│  Snowflake Container Services (SPCS)                         │
│                                                              │
│  ┌────────────────────────────────────────────────────────┐  │
│  │  Your Container                                        │  │
│  │                                                        │  │
│  │  Streamlit    (:3002)  ← query on-prem Postgres        │  │
│  │  REST API     (:3003)  ← service function target       │  │
│  │  Web Terminal (:7681)  ← ttyd, browser-based shell     │  │
│  │  SSH server   (:22)                                    │  │
│  │                                                        │  │
│  │  autossh ───────────────────────────────────────────┐  │  │
│  │    -D 1080  (SOCKS proxy → DMZ network)             │  │  │
│  │    -R 9001 → localhost:3002  (Streamlit)            │  │  │
│  │    -R 9002 → localhost:3003  (REST API)             │  │  │
│  │    -R 9003 → localhost:7681  (Web Terminal)         │  │  │
│  │    -R 9004 → localhost:22    (SSH back in)          │  │  │
│  └─────────────────────────────────────────────────────│──┘  │
│                                                        │     │
└────────────────────────────────────────────────────────│─────┘
                                                         │ outbound SSH
                                  ┌──────────────────────▼─────────┐
                                  │  Your DMZ / Home Server        │
                                  │                                │
                                  │  nginx (SSL termination)       │
                                  │    pg-explorer.you.com → :9001 │
                                  │    api.you.com         → :9002 │
                                  │    term.you.com        → :9003 │
                                  │                                │
                                  │  ssh user@localhost -p 9004    │
                                  │   → lands inside SPCS          │
                                  └────────────────────────────────┘

The Container Image
#

Everything you need in one Dockerfile — SSH client, autossh, ttyd, Python with Streamlit/FastAPI/psycopg2, and an SSH server for reverse access. Build this, push it to your SPCS image repository, and you have a ready-to-go tunnel container:

FROM ubuntu:22.04

ARG DEBIAN_FRONTEND=noninteractive

# --- System packages ---
RUN apt-get update && apt-get install -y --no-install-recommends \
    openssh-client openssh-server autossh \
    python3 python3-venv python3-pip \
    libpq-dev \
    curl wget git jq vim htop net-tools procps \
    && rm -rf /var/lib/apt/lists/*

# --- ttyd (web terminal) ---
RUN curl -L https://github.com/tsl0922/ttyd/releases/download/1.7.7/ttyd.x86_64 \
    -o /usr/local/bin/ttyd && chmod +x /usr/local/bin/ttyd

# --- Create a non-root user ---
RUN useradd -m -s /bin/bash spcstunnel

# --- SSH server setup (for reverse SSH access) ---
RUN mkdir /var/run/sshd \
    && echo 'PermitRootLogin no' >> /etc/ssh/sshd_config \
    && echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config \
    && echo 'PubkeyAuthentication yes' >> /etc/ssh/sshd_config
# Add your public key at runtime or bake it in:
# COPY authorized_keys /home/spcstunnel/.ssh/authorized_keys

# --- Python environment ---
RUN python3 -m venv /home/spcstunnel/venv
RUN /home/spcstunnel/venv/bin/pip install --no-cache-dir \
    uv
RUN /home/spcstunnel/venv/bin/uv pip install --no-cache-dir \
    streamlit fastapi uvicorn httpx[socks] psycopg2

# --- Working directories ---
RUN mkdir -p /home/spcstunnel/streamlit_apps \
    && chown -R spcstunnel:spcstunnel /home/spcstunnel

# --- Copy application code ---
COPY tunnel.sh /home/spcstunnel/tunnel.sh
COPY entrypoint.sh /home/spcstunnel/entrypoint.sh
COPY restapi.py /home/spcstunnel/restapi.py
# COPY streamlit_apps/ /home/spcstunnel/streamlit_apps/
RUN chmod +x /home/spcstunnel/tunnel.sh /home/spcstunnel/entrypoint.sh

EXPOSE 22 3002 3003 7681

ENTRYPOINT ["/home/spcstunnel/entrypoint.sh"]

What’s in the box:

| Component | Purpose |
| --- | --- |
| openssh-client + autossh | Outbound tunnels to your DMZ |
| openssh-server | Reverse SSH access back into the container |
| ttyd | Browser-based terminal via SPCS web endpoint |
| python3-venv + uv | Fast Python package management |
| streamlit, fastapi, uvicorn | Streamlit for querying on-prem resources through the tunnel, FastAPI for service functions |
| psycopg2 | PostgreSQL driver — query on-prem Postgres through the SOCKS proxy |
| httpx[socks] | HTTP client that can route through the SOCKS proxy |
| libpq-dev | PostgreSQL C library, required for psycopg2 |
| curl, jq, vim, htop, net-tools | Debugging and inspection tools |

Push the image to your SPCS repository:

docker build -t my-tunnel-container .
docker tag my-tunnel-container <org>-<account>.registry.snowflakecomputing.com/my_db/my_schema/my_repo/my-tunnel-container:latest
docker push <org>-<account>.registry.snowflakecomputing.com/my_db/my_schema/my_repo/my-tunnel-container:latest

Step 1: Store Your SSH Private Key as a Snowflake Secret
#

The private key can’t live in your container image (that’s a leaked secret waiting to happen). Instead, base64-encode it and store it in a Snowflake Secret. SPCS can expose secrets as environment variables inside the container.

Encode the key
#

base64 -w 0 ~/.ssh/id_ed25519 > key_b64.txt  # GNU base64; on macOS: base64 -i ~/.ssh/id_ed25519 | tr -d '\n'
cat key_b64.txt
# Copy this value — you'll paste it into the Snowflake secret
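The round trip is worth sanity-checking before you paste anything into a secret, since the secret string must be a single line (hence `-w 0`). The same encode/decode cycle, sketched in Python with placeholder key material:

```python
import base64

# Placeholder standing in for your real key file's contents
private_key = b"-----BEGIN OPENSSH PRIVATE KEY-----\n...key material...\n-----END OPENSSH PRIVATE KEY-----\n"

# What you do locally before creating the secret
encoded = base64.b64encode(private_key).decode("ascii")
assert "\n" not in encoded  # single line, safe to paste into SECRET_STRING

# What tunnel.sh does inside the container
decoded = base64.b64decode(encoded)
assert decoded == private_key  # lossless, newlines in the key preserved
```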

Create the secret in Snowflake
#

CREATE OR REPLACE SECRET my_db.my_schema.ssh_tunnel_key
  TYPE = GENERIC_STRING
  SECRET_STRING = '<paste your base64-encoded key here>';

Store additional connection details as secrets too:

CREATE OR REPLACE SECRET my_db.my_schema.ssh_tunnel_user
  TYPE = GENERIC_STRING
  SECRET_STRING = 'your_ssh_username';

CREATE OR REPLACE SECRET my_db.my_schema.ssh_tunnel_host
  TYPE = GENERIC_STRING
  SECRET_STRING = 'your.dmz.server.com';

Step 2: Create an External Access Integration
#

SPCS needs explicit permission to make outbound connections. Create an external access integration that allows egress to your SSH server:

CREATE OR REPLACE NETWORK RULE ssh_egress_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('your.dmz.server.com:22');

CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION ssh_tunnel_integration
  ALLOWED_NETWORK_RULES = (ssh_egress_rule)
  ENABLED = TRUE;

Step 3: Reference Secrets in Your Service Spec
#

In your SPCS service specification, mount the secrets as environment variables:

spec:
  containers:
    - name: tunnel-container
      image: /my_db/my_schema/my_repo/my-tunnel-container:latest
      secrets:
        - snowflakeSecret: my_db.my_schema.ssh_tunnel_key
          secretKeyRef: secret_string
          envVarName: SSH_KEY_B64
        - snowflakeSecret: my_db.my_schema.ssh_tunnel_user
          secretKeyRef: secret_string
          envVarName: SSH_USER
        - snowflakeSecret: my_db.my_schema.ssh_tunnel_host
          secretKeyRef: secret_string
          envVarName: DMZ_IP

Attach the external access integration when you create the service, so the egress you configured in Step 2 is actually permitted:

CREATE SERVICE my_db.my_schema.tunnel_service
  IN COMPUTE POOL my_pool
  FROM SPECIFICATION $$ ... $$
  EXTERNAL_ACCESS_INTEGRATIONS = (ssh_tunnel_integration);

The secrets arrive as plain environment variables inside the container. No file mounts, no volume magic — just $SSH_KEY_B64, $SSH_USER, and $DMZ_IP.


Step 4: The Tunnel Script
#

This is the core. The container’s entrypoint runs this script to establish the tunnels using fixed ports on the DMZ side (so nginx can reliably proxy to them):

#!/bin/bash

# Decode the private key from the environment variable
# (quote the variable so the base64 payload survives word splitting)
DECODED_KEY="$(printf '%s' "$SSH_KEY_B64" | base64 --decode)"

# Pick random ports for autossh monitoring only
MON1=$(shuf -i 2000-65000 -n 1)
MON2=$(shuf -i 2000-65000 -n 1)

# --- Tunnel 1: SOCKS proxy + local port forward for on-prem Postgres ---
# -D 1080: SOCKS proxy — container routes HTTP traffic through DMZ network
# -L 5432: forward container's localhost:5432 to on-prem Postgres via DMZ
ssh-agent bash -c "\
  ssh-add <(echo \"$DECODED_KEY\") && \
  autossh -M $MON1 \
    -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=60 \
    -o ExitOnForwardFailure=yes \
    -gnNT -D 1080 -C \
    -L 5432:postgres.internal:5432 \
    $SSH_USER@$DMZ_IP" &

# --- Tunnel 2: Reverse port forwards (DMZ → container services) ---
# Fixed ports so nginx on the DMZ can proxy_pass to them
ssh-agent bash -c "\
  ssh-add <(echo \"$DECODED_KEY\") && \
  autossh -M $MON2 \
    -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=60 \
    -o ExitOnForwardFailure=yes \
    -gnNT -C \
    -R 9001:localhost:3002 \
    -R 9002:localhost:3003 \
    -R 9003:localhost:7681 \
    -R 9004:localhost:22 \
    $SSH_USER@$DMZ_IP" &

What each forward does:

| Direction | Flag | Effect |
| --- | --- | --- |
| Container → DMZ | -D 1080 | SOCKS proxy — container routes HTTP traffic through DMZ network |
| Container → DMZ | -L 5432:postgres.internal:5432 | On-prem Postgres reachable at container’s localhost:5432 |
| DMZ → Container | -R 9001:localhost:3002 | Streamlit (Postgres explorer) accessible on DMZ port 9001 |
| DMZ → Container | -R 9002:localhost:3003 | REST API accessible on DMZ port 9002 |
| DMZ → Container | -R 9003:localhost:7681 | Web terminal (ttyd) on DMZ port 9003 |
| DMZ → Container | -R 9004:localhost:22 | SSH into the container from DMZ |

Why autossh: SPCS containers can run for hours or days. Plain SSH tunnels die silently on network blips. autossh monitors the connection and restarts it automatically.

Why fixed ports: Using fixed reverse-forward ports (9001–9004) instead of random ones means nginx on the DMZ can have a static configuration. No dynamic discovery, no reloads.
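A related refinement, not in the original script: instead of sleeping a fixed interval before treating the tunnels as established, poll until the tunnel's local ports actually accept connections. A minimal Python sketch:

```python
import socket
import time

def wait_for_port(port: int, host: str = "127.0.0.1", timeout: float = 30.0) -> bool:
    """Poll until host:port accepts a TCP connection, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.25)
    return False

# Example: the SOCKS listener (-D 1080) binding locally is a decent
# signal that the first tunnel is up
# ready = wait_for_port(1080)
```

In the entrypoint, calling this before unsetting the secrets replaces the blind `sleep 5`.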


Step 5: Clean Up Environment Variables After Startup
#

Here’s the part people miss: the secrets stay in the environment after your tunnel script runs. Any process in the container — Streamlit, a REST API, the web terminal — can read $SSH_KEY_B64 from its environment. That’s your private key, sitting in plaintext (well, base64) in every process’s /proc/self/environ.

Fix this by unsetting the sensitive variables once the tunnels are established. Add this to your entrypoint, and to /etc/bash.bashrc so interactive shells don’t leak them either:

# Wait for tunnels to establish
sleep 5

# Unset sensitive environment variables
unset SSH_KEY_B64
unset SSH_USER
unset DMZ_IP

# Also prevent interactive shells from seeing them
cat >> /etc/bash.bashrc << 'EOF'
# Clean up secrets injected by SPCS — tunnels already established
unset SSH_KEY_B64
unset SSH_USER
unset DMZ_IP
EOF

Why /etc/bash.bashrc?
#

When you SSH back into the container (through the reverse tunnel) or open a web terminal session, you get a new bash shell. That shell inherits the container’s original environment — including the secrets. Adding unset commands to /etc/bash.bashrc ensures every new interactive shell strips them out.

Note: This doesn’t protect against reading /proc/1/environ (the init process’s environment). But it prevents casual leakage through web terminals, Streamlit subprocess spawning, or anyone who SSHes in and runs env. For defense in depth, consider writing the decoded key to a temporary file with chmod 600, using it for the SSH commands, then shred-ing the file.
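That defense-in-depth variant can be sketched in Python: write the decoded key to a 0600 file, use it, then overwrite it before deleting. The overwrite is best-effort (journaled and copy-on-write filesystems may keep old blocks), and the path and function names here are illustrative:

```python
import os

def write_key_file(key_material: bytes, path: str = "/tmp/tunnel_key") -> str:
    # Create the file with owner-only permissions from the start,
    # rather than chmod-ing after the bytes already hit disk
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(key_material)
    return path

def scrub_key_file(path: str) -> None:
    # Best-effort: overwrite the contents with zeros before unlinking
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.unlink(path)
```

One trade-off to keep in mind: autossh re-reads the key on every reconnect if you use `ssh -i`, so either keep the ssh-agent approach for long-lived tunnels or scrub only after the key is loaded into the agent.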


Step 6: Web Terminal with ttyd
#

ttyd is a single binary that serves a terminal session in the browser over WebSockets. It’s perfect for SPCS — you get a shell inside your container without needing SSH, accessible through the SPCS web endpoint or through the reverse tunnel.

Add ttyd to your container image
#

RUN curl -L https://github.com/tsl0922/ttyd/releases/download/1.7.7/ttyd.x86_64 \
    -o /usr/local/bin/ttyd && chmod +x /usr/local/bin/ttyd

Start it in your entrypoint
#

# Web terminal on port 7681 — accessible via SPCS endpoint or reverse tunnel
ttyd --writable -p 7681 bash &

Key flags:

| Flag | Purpose |
| --- | --- |
| --writable / -W | Allow input (without this, the terminal is read-only) |
| -p 7681 | Listening port |
| -c user:pass | Basic auth — use this if exposed without nginx auth |
| --once | Exit after the client disconnects (one-shot sessions) |

Expose as an SPCS endpoint
#

Add the ttyd port to your service spec so it’s accessible via the SPCS web UI:

spec:
  containers:
    - name: tunnel-container
      image: /my_db/my_schema/my_repo/my-tunnel-container:latest
      # ... env vars ...
  endpoints:
    - name: webterminal
      port: 7681
      public: true

Now anyone with access to the SPCS service can open a browser tab and get a shell inside the container — useful for debugging, inspecting the tunnel state, checking running processes, or poking around.


Step 7: Query On-Prem Postgres — Streamlit UI and Service Function
#

The tunnel forwards your on-prem Postgres to localhost:5432 inside the container. Now anything in the container can query it — a Streamlit app for interactive exploration, or a FastAPI service function callable from Snowflake SQL.

Streamlit: on-prem Postgres explorer
#

A simple Streamlit app that lets you browse and query your on-prem database through the tunnel:

# pg_explorer.py
import streamlit as st
import psycopg2
import pandas as pd

st.title("On-Prem Postgres Explorer")

# Connect through the SSH tunnel (localhost:5432 → on-prem Postgres)
@st.cache_resource
def get_conn():
    return psycopg2.connect(
        host="localhost", port=5432,
        dbname="production", user="readonly", password="from-a-secret"
    )

conn = get_conn()

# List tables
cur = conn.cursor()
cur.execute("""
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY table_schema, table_name
""")
tables = cur.fetchall()
cur.close()

selected = st.selectbox("Table", [f"{s}.{t}" for s, t in tables])

# Query builder
query = st.text_area("SQL", f"SELECT * FROM {selected} LIMIT 100")
if st.button("Run"):
    df = pd.read_sql(query, conn)
    st.dataframe(df)
    st.write(f"{len(df)} rows")

Access it via the SPCS web endpoint on port 3002, or through the nginx reverse proxy at https://pg-explorer.yourdomain.com. You’re browsing on-prem Postgres tables from a Streamlit app running inside Snowflake.
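One caveat with a free-form SQL box: nothing stops a user from typing DROP TABLE. The readonly database user is the real enforcement, but a lightweight guard in the app is cheap insurance. A naive sketch, not part of the original app:

```python
def is_read_only(sql: str) -> bool:
    """Crude allowlist: accept only statements that start with SELECT or WITH.
    Database-level permissions remain the actual enforcement."""
    first_word = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    return first_word in ("SELECT", "WITH")

assert is_read_only("SELECT * FROM customers LIMIT 100")
assert not is_read_only("DROP TABLE customers")
```

In the Streamlit app, check `is_read_only(query)` before `pd.read_sql` and show `st.error(...)` otherwise.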

REST API as a service function target
#

A FastAPI service does the same thing, but callable from Snowflake SQL via a service function. The key thing to understand is Snowflake’s service function wire format: requests arrive as {"data": [[row_index, arg1, arg2, ...], ...]} and responses must follow the same shape. Each inner list is one row, and the first element is always the row index.

The REST API
#

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Any
import psycopg2
import json
import logging

logging.basicConfig(level=logging.INFO)

app = FastAPI()

# localhost:5432 reaches on-prem Postgres through the SSH tunnel's -L forward
ONPREM_PG = {
    "host": "localhost",
    "port": 5432,
    "dbname": "production",
    "user": "readonly",
    "password": "from-a-secret",  # use another Snowflake Secret for this
}


class ServiceFunctionRequest(BaseModel):
    """Snowflake service function request format.
    Each row is [row_index, arg1, arg2, ...].
    For a single-argument function like query_onprem_pg(sql VARCHAR),
    each row is [row_index, sql_string].
    """
    data: List[List[Any]]


@app.post("/query-onprem")
def query_onprem(request: ServiceFunctionRequest):
    try:
        results = []
        for row in request.data:
            row_index = row[0]
            sql = row[1]

            logging.info(f"Row {row_index}: executing query")
            conn = psycopg2.connect(**ONPREM_PG)
            cur = conn.cursor()
            cur.execute(sql)
            columns = [desc[0] for desc in cur.description]
            rows = [dict(zip(columns, r)) for r in cur.fetchall()]
            cur.close()
            conn.close()

            # Response must mirror the format: [row_index, result]
            results.append([row_index, rows])

        return {"data": results}

    except Exception as e:
        logging.exception("Query failed")
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/health")
def health():
    return {"status": "ok"}

The wire format explained:

── Snowflake sends ──────────────────────────────────────
POST /query-onprem
{
  "data": [
    [0, "SELECT id, name FROM customers LIMIT 5"],
    [1, "SELECT count(*) AS cnt FROM orders"]
  ]
}

── Your API returns ─────────────────────────────────────
{
  "data": [
    [0, [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}, ...]],
    [1, [{"cnt": 42}]]
  ]
}

Each row in data maps to one invocation of the service function. Snowflake batches multiple rows into a single HTTP request for efficiency – that’s why you loop over request.data rather than handling a single query.
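The batching contract can be exercised without Snowflake or Postgres in the loop. Here the database call is stubbed out; `handle_service_function` and `run_query` are illustrative names standing in for the FastAPI route and the psycopg2 logic above:

```python
from typing import Any, Callable, List

def handle_service_function(payload: dict, run_query: Callable[[str], Any]) -> dict:
    """Apply run_query to each batched row, preserving row indices.
    Mirrors the {"data": [[row_index, arg, ...], ...]} contract."""
    results: List[List[Any]] = []
    for row in payload["data"]:
        row_index, sql = row[0], row[1]
        results.append([row_index, run_query(sql)])
    return {"data": results}

# Stub executor instead of psycopg2, just to show the shape
fake_db = lambda sql: [{"echo": sql}]
response = handle_service_function(
    {"data": [[0, "SELECT 1"], [1, "SELECT 2"]]}, fake_db
)
# response["data"] == [[0, [{"echo": "SELECT 1"}]], [1, [{"echo": "SELECT 2"}]]]
```

If the row indices in the response don't match the request, Snowflake can't pair results back to input rows, so preserving `row[0]` is the one invariant you must not break.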

Create the service function in Snowflake
#

CREATE OR REPLACE FUNCTION my_db.my_schema.query_onprem_pg(sql VARCHAR)
  RETURNS VARIANT
  SERVICE = my_db.my_schema.tunnel_service
  ENDPOINT = 'restapi'
  AS '/query-onprem';

Now you can query your on-prem Postgres from Snowflake SQL:

-- Single query
SELECT my_db.my_schema.query_onprem_pg(
  'SELECT customer_id, name, status FROM customers LIMIT 10'
);

-- Or call it per-row from a table — Snowflake batches these automatically
SELECT query_text,
       my_db.my_schema.query_onprem_pg(query_text) AS result
FROM my_db.my_schema.queries_to_run;

A SQL query in Snowflake → calls a service function → hits the FastAPI endpoint in your SPCS container → routes through the SSH tunnel → queries your on-prem Postgres. Snowflake handles the batching, load balancing, and retries.


Step 8: Nginx Reverse Proxy on the DMZ
#

With reverse port forwards landing on fixed ports (9001–9004), nginx on the DMZ can proxy these SPCS services to proper domains with SSL termination. This is how you make a Postgres explorer running inside SPCS accessible at https://pg-explorer.yourdomain.com.

Why nginx, not raw ports?
#

  • SSL termination — the tunnel carries plain HTTP (which is fine, it’s localhost-to-localhost inside the SSH tunnel), but browsers and users need HTTPS.
  • Domain-based routing — one DMZ server, multiple SPCS services, each on its own subdomain.
  • WebSocket support — Streamlit and ttyd require WebSocket upgrades. Nginx handles this cleanly.
  • Authentication — add basic auth or an OAuth proxy in front of services that shouldn’t be open.

Nginx configuration
#

# /etc/nginx/sites-enabled/spcs-pg-explorer.conf
server {
    listen 443 ssl;
    server_name pg-explorer.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Streamlit requires WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
# /etc/nginx/sites-enabled/spcs-api.conf
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# /etc/nginx/sites-enabled/spcs-terminal.conf
server {
    listen 443 ssl;
    server_name terminal.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9003;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # ttyd requires WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}

Use a wildcard Let’s Encrypt cert (*.yourdomain.com) to cover all subdomains with one cert, or use certbot with individual certs per subdomain.

The full flow
#

Browser → https://pg-explorer.yourdomain.com
       → nginx (SSL termination)
       → 127.0.0.1:9001
       → SSH reverse tunnel
       → SPCS container :3002 (Streamlit → on-prem Postgres)

The user sees a Postgres explorer on a proper domain with a valid SSL cert. They have no idea the app is running inside Snowflake, querying an on-prem database through an SSH tunnel, proxied through nginx on a DMZ.


Step 9: Reverse SSH — Getting Back into the Container
#

The reverse port forward on port 9004 maps to the container’s SSH server on port 22. From your DMZ server (or anywhere that can reach the DMZ), you can SSH directly into the SPCS container:

# From the DMZ server itself
ssh spcstunnel@localhost -p 9004

# From your workstation (if DMZ is reachable)
ssh -J you@your-dmz-server spcstunnel@localhost -p 9004

SSH over SSH — the bidirectional trick
#

Once you’re inside the container via reverse SSH, you’re in the SPCS network. The original outbound tunnel is still running. You can now:

Start new tunnels from inside the container:

# Forward a database port from your LAN into the container
ssh -L 5432:postgres.internal:5432 you@your-dmz-server
# Now localhost:5432 inside the container reaches your internal Postgres

Use the container as a jump host to other SPCS services:

# If other SPCS services are on the same internal network
curl http://other-service.spcs-internal:8080

Transfer files through the tunnel:

# From DMZ, push a file into the container
scp -P 9004 data.csv spcstunnel@localhost:/home/spcstunnel/

# From inside the container, pull from DMZ
scp you@your-dmz-server:/path/to/file.csv /home/spcstunnel/

Set up the DMZ as an SSH jump host to reach deeper:

# From your laptop → DMZ → SPCS container, in one command
ssh -J you@your-dmz-server -p 9004 spcstunnel@localhost

The key insight: the initial outbound SSH tunnel is bidirectional by nature. Once the reverse forward exposes port 22, every SSH capability is available in the return direction — tunnels, port forwards, file transfers, jump hosts. You can chain SSH connections as deep as you need.


Full Entrypoint Example
#

Putting it all together:

#!/bin/bash

# --- Environment setup ---
# The venv and Python packages are baked into the image at build time;
# activating the environment is all that's needed here.
source /home/spcstunnel/venv/bin/activate

# --- Start SSH server (target of the -R 9004:localhost:22 reverse forward) ---
/usr/sbin/sshd

# --- Start web terminal ---
ttyd --writable -p 7681 bash &

# --- Start Streamlit (on-prem Postgres query UI) ---
cd /home/spcstunnel/streamlit_apps
streamlit run pg_explorer.py \
  --server.port 3002 \
  --server.address 0.0.0.0 \
  --server.runOnSave true \
  --server.enableCORS false &

# --- Start REST API ---
cd /home/spcstunnel
uvicorn restapi:app --host 0.0.0.0 --port 3003 --reload &

# --- Establish SSH tunnels ---
/home/spcstunnel/tunnel.sh &
sleep 5

# --- CRITICAL: Clean up secrets from environment ---
unset SSH_KEY_B64
unset SSH_USER
unset DMZ_IP

# Prevent interactive shells (reverse SSH, web terminal) from leaking secrets
cat >> /etc/bash.bashrc << 'EOF'
unset SSH_KEY_B64
unset SSH_USER
unset DMZ_IP
EOF

# Keep the container alive
wait

Security Considerations
#

You’re punching a hole from a managed cloud service to your network. Be deliberate about it:

  • Restrict the DMZ SSH user. Dedicated user, no sudo, no shell history. Consider ForceCommand in sshd_config to limit what the tunnel user can do beyond port forwarding.
  • Keep GatewayPorts no in your DMZ’s sshd_config (the default). Reverse-forwarded ports bind to 127.0.0.1 only — nginx on the same box can reach them, but nothing external can. If you change this to yes or clientspecified, you’re exposing tunnel ports to your network.
  • Firewall the DMZ. Only expose nginx ports (443) externally. The tunnel ports (9001–9004) should never be reachable from outside the box.
  • Rotate keys. The base64-encoded key in the Snowflake Secret is a credential. Rotate it regularly. Use ed25519 keys — smaller, faster, modern.
  • Secure ttyd. If exposing the web terminal through nginx, add basic auth (auth_basic) or an OAuth proxy. An open shell on the internet is an incident.
  • Monitor connections. Log SSH connections on your DMZ server. Set up alerts for unexpected tunnel disconnects or reconnections.

Checklist
#

| Step | What | Why |
| --- | --- | --- |
| 1 | Base64-encode SSH key, store as Snowflake Secret | Keeps the key out of the container image |
| 2 | Create external access integration with network rule | SPCS needs explicit egress permission |
| 3 | Mount secrets as env vars in service spec | Container picks up key, user, host at runtime |
| 4 | Run autossh with SOCKS + fixed reverse forwards | Persistent bidirectional tunnel with stable ports |
| 5 | unset secrets + add to /etc/bash.bashrc | Prevent key leakage to child processes, web terminals, and reverse SSH sessions |
| 6 | Install ttyd, expose as SPCS endpoint | Browser-based shell for container inspection |
| 7 | Streamlit explorer + REST API service function | Interactive UI and SQL-callable access to on-prem Postgres via the tunnel |
| 8 | Configure nginx with SSL on DMZ | Proper domains and certs for tunneled SPCS services |
| 9 | Use reverse SSH for full bidirectional access | SSH, SCP, jump hosts, nested tunnels back into SPCS |

Wrapping Up
#

The pattern is simple: store the SSH key as a base64-encoded Snowflake Secret, decode it at runtime, and use autossh to establish persistent tunnels with fixed reverse-forward ports. The SOCKS proxy gives the container outbound access to your network; the reverse port forwards give you inbound access to the container’s services.

Put nginx with SSL on the receiving end and your SPCS services get proper domains — pg-explorer.yourdomain.com, api.yourdomain.com, terminal.yourdomain.com — indistinguishable from locally-hosted apps. Add a service function backed by the tunnel-aware REST API and you can query your on-prem Postgres from Snowflake SQL.

And since SSH is bidirectional by nature, once the reverse forward exposes port 22, you can SSH back in, transfer files, set up jump hosts, and chain connections as deep as you need. The initial outbound tunnel is just the beginning.

Clean up your environment variables after the tunnels are established. The secrets served their purpose — don’t let them linger where web terminals and child processes can accidentally expose them.

Kevin Keller
Personal blog about AI, Observability & Data Sovereignty. Snowflake-related articles explore the art of the possible and are not official Snowflake solutions or endorsed by Snowflake unless explicitly stated. Opinions are my own. Content is meant as educational inspiration, not production guidance.