snowclaw proxy
Deploys the Cortex proxy as a standalone SPCS service with a public endpoint, so OpenClaw agents running outside Snowflake can reach Cortex LLMs through a lightweight gateway without a full SnowClaw deployment.
In a full snowclaw deploy, the Cortex proxy already runs as a sidecar inside the SPCS service. The snowclaw proxy commands are for the standalone case only.
How it works
Each user authenticates with their own Snowflake PAT. SPCS ingress validates the token and injects Sf-Context-Current-User for traceability, but strips the Authorization header before it reaches the container. To thread the PAT through to Cortex, OpenClaw sends a custom X-Cortex-Token header which SPCS passes through untouched. The proxy reads that header and forwards it to Cortex as a Bearer token.
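The header translation described above can be sketched in a few lines. This is illustrative only, with an assumed function name and shape, not the proxy's actual code:

```python
def build_cortex_headers(incoming: dict) -> dict:
    # By the time a request reaches the container, SPCS ingress has
    # stripped Authorization and injected Sf-Context-Current-User;
    # the user's PAT arrives in the X-Cortex-Token header.
    pat = incoming.get("X-Cortex-Token")
    if pat is None:
        raise ValueError("missing X-Cortex-Token header")
    return {
        # Cortex expects the PAT as a standard Bearer token.
        "Authorization": f"Bearer {pat}",
        "Content-Type": "application/json",
    }
```

The net effect: the token SPCS discards on the way in is reconstructed on the way out, so Cortex sees each request authenticated as the originating user.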
The proxy exposes two endpoints:
POST /v1/chat/completions — OpenAI-shaped, for OpenAI / Snowflake / Llama models.
POST /v1/messages — Anthropic-shaped, for Claude models with native cache_control support.
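For reference, illustrative request bodies for the two shapes. The model names here are assumptions; use whichever Cortex models are enabled in your account:

```python
# OpenAI-shaped body for POST /v1/chat/completions
# (model name is an assumption).
openai_body = {
    "model": "llama3.1-70b",
    "messages": [{"role": "user", "content": "Summarize this table."}],
}

# Anthropic-shaped body for POST /v1/messages, using cache_control to
# mark the system prompt as cacheable (model name is an assumption).
anthropic_body = {
    "model": "claude-3-5-sonnet",
    "max_tokens": 1024,
    "system": [{
        "type": "text",
        "text": "You are a data analyst.",
        "cache_control": {"type": "ephemeral"},
    }],
    "messages": [{"role": "user", "content": "Summarize this table."}],
}
```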
setup
mkdir my-proxy && cd my-proxy
snowclaw proxy setup
Interactive wizard that collects Snowflake credentials and provisions the standalone-proxy-specific objects: image repo (shared with full deployment), a *_proxy_pool compute pool (CPU_X64_XS), a *_proxy_egress_rule, and a *_proxy_external_access integration.
No secrets are created — each user sends their own PAT via X-Cortex-Token, so the proxy holds no service-level credentials.
deploy
snowclaw proxy deploy
Builds the proxy image, pushes it to your Snowflake image repository, and creates or updates the standalone-proxy SPCS service. Prints the public endpoint URL and a ready-to-use OpenClaw provider config snippet on success.
Example config the CLI prints:
{
  models: {
    providers: {
      "cortex-claude": {
        baseUrl: "https://<proxy-endpoint>",
        apiKey: "${SNOWFLAKE_TOKEN}",
        headers: { "X-Cortex-Token": "${SNOWFLAKE_TOKEN}" },
        api: "anthropic-messages"
      },
      "cortex-openai": {
        baseUrl: "https://<proxy-endpoint>/v1",
        apiKey: "${SNOWFLAKE_TOKEN}",
        headers: { "X-Cortex-Token": "${SNOWFLAKE_TOKEN}" },
        api: "openai-completions"
      }
    }
  }
}
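The ${SNOWFLAKE_TOKEN} placeholders suggest the config loader substitutes values from the environment. Assuming that behaviour (OpenClaw's real loader may differ), the substitution amounts to:

```python
import os
import re

def expand_env(value: str) -> str:
    # Replace each ${VAR} placeholder with the environment variable's
    # value (empty string if unset). A sketch of the substitution the
    # printed config implies, not OpenClaw's actual loader.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["SNOWFLAKE_TOKEN"] = "pat-123"
api_key = expand_env("${SNOWFLAKE_TOKEN}")  # "pat-123"
```

Because the same token feeds both apiKey and the X-Cortex-Token header, a single SNOWFLAKE_TOKEN environment variable is all a user needs to set.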
status
snowclaw proxy status
Same idea as snowclaw status, but for the standalone proxy service. Shows service state, compute pool, and the public endpoint URL.
suspend / resume
snowclaw proxy suspend
snowclaw proxy resume
Pauses or resumes the proxy service and its compute pool. Handy if you want to keep the infra provisioned but avoid the compute cost when no one's using it.
logs
snowclaw proxy logs
Fetch container logs from the standalone proxy service. Look here for request activity, 429 retries against Cortex, and cache-stat lines.