Simon Willison 3/18/2026

Snowflake Cortex AI Escapes Sandbox and Executes Malware

A security report details a now-fixed prompt injection vulnerability in Snowflake's Cortex AI agent. The attack began when the agent was tricked into reviewing a malicious GitHub README, which led it to execute arbitrary shell commands via Bash process substitution, bypassing its command allow-list. The article examines the inherent fragility of command-pattern allow-lists and argues for more deterministic sandboxing of AI agents.
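The core weakness is easy to demonstrate. A minimal sketch (this is not Snowflake's actual code; the allow-list check and command string are hypothetical) shows how a naive prefix allow-list that permits an "innocent" command like `cat` can still execute an attacker's payload, because Bash process substitution `<(...)` runs its contents as a side effect:

```shell
#!/usr/bin/env bash
# Hypothetical allow-list: permit only commands that start with "cat ".
cmd='cat <(echo this-ran-anyway)'

case "$cmd" in
  "cat "*) echo "allow-list: passed" ;;
  *)       echo "allow-list: blocked"; exit 1 ;;
esac

# The string passes the prefix check, but when Bash evaluates it,
# the process substitution <(...) executes an embedded command.
# In a real attack that embedded command would be the payload.
eval "$cmd"
```

String-level pattern matching cannot see what the shell will actually do; only a deterministic sandbox around execution (or not invoking a shell at all) closes this class of bypass.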
