Claude Cowork Exfiltrates Files
Security researchers found a vulnerability in Claude Cowork allowing data exfiltration via the Anthropic API, bypassing default HTTP restrictions.
SimonWillison.net is the long-running blog of Simon Willison, a software engineer, open-source creator, and co-creator of the Django web framework. He writes about Python, Django, Datasette, AI tooling, prompt engineering, search, databases, APIs, data journalism, and practical software architecture. The blog includes detailed notes from experiments, conference talks, and real projects. Readers will find clear explanations of topics such as LLM workflows, SQL patterns, data publishing, scraping, deployment, caching, and modern developer tooling. Simon also publishes frequent micro-posts and TIL entries documenting small discoveries and tricks from day-to-day engineering work. The tone is practical and research-oriented, making the site a valuable resource for anyone interested in serious engineering and open data.
260 articles from this blog
Anthropic invests $1.5 million in the Python Software Foundation to support Python ecosystem security and core development.
A prompt injection attack on Superhuman AI exposed sensitive emails, highlighting a security vulnerability in third-party integrations.
A hands-on review of Claude Cowork, Anthropic's new general AI agent for non-coding tasks, exploring its interface and capabilities.
Argues against anti-AI sentiment in software development, contending that AI is a transformative tool that benefits programmers' careers.
The author explores the legal and ethical implications of using LLMs to port open source code between programming languages, drawing on personal experiments.
A blog post quoting Linus Torvalds on using AI-powered 'vibe-coding' to create a Python audio visualizer tool.
Explores a software library with no code, using AI agents to generate implementations from specifications and conformance tests.
Fly.io launches Sprites.dev, a stateful sandbox environment for secure coding agents and untrusted code execution.
A tech expert shares predictions for 2026 on a podcast, focusing on the future of LLMs, coding agents, and AI-assisted software development.
A look at Google's resurgence in AI, including the origin of 'Nano Banana', Sergey Brin's return, and Gemini's user growth compared to OpenAI.
Tailwind Labs CEO discusses the severe business impact of AI, including major layoffs and revenue decline, despite the framework's growing popularity.
A reflection on the arrival of Artificial General Intelligence (AGI), arguing that its 'general' nature distinguishes it from all previous purpose-built AI models.
A comprehensive guide exploring different sandboxing techniques for safely running untrusted AI code, including containers, microVMs, and WebAssembly.
A critique of the new macOS Tahoe menu icons, arguing they are overly complex and poorly designed, violating Apple's own interface guidelines.
The author will join the Oxide and Friends podcast to make predictions about AI developments for the years 2026, 2028, and 2032.
Key insights on API design and compatibility from Addy Osmani's lessons at Google, emphasizing that compatibility is a core product feature.
How AI coding assistants are enabling experienced developers to code again, by reducing the time investment required and letting them leverage their management skills.
A Google engineer shares her experience using Claude Code to rapidly prototype a distributed agent orchestrator, highlighting AI's impact on complex software development.
Will Larson discusses the three key pillars for successful AI adoption in companies: domain context, AI tooling experience, and IT infrastructure.