Cybersecurity Landscape in Early 2026
Analysis of the 2026 cybersecurity landscape, focusing on AI's dual role in both attack and defense, the evolution of ransomware, and emerging defense strategies.
A security vulnerability in Claude Cowork allowed file exfiltration via the Anthropic API, bypassing default HTTP restrictions.
A prompt injection attack on Superhuman AI exposed sensitive emails, highlighting a critical vulnerability in AI email assistants and their third-party integrations.
A comprehensive guide to different sandboxing technologies for safely running untrusted AI code, covering containers, microVMs, gVisor, and WebAssembly.
Using PyRIT and GitHub Copilot Agent Skills to validate and secure AI prompts against vulnerabilities such as prompt injection and jailbreaking, directly in the IDE.
Explains the OWASP Top 10 security risks for autonomous AI agents, detailing threats like goal hijacking and tool misuse with real-world examples.
Analysis of a prompt injection vulnerability in Google's Antigravity IDE that can exfiltrate AWS credentials and sensitive code data.
A method using color-coding (red/blue) to classify MCP tools and systematically mitigate prompt injection risks in AI agents.
Explores the unique security risks of Agentic AI systems, focusing on the 'Lethal Trifecta' of vulnerabilities and proposed mitigation strategies.
Explores the A2AS framework and Agentgateway as security approaches for mitigating prompt injection attacks in AI/LLM systems by embedding behavioral contracts and cryptographic verification.
A framework for evaluating security threats and risks in Model Context Protocol (MCP) implementations, based on recent incidents.
Explores the emerging security research landscape around the Model Context Protocol (MCP), an open standard for connecting AI models to external tools and data sources.
A developer's cautionary tale about command injection vulnerabilities in AI coding assistants using MCP servers, highlighting real-world security risks.
A recap of organizing and speaking at Global Azure Quebec 2025, focusing on AI red teaming and securing generative AI workloads.