As enterprises race to integrate generative AI into their applications and workflows, adversaries are finding new ways to exploit language models through prompt injection attacks to leak sensitive data and bypass security controls.
But how do these attacks actually work, and what can organizations do to defend their genAI applications against them?
This exclusive deep dive with Rob Truesdell, CPO at Pangea, explores the evolving landscape of prompt injection threats and the latest strategies to secure genAI applications.
This session covers:
- How prompt injection works – A breakdown of direct and indirect techniques, with real-world attack examples and data.
- What LLM providers are doing – A look at native defenses built into top models to counteract prompt injection risks.
- The insider vs. outsider threat – How adversaries both inside and outside an organization can manipulate genAI models.
- Risk mitigation strategies – Engineering and security best practices to prevent, detect, and respond to prompt injection attempts.
- Measuring effectiveness – How to evaluate the efficacy of prompt injection detection mechanisms.
This OnDemand session is a must-watch for security leaders, AI engineers, and product teams looking to understand and mitigate the risks of AI-powered applications in an increasingly adversarial landscape.