Fireside Chat – How Bad Actors (Ab)Use LLMs: The Mechanisms That Enable Misuse and What We Can Do About It

  • Explore how malicious actors abuse GenAI-enabled systems, from prompt injection to hallucination-fueled supply chain attacks and even LLM-assisted malware creation. Dive deep into how LLMs work, why they’re vulnerable, and the mechanics that enable these threats, especially in modern applications powered by ChatGPT, Azure OpenAI, and open-source models.
  • Discuss actionable defense strategies, including building more secure AI-powered features using least privilege principles, prompt hardening, model choice, and other practical mitigations.
  • Examine real-world examples where GenAI powers capabilities such as auto-triage and text-to-SQL. See how these same attacks could target features like these — and how we proactively defend against them.

Mackenzie Jackson, Developer and Security Advocate, Aikido

Kimberly Mattheys, Head of Application Security and DevSecOps, Solaris

Ciro Anunciação, Engineering Manager, PandaDoc