About the webinar
Enterprises are racing to deploy AI applications to maximize productivity and stay competitive, but accelerated adoption brings a new set of risks: risks that conventional security tooling was never designed to mitigate. LLMs can hallucinate, misinterpret intent, and behave unpredictably under adversarial pressure. These new AI applications also introduce a new attack surface that adversaries can exploit to exfiltrate data, trigger unintended actions, and disrupt business operations.
Following our recent AI Security launch, this session provides a deep dive into Zscaler’s new portfolio of solutions for safeguarding the deployment of AI apps and infrastructure. This includes continuous AI Red Teaming and runtime protection of AI applications and models.
What you’ll learn:
- How to configure and execute continuous AI red teaming on target systems, leveraging probes that cover security, safety, hallucination and trustworthiness, and business alignment
- How to remediate discovered vulnerabilities and harden system prompts
- How to deploy intent-based detectors to block malicious attacks such as prompt injection, jailbreaks, malicious URLs, and more
- How to ensure responsible use of AI with content moderation on responses, covering off-topic content, toxicity, brand and reputational damage, and more
Speakers
Ashwin Kesireddy
VP of Product
Zscaler
David Sedgwick
Sr. Director, Product Management - Platform
Zscaler
Talus Park
Director, Solutions Consulting
Zscaler