ChatGPT: Is AI Friend AND Foe?

Moments after ChatGPT launched, adversaries jumped at the chance to use the on-demand AI chatbot to help them with cyberattacks. Even an amateur with no technical or coding experience can use ChatGPT to author and tailor malware and malicious scripts in seconds.

Another request can generate a convincing phishing email—a sharp contrast to campaigns of the past, when threat actors could spend months perfecting a phishing attempt, only to be foiled by a spelling or grammar mistake. AI can develop a complete phishing kit in far less time than it takes to register as an affiliate for a phishing-as-a-service program.

ChatGPT has put conversational AI in anyone's hands for all kinds of harmless or productive uses, but it can also help anyone become a threat actor.

Join Zscaler AI enthusiasts Sandy Wenzel, Amy Heng, and Dianhuan Lin for a podcast-style discussion and lively demo of our misadventures with ChatGPT. You’ll hear about:

  • Where AI/ML is already embedded in our daily lives, and how we can start using it to our advantage
  • The risks associated with broad use of AI and its inevitable exploitation by threat actors
  • Ways security teams can fully embrace AI/ML to fight fire with fire against adversaries
  • How we can embrace change and build guardrails to keep AI on the straight and narrow path of good


Amy Heng
Product Marketing Manager

Sandy Wenzel
Security Architect

Dianhuan Lin
Senior Manager of Data Science

Liat Shentser
Global Vice President, Solution Consulting