
AI in Cybersecurity: Risks, Rewards, and Resilience

Blog | September 30, 2025

They say you should fight fire with fire, but can AI-driven security really defend against AI-powered attacks? In the final webinar of our Built to be Bold series, Theresa Payton, former White House Chief Information Officer, sat down with Ethan Banks, founder of Packet Pushers, to explore the opportunities and threats AI presents in cybersecurity.

This blog recaps their key insights on the rise of AI in cyber defense, its use in sophisticated attacks, and the practical steps your organization can take to build resilience. 

The Rise of AI in Cybersecurity Defense 

As organizations face an overwhelming volume of security threats, AI offers a way to augment human-led security teams. A recent Gartner report found that 67% of cybersecurity leaders believe emerging GenAI risks demand significant changes to their current security approaches.

AI is being used to: 

  • Improve Threat Detection: AI algorithms can analyze billions of events in real time, identifying patterns and anomalies that might escape human notice. This allows security teams to surface critical threats faster and with greater accuracy (see the sketch after this list).
  • Automate Incident Response: By automating repetitive tasks, AI frees up security professionals to focus on more complex strategic initiatives. This improves efficiency and reduces the chance of human error. 
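
To make the detection idea concrete, here is a minimal sketch of anomaly-based threat detection using scikit-learn’s Isolation Forest. The event features, thresholds, and data are illustrative assumptions, not a production pipeline or anything specific the panelists described.

```python
# A minimal sketch of anomaly-based threat detection with an Isolation
# Forest. Feature choices and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-event features: [bytes_sent, login_failures, hour_of_day]
normal_events = np.column_stack([
    rng.normal(5_000, 1_500, 10_000),   # typical outbound bytes
    rng.poisson(0.2, 10_000),           # occasional failed logins
    rng.integers(8, 18, 10_000),        # business hours
])
suspicious = np.array([[250_000, 14, 3]])  # huge transfer, many failures, 3 a.m.

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_events)

# score_samples: lower scores mean more anomalous
print(model.score_samples(suspicious))  # noticeably low score
print(model.predict(suspicious))        # [-1] flags the event as an anomaly
```

In practice, a model like this would be fed from SIEM or flow telemetry, with the contamination rate tuned to the team’s alert budget.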

Theresa Payton shared an interesting use case from a Japanese publishing company that used AI to design a zero-trust architecture to protect its intellectual property. The AI helped them identify endpoints and access paths that were not immediately obvious, leading to a more robust and creative security design. This highlights how AI can be used not just for defense, but for proactive and strategic security planning. 

The Dark Side: AI as a Cyberattack Tool 

While AI empowers defenders, it also equips adversaries with sophisticated new tools. The same technology that helps protect networks can be turned around to create more effective and harder-to-detect attacks. 
 
As Theresa put it: “We’ve all read these headlines and thought, well, maybe these are one-offs. I’m here to tell you it’s not a one-off.”

Here’s how bad actors are leveraging AI: 

  • AI-Driven Malware and Phishing: Attackers are using AI to generate malicious code and craft highly convincing phishing campaigns. As Theresa noted, criminals attempted to use Anthropic’s Claude to create malware and design workarounds for existing security measures.  
  • Advanced Social Engineering and Deepfakes: AI-powered deepfakes are making social engineering attacks more believable than ever. Theresa recounted a chilling example of an employee in Hong Kong who was tricked into authorizing a wire transfer during a video conference where every other participant was a deepfake. 
  • Attacks on AI Systems: Cybercriminals also target AI systems directly. We are seeing an increase in attacks on customer service chatbots and emerging agentic AI systems. Criminals assume these systems lack human oversight, making them prime targets for manipulation and unauthorized access. 
  • Data Poisoning: A more sophisticated threat involves “poisoning” an organization’s data lakes or machine learning models. By feeding the AI malicious data, attackers can train it to create backdoors, giving them hidden access to sensitive systems (one defensive countermeasure is sketched after this list).
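
The webinar didn’t prescribe a specific countermeasure for data poisoning, but one common defense is provenance checking: verify that training data hasn’t been tampered with before any model is retrained. Below is a minimal, hypothetical sketch using SHA-256 hashes against a known-good manifest; the file names, manifest format, and retrain_model entry point are assumptions.

```python
# A hypothetical data-poisoning defense: verify training files against a
# manifest of known-good SHA-256 hashes before any retraining happens.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets aren't loaded whole."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_path: str) -> bool:
    """Refuse to train if any file is missing, added, or altered."""
    manifest = json.loads(Path(manifest_path).read_text())  # {"train.csv": "ab12..."}
    actual = {p.name: sha256_of(p) for p in Path(data_dir).glob("*.csv")}
    if actual != manifest:
        tampered = {k for k in actual.keys() | manifest.keys()
                    if actual.get(k) != manifest.get(k)}
        print(f"Integrity check failed, investigate: {sorted(tampered)}")
        return False
    return True

# Hypothetical usage:
# if verify_training_data("datasets/fraud", "datasets/fraud.manifest.json"):
#     retrain_model()  # hypothetical training entry point
```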

Embracing a “No Trust” Architecture 

So, how can organizations defend against these advanced threats? Theresa advocated for reframing “Zero Trust” as “No Trust” architecture because it better communicates the core principle to business leaders: every request must be verified, every time. It’s not a one-time destination but an ongoing practice for the entire organization.

Here are the key pillars for building a stronger defense: 

  • Implement a “No Trust” Architecture: This requires a fundamental shift in how you think about access. It’s about moving from a “trust but verify” mindset to “never trust, always verify” (the sketch after this list shows the principle in code). Start by understanding the human user stories — how and why employees, customers, and vendors interact with your technology.
  • Conduct Regular Audits and Data Cleanup: You can’t protect what you don’t know you have. Before implementing AI, it’s crucial to clean up and classify your data. This strengthens security and reduces the computational costs of running AI algorithms. Theresa explained how a “90-day no trust blitz” can be a great way to gamify this process and get the whole organization involved in inventorying assets and data. 
  • Establish a Safe AI Team: Create a cross-functional team composed of people from various departments, not just tech experts. This team’s role is to ask smart, user-focused questions like, “What happens if an employee pastes customer data into our AI tool?” to anticipate risks before they become breaches. 
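
To show what “never trust, always verify” can look like in code, here is a minimal sketch in which every request carries a short-lived, HMAC-signed token that is re-verified on every call, with no implicitly trusted sessions. The token format and TTL are assumptions for illustration; a real deployment would build on standards like OAuth or JWT and layer in device posture and policy checks.

```python
# A minimal "never trust, always verify" sketch: short-lived signed tokens,
# re-verified on every single request. Format and TTL are assumptions.
import hmac
import hashlib
import time

SECRET = b"rotate-me-regularly"  # placeholder; keep real keys in a secrets manager
TOKEN_TTL_SECONDS = 300          # short-lived by design

def issue_token(user: str) -> str:
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{ts}:{sig}"

def verify_token(token: str) -> bool:
    """Runs on EVERY request; nothing is trusted from a previous check."""
    try:
        user, ts, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - int(ts) < TOKEN_TTL_SECONDS
    return hmac.compare_digest(sig, expected) and fresh

token = issue_token("alice")
print(verify_token(token))                       # True: verified for this request
print(verify_token(token.replace("a", "b", 1)))  # False: tampered identity fails
```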

 
As AI becomes more integrated across enterprises, it’s important to consider the human aspect of implementation. As Theresa noted, “Where I am seeing AI be more successful is when it is implemented in a pilot mode with governance and guardrails, and there’s a human in the loop as part of the process and decision.”

Building Resilience for the Future 

AI’s dual nature presents a constant arms race where both attackers and defenders are continuously innovating. As Theresa wisely put it, we need to “zigzag” while bad actors run in a straight line. We must be creative, think beyond standard frameworks, and anticipate how attackers will exploit the human element. 

Actionable Insights for Your Business 

  1. Start with Your Data: Before deploying AI tools, clean and classify your data. An organized data architecture reduces compute costs, improves the accuracy of AI models, and strengthens your overall security posture. You can’t protect what you don’t know you have (a simple classification sketch follows this list).
  2. Think Like an Attacker: Don’t just rely on standard frameworks. Criminals reverse-engineer these frameworks to find gaps. As Theresa advised, you need to “zigzag.” Engage creative teams to test your defenses in unconventional ways, using deepfake voice cloning or other novel attack vectors.
  3. Create a Culture of Experimentation: AI implementation involves risk. Frame your initial efforts as pilots and get executive permission to fail. Fail fast, fail small, and learn from your mistakes. This approach fosters innovation and leads to more robust and effective solutions in the long run.
  4. Engage Your Vendors: Ask your security vendors tough questions. Do they use third-party red teams to test their own tools? Can they share the results? This holds them accountable and helps you understand the true resilience of the products you rely on.
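
As a hypothetical starting point for the data cleanup in step 1, the sketch below scans a directory for common PII patterns so sensitive files can be inventoried before any AI tool touches them. The patterns, labels, and paths are illustrative, not an exhaustive classifier.

```python
# A minimal "clean and classify your data" sketch: flag files containing
# common PII patterns. Patterns and paths are illustrative assumptions.
import re
from pathlib import Path

PII_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(path: Path) -> set[str]:
    """Return the set of PII categories found in a file."""
    text = path.read_text(errors="ignore")
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(text)}

def inventory(root: str) -> dict[str, set[str]]:
    """Map each file under root to the sensitive categories it contains."""
    return {str(p): found
            for p in Path(root).rglob("*.txt")
            if (found := classify(p))}

# Hypothetical usage:
# for path, labels in inventory("shared-drive/exports").items():
#     print(f"{path}: restrict access ({', '.join(sorted(labels))})")
```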

Ready to dive deeper into the strategies discussed?

You can watch the full webinar on demand to get all the insights from Theresa Payton and Ethan Banks.