AI is transforming industries across the board, and cybersecurity is no exception. As AI becomes more advanced, businesses are striving to harness its potential while maintaining the human oversight essential to network security.
To get a real-world perspective on this transformation, we sat down with Shawn Edwards, Zayo’s SVP Chief Security & Privacy Officer. He offers an inside look at how AI is reshaping network security, the obstacles businesses face during adoption, and the future of human-AI collaboration in cybersecurity.
His insights reveal the incredible potential and the hidden risks of AI-driven security, providing essential advice for any organization navigating its own digital transformation.
Q&A with Shawn Edwards
How has Zayo’s approach to AI and network security evolved in the last year?
Since 2024, we’ve moved from piloting AI-driven tools to operationalized use across our Security Operations Center (SOC) and network environments. Last year, we were proceeding cautiously, keeping humans at the center of most decisions. Now, AI actively supports automated containment and analytics, while humans continue to validate high-impact actions. The strategy has matured into a “co-pilot” model: AI accelerates detection and response, while human judgment safeguards accuracy and trust.
How has Zayo needed to increase capacity to accommodate AI in your security teams?
We’ve invested in both compute and bandwidth. AI requires enormous processing power to analyze telemetry and intelligence. While much of this is handled in cloud-based environments, we’ve also had to ensure resiliency across our own infrastructure so that systems perform well and our teams can support them adequately.
How do you see AI shaping the landscape of network security in the next 5-10 years?
I can see AI moving from an augmentation tool to a trusted advisor. I expect autonomous remediation at scale, advanced behavioral modeling that triggers with higher fidelity, and, ultimately, AI-driven orchestration across environments.
The challenge I see is adversaries using these same capabilities. The long-term picture is an AI-vs-AI battlefield, where speed, flexibility, and context will be critical.
In what ways has AI improved threat detection and prevention across Zayo’s network over the past year?
We’ve seen measurable reductions in mean-time-to-detect (MTTD) and mean-time-to-remediate (MTTR). Correlation across data sets and sources that would have taken humans hours to triage can now be done in seconds. Our gains include improved detection logic, faster containment, and greater precision. Perhaps as important, AI has freed our analysts from repetitive triage, allowing for strategic threat hunting and more complex incident response.
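To make that correlation idea concrete, here is a minimal, hypothetical sketch of the kind of cross-source grouping an AI-assisted pipeline performs automatically. The alert fields, time window, and scoring are illustrative assumptions, not a description of Zayo’s actual tooling.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative alerts from different sources (fields are assumptions).
alerts = [
    {"source": "ids",      "indicator": "203.0.113.7",  "severity": 3,
     "ts": datetime(2025, 1, 15, 9, 2)},
    {"source": "netflow",  "indicator": "203.0.113.7",  "severity": 2,
     "ts": datetime(2025, 1, 15, 9, 4)},
    {"source": "endpoint", "indicator": "203.0.113.7",  "severity": 4,
     "ts": datetime(2025, 1, 15, 9, 5)},
    {"source": "ids",      "indicator": "198.51.100.9", "severity": 1,
     "ts": datetime(2025, 1, 15, 9, 6)},
]

WINDOW = timedelta(minutes=10)

def correlate(alerts):
    """Group alerts sharing an indicator within a time window."""
    by_indicator = defaultdict(list)
    for a in alerts:
        by_indicator[a["indicator"]].append(a)
    incidents = []
    for indicator, group in by_indicator.items():
        group.sort(key=lambda a: a["ts"])
        if group[-1]["ts"] - group[0]["ts"] <= WINDOW:
            sources = {a["source"] for a in group}
            incidents.append({
                "indicator": indicator,
                "sources": sorted(sources),
                # Corroboration across independent sources raises priority.
                "score": sum(a["severity"] for a in group) * len(sources),
            })
    return sorted(incidents, key=lambda i: i["score"], reverse=True)

for incident in correlate(alerts):
    print(incident)
```

Running this surfaces the indicator corroborated by three sources first, which is the triage judgment an analyst would otherwise assemble by hand across consoles.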
Are there particular vulnerabilities inherent in AI-powered cybersecurity solutions that concern you? How does Zayo address these?
AI models are only as good as their training data. Risks can include:
- Model Poisoning: adversaries injecting bad data to manipulate outcomes
- Bias and opacity: “black box” decision-making where we can’t fully trace the logic behind a model’s output
- Over-reliance: the human assumption that AI is infallible
What are the risks of relying too much on AI for network security? And how does Zayo strike the right balance between automation and human expertise?
The biggest risks are blind trust, complacency, and a loss of creativity in problem-solving. AI runs the risk of missing the “unknown unknowns.” At Zayo, we use AI as a force multiplier, not a replacement. AI handles scale, pattern recognition, and rapid response; humans bring the intuition, ethics, and context that are vital. We manage this balance through our playbooks, where we outline what is automated and what escalates to a human.
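To illustrate what such a playbook boundary can look like, here is a hypothetical sketch in which AI-proposed actions either execute automatically or escalate to a human. The action names, blast-radius limits, and confidence threshold are assumptions for illustration, not Zayo’s actual playbooks.

```python
from dataclasses import dataclass

# Hypothetical playbook: which response actions may run automatically,
# and the limits beyond which a human must approve (tiers illustrative).
PLAYBOOK = {
    "quarantine_email":     {"auto": True,  "max_blast_radius": 1},
    "isolate_endpoint":     {"auto": True,  "max_blast_radius": 1},
    "block_ip_at_edge":     {"auto": False, "max_blast_radius": 1000},
    "disable_user_account": {"auto": False, "max_blast_radius": 1},
}

@dataclass
class ProposedAction:
    name: str
    affected_assets: int
    confidence: float  # model confidence in the detection, 0..1

def route(action: ProposedAction) -> str:
    """Decide whether an AI-proposed action executes or escalates."""
    rule = PLAYBOOK.get(action.name)
    if rule is None:
        return "escalate: no playbook entry"
    if not rule["auto"]:
        return "escalate: human approval required"
    if action.affected_assets > rule["max_blast_radius"]:
        return "escalate: blast radius exceeds playbook limit"
    if action.confidence < 0.9:
        return "escalate: low model confidence"
    return "execute automatically"

print(route(ProposedAction("isolate_endpoint", 1, 0.97)))    # executes
print(route(ProposedAction("block_ip_at_edge", 400, 0.99)))  # escalates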
On the flip side of AI-powered cybersecurity, what trends are you seeing in AI-powered cyber attacks?
Adversaries are using AI to:
- Generate convincing phishing and social engineering campaigns
- Automate vulnerability discovery and analysis at scale
- Enable real-time obfuscation & defense bypass
The “open” part of open AI has raised concerns about data aggregation. What steps does Zayo take to mitigate privacy risks tied to shared AI models?
Data governance is key. Sensitive customer and network data is never fed into open/public models. Instead, we rely on enterprise-licensed, walled-off AI platforms with contractual and technical safeguards. Where we use third-party AI, we anonymize or tokenize inputs. In short, customer data stays protected, and our AI deployments remain compliant with privacy and regulatory frameworks.
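As a rough illustration of what tokenizing inputs can mean in practice, the following hypothetical sketch swaps sensitive identifiers for opaque tokens before text leaves the trust boundary, keeping the mapping internal so responses can be restored locally. The patterns and token format are assumptions, not Zayo’s implementation.

```python
import re
import secrets

# Hypothetical tokenizer: replace sensitive identifiers with opaque
# tokens before sending text to a third-party model; the mapping
# stays internal so responses can be de-tokenized locally.
PATTERNS = {
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tokenize(text: str):
    mapping = {}
    def replacer(kind):
        def _sub(match):
            token = f"<{kind}_{secrets.token_hex(4)}>"
            mapping[token] = match.group(0)
            return token
        return _sub
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(replacer(kind), text)
    return text, mapping

def detokenize(text: str, mapping: dict) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

safe, mapping = tokenize("Login from 203.0.113.7 by alice@example.com failed.")
print(safe)      # identifiers replaced with opaque tokens
# ...send `safe` to the external model, then restore the answer locally:
print(detokenize(safe, mapping))
```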
Are there specific decision-making processes that AI has taken over entirely at Zayo, and what results have you observed from those changes?
Yes, AI now independently manages several containment and remediation workflows, such as isolating at-risk endpoints or cloud workloads. The results are faster containment, less downtime, and reduced customer impact. We’ve also cut much of the manual noise by filtering out low-fidelity alerts, allowing our analysts to focus on higher-order threats.
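For a sense of how the noise-reduction half of this might work, here is a hypothetical sketch that scores alerts and suppresses those below a tunable fidelity threshold while keeping an auditable record. The scoring heuristics and threshold are illustrative assumptions, not Zayo’s production logic.

```python
# Hypothetical low-fidelity alert filter: suppress alerts scoring
# below a tunable threshold, but retain them for audit.
FIDELITY_THRESHOLD = 0.6

def fidelity(alert: dict) -> float:
    """Toy scoring: corroborated, intel-matched alerts score higher."""
    score = 0.3
    if alert.get("corroborating_sources", 0) >= 2:
        score += 0.4
    if alert.get("matches_threat_intel", False):
        score += 0.3
    return min(score, 1.0)

def triage(alerts):
    surfaced, suppressed = [], []
    for alert in alerts:
        bucket = surfaced if fidelity(alert) >= FIDELITY_THRESHOLD else suppressed
        bucket.append(alert)
    return surfaced, suppressed

alerts = [
    {"id": 1, "corroborating_sources": 3, "matches_threat_intel": True},
    {"id": 2, "corroborating_sources": 0, "matches_threat_intel": False},
]
surfaced, suppressed = triage(alerts)
print(f"surfaced: {[a['id'] for a in surfaced]}, "
      f"suppressed: {[a['id'] for a in suppressed]}")
```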
What skills and knowledge would you advise professionals in the cybersecurity industry to cultivate as AI becomes more prevalent?
- AI Literacy: understanding how models are trained, how they can be attacked, and how to tune and monitor them
- Human Strengths: critical thinking, ethics, and contextual judgment that AI cannot replicate
What advice would you give organizations just beginning to integrate AI into their network security frameworks?
Start with augmentation, not replacement: begin by automating repetitive, well-defined tasks. Maintain human oversight and put in the work to establish strong governance. Just as important, invest in training your teams. AI success in security is less about the tool itself and more about how well humans and machines are integrated into a resilient, adaptive partnership.
Striking the Right Balance
Shawn Edwards’ insights make one thing clear: getting the most out of AI in cybersecurity requires striking the right balance between AI-driven tools and human oversight.
Organizations that thoughtfully integrate AI — by starting with augmentation, establishing strong governance, and investing in their teams — will be best positioned to thrive. To truly be successful, AI-powered security needs a human co-pilot to drive critical thinking, creativity, and ethical judgment.