
AI Takes on Network Security


by Shawn Edwards, Chief Security Officer


An Interview with Zayo Chief Security Officer, Shawn Edwards

The advent of artificial intelligence (AI) and machine learning (ML) is starting to reshape the landscape of network security. These disruptive technologies are no longer just buzzwords; they're integrating into our networks and directly influencing how we secure them.

In this interview for Zayo, Stacy Jackson sits down with Chief Security Officer Shawn Edwards to discuss the implications of these developments.

Stacy Jackson:

AI is really starting to seep into our business days here at Zayo. In marketing, I started using it as the fastest research assistant ever! How are we using AI in network security at Zayo?

Shawn Edwards, CSO:

For sure. We’ve implemented AI in various functions of network security. It’s not a thing of the future; it’s very much a thing of the present.  

One key application is endpoint detection and response. AI is really good at this – finding the patterns and then spotting when behavior deviates from that pattern. At Zayo, we’re using AI to sift through mountains of data to detect anomalies, flag suspicious behavior, and proactively alert my team so we can mitigate potential threats.


Let me stop you right there! “…alert my team so we can mitigate…”?


Right. Zayo’s use of AI in its own network security currently still has real live humans making the hardest decisions. If AI scans the network and sees a reduction in throughput around a certain Zayo networking hub, we do not give AI the power (yet!) to redirect traffic automatically around the troubled area. 

While we are slowly allowing AI to act based on its findings, it’s crucial to remember that AI is still a tool, not yet a replacement for human judgment. There are critical situations where human intervention may be necessary to prevent errors or misinterpretations.

AI has smoothed out our workflow. The tool uses natural language processing to interpret the mountains of data the network throws at it. So instead of specifying that this IP address equals 1234 and that hostname equals ABCD, you can ask the tool, in your own natural language, to display whatever you need. The tool turns your question into a query and returns the results.

And then you can continue with the tool just like using ChatGPT. For instance: "Now, show me if any of those anomalies are outside the US. Do you see any kind of malicious activity associated with those findings?"

And you can keep asking questions and it'll help you drill into the actual content without having to know all the bits and bytes of how to actually run these queries. So now, you don't need a degree in coding to become a network security specialist.
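The question-to-query flow Shawn describes can be sketched in a few lines. This is a toy illustration, not Zayo's actual tooling: the field names, questions, and log records are all assumptions, and a real system would use a language model rather than keyword matching.

```python
# Hypothetical sketch: a natural-language question is translated into a
# structured filter, which is then run against log records.

def question_to_filter(question: str) -> dict:
    """Translate a simple question into a structured query filter."""
    flt = {}
    q = question.lower()
    if "outside the us" in q:
        flt["country"] = {"not": "US"}  # exclude US-based activity
    return flt

def run_query(records: list, flt: dict) -> list:
    """Apply the structured filter to in-memory log records."""
    out = []
    for r in records:
        # Skip records matching the excluded country, if that filter is set
        if "country" in flt and r["country"] == flt["country"]["not"]:
            continue
        out.append(r)
    return out

logs = [
    {"ip": "203.0.113.7", "country": "BW", "severity": "high"},
    {"ip": "198.51.100.2", "country": "US", "severity": "low"},
]
print(run_query(logs, question_to_filter("Show me anomalies outside the US")))
```

Each follow-up question would simply refine the filter, which is why the analyst never needs to write the underlying query syntax.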


Are you worried about AI over-reporting incidents?


We actually define for the tool when something should be reported as an anomaly. We teach our AI models what data patterns are normal. Once that’s learned, the model looks for oddities or abnormal behavior. So, when Stacy tries logging into her account and the password fails…


Hey! That only happens like, twice a day…


…that could just be a mistake. But if Stacy tries a login three times and fails, AI knows that's something we want to watch. And then it looks for different things – like the fact that Stacy's login attempt came from Botswana (not where Stacy works). And it'll start adding those things together. And based on what we've defined and how it's learned, AI will determine that it needs to report these incidents to the SOC (our security operations center) so they can go and look at it.

So, in and of itself, one anomaly doesn't equal an event. But two, three, or five anomalies together could actually end up becoming an incident.
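Shawn's point that anomalies accumulate into incidents can be sketched as a simple weighted score. The weights and threshold below are illustrative assumptions, not Zayo's actual detection model, which is learned from data rather than hand-coded.

```python
# Sketch: one anomaly isn't an incident, but several correlated anomalies
# cross a reporting threshold and get escalated to the SOC.

ANOMALY_WEIGHTS = {
    "failed_login": 1,        # a single failed password is routine
    "repeated_failures": 3,   # three or more failures in a row
    "unusual_geo": 4,         # login attempt from an unexpected country
}
REPORT_THRESHOLD = 5          # score at which the SOC is alerted

def should_report(anomalies: list) -> bool:
    """Sum the weights of observed anomalies; report when they add up."""
    score = sum(ANOMALY_WEIGHTS.get(a, 0) for a in anomalies)
    return score >= REPORT_THRESHOLD

print(should_report(["failed_login"]))                      # False: one mistake
print(should_report(["repeated_failures", "unusual_geo"]))  # True: worth the SOC's time
```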


Are we allowing AI to make any decisions in our security currently, without human intervention?


We’re starting to, yes. By using AI for predictive analytics, we’ll say that if you see this (for example, malware), don’t let that happen (for example, allow access). Don’t wait for me to tell you to contain, quarantine, lock down, or protect our data. If an employee is infected with malware, AI will find that suspicious and knows to automatically react.

In this case, we’re taking it beyond just information to actual action. 
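The "if you see this, don't let that happen" pattern Shawn describes resembles a predefined rule table mapping detected events to containment actions, with anything unrecognized escalated to a human. The event names and actions below are hypothetical.

```python
# Sketch: predefined automatic responses, so containment doesn't wait
# for a human for the cases we've already decided on.

RULES = {
    "malware_detected": "quarantine_host",  # contain before data is touched
    "ransomware_beacon": "block_traffic",   # cut outbound command-and-control
}

def react(event: str) -> str:
    """Return the automatic action for a known event, else escalate."""
    return RULES.get(event, "escalate_to_soc")

print(react("malware_detected"))  # quarantine_host
print(react("unknown_pattern"))   # escalate_to_soc
```

The key design choice is the default: anything outside the pre-agreed rules falls back to human judgment rather than automated action.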

AI is smart enough today to understand that when it sees a network disruption or a fiber cut, it can do more than report it. It can self-heal and redirect traffic down different pipes. If capacity seems to be spiking in one area and getting saturated down a pipe, AI has the ability to understand why and redirect some of the traffic down another pipe to offset it.

To be clear, AI can do this today. But since a traffic redirect can be so disruptive to our customers, we still have humans do the double check and make the decisions.
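The human double-check Shawn describes can be sketched as a two-step flow: the AI side proposes a reroute when a link looks saturated, but the disruptive action executes only after an operator signs off. Function names and the utilization threshold are illustrative assumptions.

```python
# Sketch: AI proposes, humans approve. A redirect can disrupt customers,
# so a proposal without approval sits in review rather than executing.

def propose_reroute(utilization: float, threshold: float = 0.9):
    """AI side: flag a saturated link and propose an alternate path."""
    if utilization >= threshold:
        return {"action": "reroute", "reason": f"utilization {utilization:.0%}"}
    return None  # nothing anomalous, no proposal

def execute(proposal, operator_approved: bool) -> str:
    """Human side: only act on a proposal once a person signs off."""
    if proposal is None:
        return "no-op"
    if not operator_approved:
        return "pending-review"  # proposal waits for the team
    return "rerouted"

plan = propose_reroute(0.95)
print(execute(plan, operator_approved=False))  # pending-review
print(execute(plan, operator_approved=True))   # rerouted
```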


I’ve been hearing a lot about AI consuming space, power, bandwidth… has Zayo needed to increase capacity to accommodate AI in our security teams?


It’s both processing power and bandwidth. AI analyzes massive amounts of data, and that consumes CPU cycles. You ask it one little question, and it has to go through all of its data to find the answer – that takes a lot of compute power.

Most companies are currently using AI in the cloud – so in those cases the computational consumption is the cloud provider’s issue to solve. But enterprises are also building their own AI-capable data centers. If you have a data center in Washington and a data center in Austin, you have to have redundancy and resiliency. The data exists in both locations, and they have to stay in sync, because you don’t want a different answer from one than you would get from the other. Those repositories of data, and the movement of data between them, consume more bandwidth.


Does the “open” part of open AI pose a security risk to protecting data? Does any of this scare you at all?


The “open” part of open AI was a game changer. For one, integrating AI into our current security models is no longer a problem. AI organizations publish their models; we simply take a model and overlay it on our data. It’s flexible and almost immediately compatible. What used to be a painful integration effort, AI has blown out of the water. It uses predictive analytics to understand relatively unstructured data. It knows that this is a name, this is an address, that’s a phone number.

But! People are taking their data (sometimes sensitive data) and aggregating it into AI models. So now, if I asked the model the right question, I could get some of that data back. It can be quite risky. 

There’s another risk too. If you ask AI a question and it’s seen that the last 100 people who asked that question also asked these other questions, AI will serve you up the connections it’s made from all other users. While this can help get you unstuck when you’re stuck, the problem is that AI makes it harder to think creatively, because now you’re just following AI’s train of thought. 

And the part that scares me to death? Back to the network example – if AI detects a fiber cut on the East Coast and we’re automatically rerouting traffic, potentially creating congestion in other parts of the network and disrupting customers – what if an adversary who’s gotten into my model has injected bad data? What if that traffic spike was artificial? So yes, I’m careful about the decisions I allow AI to make.


One final question. We’ve seen ethical considerations in the use of AI across industries. But for network security – do you have specific ethical considerations too?


The algorithms are written by people, and people are flawed. So when, for example, ChatGPT provides you its sources, you still don’t know how those sources were chosen, what other sources may exist, or what information it’s not providing you.

This is the same for AI in network security. AI has been fed information based on an opaque backstory. 

So my advice to my team? Just be smart. I’m probably more bleeding edge than most – I’m diving into the promise of AI. But I’m directing my team to be careful.

This is why we’re implementing AI, testing it on our network’s security, but still leaving the heavy decision-making to the team. It’s one thing when AI can help diagnose an illness – but another thing altogether when you allow AI to automatically administer medications based on its diagnosis.

And even with this “limited” use – we’ve seen enormous benefits! AI has reduced the need for manual labor in detecting and responding to outside and inside threats. AI accelerates our process, enhances the efficiency of security operations, and enables quick decision-making based on real-time data analysis. As a new team member, AI is no slouch!

But, until we trust it completely, we implement the right safeguards and protections, go in with eyes wide open, and learn. The benefits of AI will certainly outweigh its challenges, and even its risks.

As AI continues to expand its footprint, Shawn’s insights offer an acute reminder that a balanced approach is necessary, one that marries the strengths of AI with human expertise. Ethical considerations, biases, and automated decision-making all represent a new frontier with its own unique set of challenges.

AI in network security is a fascinating domain, not without its risks but full of untapped potential. Much like the realm of AI itself, it is continuously evolving, and mastering this brave new world requires vigilance, wisdom, and the right blend of human-AI collaboration.

About the Author

by Shawn Edwards

Chief Security Officer
