A Business Of Experience: Exploring the Distributed Edge for Media Companies, Content Providers, Manufacturers and Digital Commerce Services
Innovative IT service delivery relies on infrastructure that goes through cycles of centralization and decentralization about every twenty years.
- Back in the 60s and 70s, the mainframe (centralized compute and storage) was the foundation of every operational and end-user service, and anyone who wanted to release an innovative business application did it on a mainframe.
- In the 80s and 90s, we had the client-server model, which delivered decentralized compute, storage, and application services into the hands of end-users, so operating system and client application innovations drove business advantage.
- In the early 21st century, the Internet exploded, and client applications gave way to innovative, centralized web services powered by enterprise data centers, SaaS providers, and public clouds.
Looking at the timing of those shifts, we seem ripe for another change. And sure enough, our customers are telling us that the infrastructure underpinning customer experience and service delivery is going to change again, and they’re trying to work out what that change will look like.
But we know a few things about the change.
First, it’s a result of shifting expectations. As discussed in Part 1 of this two-part series, organizations are under pressure to deliver transformative experiences to their customers that are powered by masses of data and computation, are latency-sensitive, and often blur the lines between media, content, and retail — or to put it another way, they’re evolving into businesses of experience, not product or service. For example, many brick-and-mortar retailers are forced to compete with online retailers in order to stay in business and need new in-store digital experiences to attract and retain customers.
Second, it’s a result of problems that can’t be solved with conventional infrastructure. Centralized data centers and clouds sometimes struggle to support these compute and data-intensive experiences at scale without service quality issues. More and more organizations are deciding that delivering these services with high service quality and reasonable cost requires another approach to IT infrastructure.
And, looking at the bullets above, you’d naturally think that these transformative, innovative services would be powered by some kind of decentralized infrastructure.
You’d be half right.
Whether you are a retailer who gathers millions of data points to develop and refine real-time user experiences, both in-store and online, a manufacturer who relies on thousands of IoT sensors to feed analytics for your manufacturing arm’s just-in-time supply chain, or a digital commerce company planning to engage in hyperlocal augmented reality marketing in an emerging metaverse, you need infrastructure that’s a mix of centralized and decentralized.
Or, to put it another way, you need distributed edge infrastructure.
What is distributed edge infrastructure? Instead of filling a single data center with servers and storage, infrastructure is spread out, closer to the locations where services are delivered (stores, towns, end-user devices). With distributed edge infrastructure, servers might live under every POS system in your stores, in a server closet at your manufacturing facility, or across several racks in dozens of colocation facilities powering your content distribution. Almost any distribution of infrastructure is possible.
Distributed edge is emerging as the next foundation for innovative service delivery because it solves several problems of centralization, including:
- High latency: Increasingly, organizations don’t control every aspect of the network, but one thing is certain: the farther network traffic travels, the more latency it accumulates. Many emerging services are latency-sensitive and simply can’t be delivered with adequate quality of service from a centralized data center.
- Bandwidth limitations: Bandwidth from any data center or cloud has its limits. If your service has to support users at massive scale, it’s possible to simply run out of bandwidth unless you distribute service delivery across many smaller data centers and devices.
- Downtime: As we’ve seen again and again, a simple misconfiguration in a central data center or cloud can result in widespread downtime. By bringing the infrastructure needed for service delivery to endpoints and smaller, regional data centers, organizations can create a mesh or net of service delivery with many interconnected delivery points, making it far less susceptible to downtime.
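To make the latency point above concrete, here is a minimal back-of-the-envelope sketch. It estimates only best-case propagation delay over fiber (assuming light travels roughly 200 km per millisecond in optical fiber); the distances are illustrative, and real-world latency is higher once queuing, serialization, and routing overhead are added.

```python
# Rough propagation-delay estimate. Distances are hypothetical examples;
# this ignores queuing, serialization, and routing, so real latency is higher.
FIBER_KM_PER_MS = 200.0  # light travels ~200 km per ms in optical fiber


def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS


central = round_trip_ms(2000)  # user to a distant central data center
edge = round_trip_ms(50)       # user to a nearby metro edge site

print(f"central: {central:.1f} ms, edge: {edge:.2f} ms")
# -> central: 20.0 ms, edge: 0.50 ms
```

Even this optimistic arithmetic shows why interactive, latency-sensitive experiences push infrastructure toward edge locations: the physics of distance alone can consume a large share of a tight latency budget.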
Having said all this, let’s be clear — when we said that you’d be half right to assume that the next wave of innovation is decentralized, that’s because distributed edge infrastructure coexists with, and often relies on, centralized infrastructure. Most of today’s distributed edge still relies on centralized data centers or clouds for many functions: full-fledged data analytics, command and control, and backup and archive, to name a few.
Our customers tell us that distributed edge computing environments will become their new normal, but they’re not there yet. Most organizations are just evaluating their options, thinking through challenges and considerations as they evaluate what they want to do (the experience) and how they want to do it (the infrastructure requirement). Some considerations include:
- Distributed edge infrastructure must be tied together with a deep distributed network that provides the right bandwidth and connectivity, not just hub and spoke (data center to edge), but also a net (edge location to edge location) for high availability and redundancy. Not all network providers have adequate connectivity depth and breadth to address this need, and some don’t have the network intelligence to route around problems.
- It may not be cost-effective to put massive amounts of compute and storage in every edge location. Most companies are evaluating an approach that puts limited computing power and storage in every edge location (a server or two perhaps), then aggregates data and processing in a metro-level or regional node, which could be the size of a server closet or a pair of racks in a micro data center.
- Defining a distributed edge infrastructure can attract or repel third parties who may want to develop complementary services on your platform, including in-store advertising and applications they want to monetize. Favoring open source tooling over proprietary technologies is one way to attract third-party engagement.
- Stakes are high and partnerships matter. Choosing the right partners for everything from networking to orchestration is an essential part of a successful approach and compelling outcomes.
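The tiered layout described in the second consideration — lightweight processing at each edge location, with aggregation at a metro or regional node — can be sketched in a few lines. This is purely illustrative: the function names, data shapes, and anomaly threshold are assumptions for the example, not a specific product design.

```python
# Illustrative sketch of edge-to-regional aggregation. All names and the
# threshold are hypothetical; real pipelines would add transport, auth, etc.
from statistics import mean


def edge_filter(readings: list[float], threshold: float = 100.0) -> dict:
    """At the edge: keep a compact local summary and flag anomalies on-site."""
    return {
        "count": len(readings),
        "avg": mean(readings),
        "anomalies": [r for r in readings if r > threshold],
    }


def regional_aggregate(edge_summaries: list[dict]) -> dict:
    """At the metro/regional node: merge edge summaries into one compact
    record before forwarding upstream to the central data center or cloud."""
    total = sum(s["count"] for s in edge_summaries)
    weighted_avg = sum(s["avg"] * s["count"] for s in edge_summaries) / total
    return {
        "sites": len(edge_summaries),
        "total_readings": total,
        "avg": weighted_avg,
        "anomalies": sum(len(s["anomalies"]) for s in edge_summaries),
    }


store_a = edge_filter([98.0, 101.5, 99.2])  # one edge site's sensor readings
store_b = edge_filter([97.0, 96.5])
print(regional_aggregate([store_a, store_b]))
```

The design choice this models is the one in the consideration above: raw data stays and shrinks at the edge, and only small aggregates traverse the network, which is what makes a server-or-two footprint per location plausible.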
At Zayo, we work with organizations, from startups to the Fortune 500, to define, deploy, and deliver next-generation experiences built on distributed edge technologies. With our unique, diverse, high-performance routes connecting major cities and a large number of edge locations, we provide reliable, high-bandwidth, low-latency connections for transformative service delivery.