5Qs on 5G: Red Hat

Our latest 5Qs on 5G comes from Chris Wright, Chief Technologist at open source and virtualisation specialist Red Hat.

Red Hat’s OpenStack-based NFV Infrastructure platform is used by the likes of Alcatel-Lucent (now Nokia) for its CloudBand NFV solution. The company is also a member of open source projects for NFV and SDN such as OPNFV and OpenDaylight.

Here, Wright provides some open and thoughtful answers to questions intended to throw some light on the interplay between NFV and 5G. In terms of what Red Hat is doing, what is specific to 5G? How will the vendor landscape change, and what does 5G demand of control and core layer technologies?

Wright’s view is that 5G will bring with it economic and performance requirements that will only be met with the implementation of truly open systems that keep service providers from repeating past mistakes.

What is Red Hat’s role as 5G specifications develop?

Red Hat is contributing time, resources and code to related industry forums like ETSI’s NFV specification group and the Open Platform for NFV (OPNFV) open source initiative. We are also working bilaterally with the major telco service providers, vendors and system integrators to foster alignment on platform technologies for 5G, including management and orchestration systems and, in particular, the software-defined cloud infrastructure (across compute, network and storage).

Through our community work, we are also enabling our partners to leverage the innovation potential of open source and open source development processes, and we support them in the migration from traditional physical or virtualized system designs to cloud-native system designs.

In doing so, we are helping to define and build the foundation on which our partners can then build higher-layer, telco-specific management systems, network functions like the 5G radio access, and services.

Does Red Hat think it will need to develop new technology, capabilities and features of its own to address specific 5G requirements? How does “5G” change what Red Hat was doing in any case?

5G is typically associated with pushing the boundaries of current mobile networks by several orders of magnitude:

  • 1000x the traffic volume
  • 100x the peak data rate per user
  • 100x the number of connected devices
  • 1/10x the latency

Currently, the radio access link can often act as a bottleneck, so the majority of industry R&D focuses on new radio access technologies. Eventually, though, you will also need a platform and services capable of processing the control and data traffic.

Designing highly reliable, ultra-low latency systems on tailored hardware and at a small scale (from embedded systems to appliances consisting of a handful of compute blades) is non-trivial. But as the industry has learned from Amazon, Google and other web-scale companies, this is a completely different game at scale. At the massive scale of 5G, you need to reconsider how you meet your service availability requirements. A traditional hardware-centric view of infrastructure components that provide 5 or 6 “nines” of availability breaks down at scale. You have to depart from the old “carrier-grade” infrastructure thinking and embrace the insight that infrastructure will fail and your services have to be designed to be resilient to that – and Red Hat can provide the experience and enabling technologies for building such services.
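
A rough, hypothetical illustration of that point (the figures below are invented for the sake of the example, not Red Hat benchmarks): a service spread across a few moderately reliable, independent replicas can exceed the availability of a single “five nines” appliance.

    # Illustrative sketch only: compares one "five nines" appliance with a
    # service that stays up as long as at least one of N commodity replicas
    # is up (failures assumed independent). All figures are hypothetical.

    def redundant_availability(per_node: float, replicas: int) -> float:
        """Probability that at least one of `replicas` independent nodes is up."""
        return 1.0 - (1.0 - per_node) ** replicas

    single_appliance = 0.99999   # a traditional "five nines" box
    commodity_node = 0.999       # a much less reliable commodity node

    for n in (1, 2, 3):
        print(f"{n} commodity replica(s): {redundant_availability(commodity_node, n):.9f}")
    print(f"single appliance:       {single_appliance:.9f}")

    # Three 99.9% nodes already yield ~0.999999999 for the service as a whole,
    # which is the point: design the service to survive infrastructure failure
    # rather than chasing ever more availability from individual boxes.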

5G’s ultra-low latency targets (the industry is discussing end-to-end latencies between services and service consumers on the order of 1ms round-trip) are the real challenge, though. They not only require a new radio access technology; the physical limits of the speed of light also mean that some services, including their compute and storage, will have to move closer to the users, and their share of the latency budget will be much smaller than today. This means more decentralisation and improvements in management and orchestration of infrastructure and workloads. We also need more work on further reducing response times of massively scalable distributed systems.
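
A back-of-the-envelope calculation shows why the speed of light forces this decentralisation. The sketch below assumes signal propagation in optical fibre at roughly two thirds of the speed of light and ignores all queueing, switching and processing delays.

    # Back-of-the-envelope check on the 1 ms round-trip figure quoted above.
    # Assumes fibre propagation at ~2/3 the speed of light in vacuum and
    # ignores every other source of delay.

    SPEED_OF_LIGHT_KM_S = 299_792                     # in vacuum
    FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # ~200,000 km/s in fibre

    rtt_budget_s = 0.001          # 1 ms end-to-end round trip
    one_way_s = rtt_budget_s / 2  # half the budget for each direction

    max_distance_km = FIBRE_SPEED_KM_S * one_way_s
    print(f"Max one-way fibre distance within a 1 ms round trip: ~{max_distance_km:.0f} km")

    # Roughly 100 km -- with nothing left over for the radio link, packet
    # processing or the application itself, hence compute and storage having
    # to move much closer to the user.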

Ironically, though, considerable investments over the last year also went into getting the cloud platform ready for the traditional, pre-5G workloads, which had never been designed for cloud, but are rather physical functions ported to a traditional virtualisation environment.

What will be the role of open source in 5G?

Open source will be the foundation for 5G. More service providers are demanding open source to promote open interfaces and accelerate innovation. The most advanced among them are driving their requirements – and in some cases even their solutions – directly into the relevant open source projects rather than waiting for their vendors to do so, which has helped them gain more influence over their destiny and their vendors’ respective roadmaps. This is the real power of open source: bringing users and developers together to evolve technology.

To leverage the true potential of open source, however, Red Hat believes that service providers need to demand that solutions which their vendors label as open source are – in fact – 100% “upstream”, i.e. all changes have already been fully contributed back and integrated into the original open source project. Otherwise, they risk lock-in when the community’s solution starts diverging.

Some vendors are still keeping parts of their solutions proprietary for an unspecified amount of time under the pretext of making innovation available more quickly to service providers, claiming that driving innovation into open source projects is too slow.

While it’s true that thorough design and implementation reviews by open source community members require time and, potentially, cross-project coordination, one has to put this into perspective. It is an upfront investment in quality and long-term maintainability. Consistent engagement with the community builds trust, credibility and efficiency that can mitigate that cost, enabling the delivery of critical NFV features across multiple projects within a single six-month release cycle.

Will 5G networks bring about a necessary change in the vendor ecosystem – i.e. partnership, collaboration, open software/interfaces, etc.? How does Red Hat think it will play a part in that?

When you look at current mobile networks, they are highly complex systems with telco-specific protocols and interfaces, with service-specific functionality baked into the architecture, and with a heavy technology backpack of multiple generations of technology. The R&D and IPR behind this poses a high market entry barrier to new players. Incumbent players are used to tightly controlling the whole, vertically integrated technology stack of their solutions, including the pieces they have OEMed from other vendors.

One lesson from the NFV transformation, which extends to 5G, is that telco service providers require disaggregation and the freedom to mix-and-match best-of-breed solutions to be able to compete in the market.

In the IT cloud space, there’s a lot of innovation coming from startups and the open source communities. It is common to work jointly on platforms like OpenStack that foster the creation of ecosystems of big and small players, proprietary and open source, each of which can add value in their own niche. For 5G, large, innovative ecosystems and collaboration are certainly desirable, but this is also still somewhat new territory for current telco service providers and vendors.

We often see network slicing, NFV, virtualisation talked about in terms of enabling 5G. But is it helpful to think of these as 5G technologies?

Does Red Hat see itself as developing specific “5G” technologies, or rather of technology that can be used in a number of environments, including eventually 5G?

Network slicing, NFV and virtualisation are, at their core, fundamentally about transferring established IT cloud concepts – commoditisation, resource pooling, scale-out architectures and total automation – to networking and telco use cases. And while these telco use cases introduce new and more stringent requirements, none of the attributes associated with these requirements – “carrier-grade availability”, “military-grade security”, “line-rate packet processing”, “real-time virtualisation”, etc. – are really unique to telco. In fact, our enterprise customers from other verticals like finance, big data analytics, and those exploring the Internet of Things have been voicing the same requirements.

For 5G, it will be vital for service providers not to repeat past mistakes of growing complex and inflexible architectures from technologies that had been custom-tailored for carriers by a small group of vendors. Past examples include ATM networks, the IP Multimedia Subsystem, Advanced TCA server hardware, and the “Carrier-Grade” fork of Linux.

Red Hat believes that, like many successful open source projects, the foundations for 5G should be built on open, cross-industry best-practice hardware and software platforms – instead of being provided by an exclusive club of vendors. This will allow service providers to focus on adding value through their operational excellence, their radio and transport network assets, and their bouquet of partners and services.
