Cloud RAN Accelerators: which one is in the lead, and why? - Ericsson


Service providers' key expectations for 5G, and on the road toward 6G, are to successfully manage immense traffic growth and to leverage mobile networks for new use cases. To put these expectations into concrete numbers: projections say that 5G subscriptions will reach 1 billion and average monthly usage per smartphone will surpass 15 GB by the end of this year, then grow to 40 GB by the end of 2027.

As mid-band brings high capacity and the potential to explore new, bandwidth-hungry use cases, new questions arise about how to get the most out of 5G in Cloud RAN. The answers shape service providers' strategies for designing their 5G networks and selecting the best RAN processing option.

When it comes to processing, different parts of the RAN software stack have different requirements, and some elements are highly time-critical. This leads to a discussion about the best way forward for implementing RAN computing.

What are Cloud RAN processing needs?

Within the Cloud RAN compute platform, the most demanding computations are offloaded to specialized hardware that accelerates compute-intensive functions. The lower you go in the protocol stack, the higher the processing demand: layers 1 and 2 combined account for about 90 percent of it.

Now, let's do some comparisons and calculations.

The processing needed for a fully loaded, low-band cell on a commercial-off-the-shelf (COTS) server, excluding processing for the operating system and common functions, is roughly one core. For a fully loaded, mid-band cell without any accelerator, this would be about 16 cores; in the downlink, that is approximately 20 times the demand of a low-band cell.
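To make the comparison concrete, the figures above can be turned into a back-of-envelope dimensioning calculation. This is a minimal sketch using only the per-cell numbers from the text; the cell mix and the overhead allowance for the operating system and common functions are hypothetical illustration values, not Ericsson dimensioning rules.

```python
# Per-cell core estimates from the text: ~1 core per fully loaded
# low-band cell, ~16 cores per fully loaded mid-band cell (no accelerator).
CORES_PER_LOW_BAND_CELL = 1
CORES_PER_MID_BAND_CELL = 16

def server_cores_needed(low_band_cells: int, mid_band_cells: int,
                        overhead_cores: int = 4) -> int:
    """Estimate total cores for a server hosting a mix of cells.

    overhead_cores is a hypothetical allowance for the operating system
    and common functions, which the per-cell figures above exclude.
    """
    return (low_band_cells * CORES_PER_LOW_BAND_CELL
            + mid_band_cells * CORES_PER_MID_BAND_CELL
            + overhead_cores)

# Example: three low-band cells plus two mid-band cells.
print(server_cores_needed(3, 2))  # 3*1 + 2*16 + 4 = 39
```

The example makes the scaling problem visible: without acceleration, adding a handful of mid-band cells dominates the server's core budget, which is exactly what offloading to an accelerator is meant to relieve.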

One of the most processing-heavy functions is forward error correction (FEC) in layer 1. However, layer 1 FEC is well defined in 3GPP, for example, low-density parity-check (LDPC) coding for the new radio (NR) data channel, which creates system design opportunities to offload this well-defined but compute-intensive task to the accelerator.
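The "well defined" property is what makes FEC such a good offload candidate: validity of a received word is a fixed linear-algebra check, not vendor-specific logic. The toy example below only illustrates the parity-check principle; real NR LDPC (3GPP TS 38.212) uses large, structured matrices and iterative decoding, and the small matrix H here is a hypothetical illustration.

```python
# Toy illustration of the parity-check principle behind LDPC codes:
# a received word c is consistent with the code iff H @ c = 0 (mod 2).

# A small, hypothetical parity-check matrix H (3 checks on 6 bits).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(codeword):
    """Compute H * c over GF(2); an all-zero syndrome means no
    parity check is violated."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

valid = [1, 0, 1, 1, 1, 0]   # satisfies all three checks
corrupted = valid[:]
corrupted[0] ^= 1            # flip one bit

print(syndrome(valid))       # [0, 0, 0]
print(syndrome(corrupted))   # [1, 0, 1]: checks 1 and 3 fail
```

Because the computation is fixed by the standard, a hardware block can implement it at scale without constraining how the rest of the stack is written in software.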

A key to tackling the processing challenges is finding the right balance between cloud infrastructure and accelerators.

So, what are the accelerator types and what's the difference between them?

While Selected Function Hardware Acceleration is evolving into an integrated architecture, Full Layer 1 Acceleration takes place on an auxiliary PCIe board. Considering the needs for energy efficiency, design flexibility, portability, and support for an ecosystem of Cloud RAN suppliers on common cloud infrastructure, Selected Function Hardware Acceleration currently offers the best option for building high-performing Cloud RAN networks.

In the case of the Selected Function Hardware Accelerator, the CPU is free to use its cycles on other tasks while the accelerator works on the data to be accelerated. When the CPU receives the processed data back from the accelerator, it can switch to the original processing context and continue the pipeline execution until it reaches the next function to be accelerated. Selected Function Accelerators require well-defined APIs to enable ecosystem adoption, such as the Data Plane Development Kit (DPDK). To minimize data transfer over this API, the acceleration can be integrated within the CPU chip.

One of the key aspects is that only selected tasks with a well-defined API (and little to no need for flexible programmability) are offloaded to the accelerator, striking a balance between full software programmability and heavy-compute offload. In other words, while the Selected Function Hardware Accelerator does the heavy, well-defined computing, all other functions remain on the CPU, retaining the flexibility to apply advanced algorithms and unique IP.

As Selected Function Acceleration involves massive, latency-sensitive data transfers between the CPU and the accelerator, tight integration is preferred here.

With Full Layer 1 Acceleration, part of or the entire layer 1 pipeline passes to the accelerator. The fronthaul NIC may also be integrated in this accelerator mode, separating the fronthaul interface from the rest of the RAN function. This enables a less data-heavy interface between CPU and accelerator, and the acceleration solution can mix programmable and "hard" blocks. The greater programmability means implementations can differ significantly between vendors, so the acceleration approach affects how widely it can be supported across an ecosystem of applications, cloud platform software, server designs, and CPU dimensioning.

To sum up, the fundamental difference between the two acceleration approaches is the balance or trade-off between programmability and computing efficiency, with Selected Function Acceleration having an advantage here.

Why? Because Selected Function Acceleration achieves the best balance between the two, as it only offloads selected tasks to the accelerator. These tasks, such as FEC, have well-defined processing and APIs, making them perfect candidates to offload to the accelerator and free up CPU resources.

The importance of standardization for ecosystem adoption

From the perspective of creating a highly portable software application, the difference in acceleration approach can be quite important since it takes the support of an entire ecosystem to ensure the solution comes together. As an example, we mentioned DPDK and how it has played a key role in facilitating very fast packet processing. This has enabled service providers to move performance-sensitive applications, such as the backbone for mobile networks and voice, to the cloud.

Since 2010, when it was created as an open-source project, DPDK has been continuously growing with new contributors, patches, contributing organizations, and five major releases. Today, DPDK's community comprises hardware vendors, physical and virtual network driver developers, and various open-source organizations. DPDK supports all major CPU architectures and network interface cards from multiple vendors, thus emerging as the industry-standard API, and is recognized by key players in the Cloud RAN ecosystem. It also includes bbdev, the specific interface for baseband applications. Thus, the necessary support infrastructure for wide-scale adoption of Selected Function Hardware Acceleration technology is already in place.

One important aspect to mention here is that since 2021, O-RAN has specified a full Selected Function Hardware Accelerator API based on the widely adopted DPDK framework.

Since Full Layer 1 Acceleration provides a high degree of programmability, it comes with many different vendor implementations. The discussion around standardization then moves from defining specific hard blocks to a more generic interface specification between layer 1 and layer 2 of the RAN stack. This means the standardization of an acceleration component has a high impact on how the application's software stack is implemented if it requires redefining the layer 1 to layer 2 interface.

While there are ongoing efforts in the O-RAN Alliance WG6 to standardize this area, those efforts have not yet resulted in a standard that caters to mid-band deployments with Massive MIMO radios while also allowing a vendor-agnostic approach to the Full Layer 1 acceleration component.

What does the (imminent) future bring in the accelerators' evolution?

Further evolution of acceleration solutions is trending toward integrated hardware acceleration, which helps reduce energy consumption and lowers the latency between CPU and accelerator components. The fundamental approach is to consider the entire system, starting from the radios and Cloud RAN applications, and then decide which enhancements to build into the CPU instruction sets and where to place the accelerations. This will allow vendors to map the right acceleration hardware to the right part of the Cloud RAN workload.

Offloading CPU cores through integration will lead to increased performance, reduced number of processors, and savings in power consumption. It will bring the best combination of performance, power, and flexibility to service providers.

The best path forward

Acceleration solutions are far from their final development phase. They are still evolving and will continue to do so together with Cloud RAN itself.

So, based on the current state of acceleration technology, we believe that the Selected Function Hardware Accelerator offers the best path forward to deliver on operators' goals, namely – significant energy efficiency improvements, ecosystem support, and flexibility at reduced complexity, combined with lowering the costs and footprint of servers.

Read more

Cloud RAN Acceleration Technology (PDF)

Ericsson Cloud RAN

The four key components of Cloud RAN

How to get the most out of 5G mid-band in Cloud RAN

Evaluating processing options in RAN: Purpose built and Cloud RAN
