Red Hat Research Quarterly

Research project updates—May 2021

Here are a few highlights of recent research results from the US. There are many more active projects than we can cover here, so be sure to check our listings for additional projects. We will highlight research collaborations from other parts of the world in future editions of RHRQ. Contact us for more information on any project.

You can join live Research Interest Group (RIG) meetings each month to discuss new project proposals and review the latest results from other research collaborations. Subscribe to the mailing list to stay current on the interest group meetings.

PROJECT:  FPGAs in Large-Scale Computer Systems

Academic investigators: Martin Herbordt, Robert Munafo, Orran Krieger, Rushi Patel, and Mayank Varia (Boston University)

Red Hat investigators:  Ulrich Drepper and Ahmed Sanaullah

Investigators on this project recently moved closer to their goal of enabling FPGA (Field Programmable Gate Array) application development by high-level language programmers, especially those working in datacenter and edge environments, using only open source tools. They demonstrated a hardware implementation of secret sharing using FPGAs and assessed the scalability of the design against comparable software-only implementations. Their results, shared in a paper presented at the 30th International Conference on Field-Programmable Logic and Applications (FPL 2020) by Pierre-François Wolfe, are the first-ever results reported for secret sharing multiparty computation (MPC) on FPGA hardware.

MPC facilitates shared utilization of datasets gathered by different entities by enabling data from several sources to be used in a secure computation. Only the result is revealed, while the original data is protected. The presence of FPGA hardware in datacenters can provide accelerated computing as well as low-latency, high-bandwidth communication that bolsters the performance of MPC and lowers the barrier to using MPC for many applications. The group’s most recent work demonstrated that secret sharing outperformed state-of-the-art methods for implementing MPC in the datacenter. Using 5.5% of FPGA fabric in a consumer cloud environment, the design can match the throughput of an optimized 20-core CPU implementation, saturating a typical 10 Gbps network connection. This result scales with available bandwidth: a single FPGA is able to saturate a 200 Gbps link with a throughput of ~26 million AES operations per second.
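To make the idea concrete, here is a minimal sketch of additive secret sharing, the core primitive behind this style of MPC. This is an illustrative toy, not the group's FPGA design: the field modulus, party count, and example inputs are all assumptions chosen for clarity. Each input is split into random shares that individually reveal nothing, yet parties can add their shares locally and jointly reconstruct only the sum.

```python
import secrets

# Illustrative prime modulus for the finite field (an assumption, not
# the parameter used in the paper's FPGA implementation).
P = 2**61 - 1

def share(value, n_parties):
    """Split `value` into n additive shares that sum to value mod P.
    Any subset of fewer than n shares is uniformly random."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; only the sum of all shares reveals the secret."""
    return sum(shares) % P

# Two data owners each secret-share a private count among three parties.
# Each party adds the shares it holds; reconstructing the share-wise sums
# reveals only the total (205), never the individual inputs.
a_shares = share(120, 3)
b_shares = share(85, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 205
```

The appeal of this scheme for FPGAs is that the per-share arithmetic is simple modular addition, which maps naturally onto hardware pipelines and network-attached fabric.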

PROJECT:  Kernel Techniques to Optimize Memory Bandwidth with Predictable Latency

Academic investigators: Parul Sohal, Renato Mancuso, and Orran Krieger (Boston University); Rohan Tabish (University of Illinois at Urbana-Champaign)

Red Hat investigators:  Ulrich Drepper and Larry Woodman

Parul Sohal presented a paper with her co-authors Rohan Tabish, Ulrich Drepper, and Renato Mancuso titled “E-WarP: a system-wide framework for memory bandwidth profiling and management” at the 41st IEEE Real-Time Systems Symposium (RTSS 2020).

The paper, which won the RTSS Best Student Paper award, used a profiling approach to model memory behavior and understand memory utilization with enough detail to predict application behavior under controlled conditions. As summarized in the paper, “Profiling represents a substantial refinement of measurement-driven approaches, where fine-grained knowledge of the interaction between applications and the platform is collected and leveraged. Conversely, we treat the DRAM subsystem, as much as possible, as a black box. By shifting our emphasis on a more precise representation of memory bandwidth requirements of applications and by ensuring that the DRAM subsystem operates below its saturation threshold, we demonstrate that highly accurate predictions on the behavior of tasks operating on CPUs and accelerators can be made.” The E-WarP framework provides techniques to profile and bound the temporal behavior of application workloads on CPUs and accelerators, with tools and details available in two GitHub repositories.
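The profile-then-bound idea described above can be sketched very simply. The snippet below is a hypothetical illustration, not code from the E-WarP repositories: the task names, bandwidth figures, and saturation threshold are all invented. The point it shows is the paper's central premise that if each task's peak memory bandwidth is measured offline, a task set can be admitted only when aggregate demand stays below the DRAM saturation threshold, which is what makes timing predictions reliable.

```python
# Assumed DRAM saturation threshold in MB/s (illustrative, platform-specific).
SATURATION_BW_MBPS = 12_000

def admissible(profiles, threshold=SATURATION_BW_MBPS):
    """Admission check on offline bandwidth profiles.

    profiles: dict mapping task name -> peak memory bandwidth (MB/s)
    measured during a profiling run. Returns True if the combined
    demand keeps the DRAM subsystem below its saturation threshold.
    """
    return sum(profiles.values()) <= threshold

# Hypothetical profiled workloads on a CPU+accelerator platform.
tasks = {"dnn_inference": 4_800, "sensor_fusion": 3_200, "logging": 900}
print(admissible(tasks))  # True: 8,900 MB/s total is below the threshold
```

The real framework is considerably more involved (it regulates bandwidth at runtime rather than merely checking a sum), but this captures why operating below saturation is the key invariant.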

PROJECT:  Open Telemetry Working Group

Academic investigators: Raja Sambasivan (Tufts University)

Red Hat investigators:  Marcel Hild

Open source partners: OpenInfra Labs, Open Cloud Testbed, and Mass Open Cloud

Red Hat and academic participants have been collaborating for some time to build, operate, and share infrastructure that demonstrates open source cloud operations at scale, most recently including the Operate First initiative at OpenInfra Labs (OIL). A significant step forward in this effort was the recent formation of the Open Telemetry Working Group, with participants from several different universities, OIL, and Red Hat. The group seeks to build upon a realistic production-grade environment, operated by IT operations and used by end users and researchers alike. By exploring ways to provide access to telemetry data for research and open operations engineering in this environment, the group hopes to enable new research and development projects, in much the same way that the open source movement enabled new options for software development. Examples of research projects that would benefit from this type of environment include creating new debugging tools and visualizations, using telemetry data to optimize workload performance, and improving telemetry data itself. Monthly meetings are open to all interested participants. The group charter and working notes are shared publicly, along with a GitHub repository.

PROJECT:  Deploying End-to-End, Fully Virtualized, and Open Source 5G Platforms on OpenShift

Academic investigators: Tommaso Melodia, Abhimanyu Gosain, and Michele Polese (Northeastern University)

Red Hat investigators:  Feng Pan

Traditional cellular networks are mostly based on closed source, inflexible architectures, in which functionalities are baked directly into hardware components (e.g., the base stations). This black-box approach leads to vendor lock-in and is unable to adapt to the rapidly varying network, traffic, and topology dynamics that characterize 5G networks, resulting in suboptimal network performance. In the last few years, a number of consortia, primarily led by telcos, have been promoting solutions to overcome this lock-in by pushing equipment manufacturers to produce open hardware that can be (i) dynamically programmed via software and (ii) seamlessly integrated—through open interfaces—with a network architecture consisting of components provided by multiple vendors. The resulting network softwarization allows telcos to directly program algorithms and policies to optimize network behavior in real time, based on current conditions and requirements (e.g., traffic demand, Quality of Service [QoS], and latency), while opening the network to third-party vendors.

The goal of this new project is to develop an experimental open source platform that merges open, reprogrammable software and hardware components to test and deploy fully virtualized 5G networks. The platform builds on Red Hat OpenShift and large-scale national wireless experimental facilities:

  • Arena—a 64-antenna SDR-based ceiling grid testbed for sub-6 GHz radio spectrum research
  • PAWR—Platforms for Advanced Wireless Research, a National Science Foundation-funded program 
  • Colosseum—a massive radio-frequency (RF) and computational facility developed by the Johns Hopkins Applied Physics Lab to support the Defense Advanced Research Projects Agency’s (DARPA) Spectrum Collaboration Challenge

Ideally, the project will also connect to the Open Cloud Testbed, which can provide resources for building core network and datacenter testbeds. The project will develop automated pipelines using OpenShift to build, deploy, and manage these complex systems, combining radio, compute, storage, and networking resources into dynamic experimental testbeds that can cope with the tight real-time requirements of experiments with cellular networks.
