Red Hat Research Quarterly

Research at Devconf.us: Optimizing and automating the foundations of computing

Software foundations like operating systems and hypervisors—to say nothing of the server hardware itself—are boring. Or at least that’s how almost everyone working atop them wants them to be. Who wants an exciting foundation when you’re trying to get your job done?

But there’s still plenty of innovative engineering work going on in those low layers, and cutting-edge research too. The latter was highlighted at Devconf.us in talks by PhD students at Boston University, including Ali Raza, Tommy Unger, Han Dong, and Parul Sohal.

Low-level hardware and software optimizations take many forms

Ali Raza’s research (research.redhat.com/blog/research_project/unikernel-linux/) focuses on unikernels. The idea behind a unikernel is that you build your application together with the kernel it will run on, so the result is essentially a bootable app. Advantages include fast booting, a reduced attack surface, and a shorter path from the application to system calls. The Linux kernel used in this research has not yet been slimmed down to just the basics (a library kernel), but a great deal of progress has been made.
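To make that "shorter path to system calls" concrete, here is a minimal C sketch. The UNIKERNEL_BUILD macro and the ukl_write() function are hypothetical stand-ins for illustration, not the project’s actual interface; the point is only that a request which normally traps into the kernel can become an ordinary function call when application and kernel share one image.

#include <string.h>
#include <unistd.h>

#ifdef UNIKERNEL_BUILD
/* Hypothetical: in a unikernel build, the application is linked with the
 * kernel, so this resolves at link time to the in-kernel write routine and
 * the call is a plain function call with no user/kernel mode switch. */
long ukl_write(int fd, const void *buf, size_t len);
#define do_write ukl_write
#else
/* Conventional build: write() traps into the kernel via a system call. */
#define do_write write
#endif

int main(void)
{
    const char msg[] = "hello from a single address space\n";
    long n = do_write(1, msg, strlen(msg));
    return n < 0;
}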

Raza’s co-presenter, Tommy Unger, is working on the hypervisor layer. Like unikernels, hypervisors can be smaller than a full-blown operating system kernel and therefore offer a potentially smaller attack surface. Nonetheless, because they are both ubiquitous and essential, they are security-critical software and an attractive target for attackers. Virtual devices are a common site for security bugs in hypervisors. Unger’s work focuses on a novel way of fuzzing virtual devices (fuzzing is an automated software testing technique) that combines a standard coverage-guided strategy with further guidance based on hypervisor-specific behaviors.
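The coverage-guided half of that strategy can be sketched in a few lines of C. Everything below is a toy: vdev_mmio_write() and the edge counter stand in for the hypervisor’s virtual device and the fuzzer’s instrumentation, and the hypervisor-specific guidance that Unger adds on top is not shown.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for the real target and its coverage instrumentation. */
static uint64_t edges_hit = 0;

static void vdev_mmio_write(uint64_t reg, uint32_t val)
{
    /* Pretend certain register/value combinations reach new emulation code. */
    if ((reg ^ val) % 257 == 0)
        edges_hit++;
}

int main(void)
{
    uint64_t best_coverage = 0;
    srand(1);

    for (int iter = 0; iter < 100000; iter++) {
        /* One fuzzing input: a random, word-aligned register write. */
        uint64_t reg = ((uint64_t)(rand() % 0x1000)) & ~0x3ULL;
        uint32_t val = (uint32_t)rand();

        vdev_mmio_write(reg, val);

        /* Coverage guidance: inputs that reach new code are worth keeping
         * and mutating further; here we simply report them. */
        if (edges_hit > best_coverage) {
            best_coverage = edges_hit;
            printf("iteration %d: new coverage %llu (reg 0x%llx)\n",
                   iter, (unsigned long long)best_coverage,
                   (unsigned long long)reg);
        }
    }
    return 0;
}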

Parul Sohal’s research interests lie in managing resources at different levels of the memory hierarchy, including their quality of service. Her goal is better resource utilization and isolation, avoiding the contention that degrades application performance below a minimum quality of service. Sohal’s work takes advantage of recent Intel processor features, such as reserving a subset of the cache for a given program and throttling memory bandwidth. Combined with containers and control groups (cgroups), these features can help keep programs from interfering with each other, something often called the noisy neighbor problem.
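On Linux, those Intel features (Cache Allocation Technology and Memory Bandwidth Allocation) are exposed through the resctrl filesystem. The sketch below is not Sohal’s tooling, only an illustration of the mechanism: it assumes the hardware supports both features, that resctrl is mounted at /sys/fs/resctrl, and that the program runs as root; the group name and masks are arbitrary examples.

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_file(const char *path, const char *text)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = fputs(text, f) >= 0 ? 0 : -1;
    fclose(f);
    return rc;
}

int main(void)
{
    char buf[32];

    /* Creating a directory under /sys/fs/resctrl creates a new control group. */
    if (mkdir("/sys/fs/resctrl/demo_group", 0755) != 0)
        perror("mkdir");

    /* Limit tasks in this group to 4 ways of the L3 cache (bitmask 0xf) and
     * 20% of memory bandwidth on cache domain 0 (a multi-socket machine
     * would list each domain). */
    write_file("/sys/fs/resctrl/demo_group/schemata", "L3:0=f\nMB:0=20\n");

    /* Put the current process into the group. */
    snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
    write_file("/sys/fs/resctrl/demo_group/tasks", buf);

    return 0;
}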

Han Dong’s work highlights some of the challenges of tuning software and hardware. Dong observes that a modern network interface card (NIC), such as the Intel X520 10 GbE, is complex, with thousands of hardware registers that control every aspect of the NIC’s operation, from device initialization to dynamic runtime configuration. That’s far too many tuning parameters for a human to configure by hand, and only about a third of them are even initialized by today’s Linux kernel. The goal of Dong’s research is to automate the tuning of this NIC using machine learning.
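Even a single knob illustrates why automation matters. The sketch below is not Dong’s method: it does a simple sweep rather than machine learning, and apply_itr() and measure_score() are toy stubs standing in for programming the NIC’s interrupt throttling rate and running a benchmark. It only shows the shape of the search that the ML work automates across far more parameters.

#include <stdio.h>

/* Toy stand-ins: a real harness would program the NIC register and run a
 * benchmark to measure throughput, latency, or energy. */
static void apply_itr(int itr_us) { (void)itr_us; }

static double measure_score(int itr_us)
{
    /* Fake response curve with a sweet spot around 60 microseconds. */
    double d = itr_us - 60;
    return 100.0 - 0.01 * d * d;
}

int main(void)
{
    int best_itr = 0;
    double best_score = -1.0e9;

    for (int itr = 10; itr <= 200; itr += 10) {
        apply_itr(itr);
        double score = measure_score(itr);
        printf("ITR %3d us -> score %6.2f\n", itr, score);
        if (score > best_score) {
            best_score = score;
            best_itr = itr;
        }
    }
    printf("best setting found: ITR = %d us\n", best_itr);
    return 0;
}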

What happens at the system level?

However, if we now step back and take a look at the bigger performance and optimization picture, a challenge emerges. While specific optimizations often happen at a very detailed micro-level—as in the case of the operating system, hypervisor, processor cache, or NIC—the real goal is to optimize at the system (or even the datacenter) level. And just as individual programs can suboptimally compete for resources on a single processor, so too can individual low-level optimizations lead to undesirable side effects at the global system level.

As Red Hat Senior Distinguished Engineer Larry Woodman puts it, “Several new CPU/hardware features whose implementation is not yet well understood are likely to conflict with each other when running different applications and benchmarks, causing nondeterministic performance behavior. 

“Understanding these patterns given so many variables soon becomes a daunting task for anyone. For this reason it’s likely that Red Hat Research will investigate automating this process by deploying artificial intelligence / machine learning (AI/ML) techniques and algorithms to uncover and attempt to fix a wide range of scenarios. A future project Red Hat Research is investigating involves using AI/ML for overall system configuration and, ultimately, automated tuning. There are so many parameters that adversely affect each other that manual or even profile-based tuning is not effective or even possible.”

You can follow Red Hat Research projects, and even suggest a project based on open source software, at research.redhat.com. Recordings of Devconf presentations are available at www.youtube.com/c/DevConf_INFO/playlists.
