Three years ago, I opened my first column in the first issue of this magazine by expressing my sense of good fortune at being able to start something completely new: not just a magazine, but an entire organization devoted to research on computer infrastructure done entirely in open source. Looking back on it today through a post-pandemic lens, I’m surprised to find that my optimism was not misplaced. Despite the uncertainty of building an organization and a research model from scratch, and the Big Surprise of the world more or less shutting down for two years, we have had some remarkable successes at Red Hat Research since 2019. We’ve learned a lot about how to connect with researchers remotely, brought some great research work into productive use, and expanded our reach in the United States, Europe, and Israel to include many more universities than I ever imagined we would find time to work with.
Of course, we have also made some interesting mistakes. I wildly underestimated the effort required to create collaborations among universities in different cities using Red Hat as a conduit. It turns out such efforts are rarely successful, mostly because of the management and communication overhead involved. To correct this, we have focused on a more decentralized model, which is producing results across the hybrid cloud space with surprisingly little conflict and overlap. We’ve also found that the gap between a research proof of concept and actual working code in an open source project is even larger than we thought. This limits our ability to take on new work, because there are only so many volunteer engineering days available to turn the work we’ve already committed to into something worthwhile. I suppose this is true of all software, in the end: it takes longer than you think.
Happily, returning to some of our earliest stories reminds me that there is a lot of value in persistence. For example, in our inaugural issue of RHRQ we introduced the wide variety of projects happening at the Red Hat Collaboratory at Boston University. This month’s cover interview features BU Prof. Ayse Coskun, whose proposal, AI for Cloud Ops, was among the first recipients of the Red Hat Collaboratory Research Incubation Awards. Red Hat Research’s $1 million grant to her group is the largest award we have ever given to a single project, and it’s no surprise. We (and our review panel) believe that the challenge of operations at scale is the greatest challenge confronting computing in general and open source computing in particular. We must learn how to operate large-scale systems and take that learning into the open, and the only way we will be successful is if we can use AI and learning systems to help us.
A crucial part of using AI for this work is covered in another of our featured articles, on the challenges of making stream processing efficient. Stream processing is a key part of AI Ops and many other AI systems: a system often needs to apply AI to draw inferences from a stream of data or events, rather than grinding through a large pool of data looking for answers. Newcastle University researcher and Red Hatter Adam Cattermole describes his work on a library designed to make optimizing stream processing more efficient and more repeatable. This kind of work will be critical as the number of nodes at the far edge sending data grows by leaps and bounds in the coming years.
Since its inception, RHRQ has run a number of pieces on open hardware, which should give you a sense of what an important area this is. This issue’s update, from Red Hatter Ahmed Sanaullah, is particularly compelling: he describes using an open source ISA (RISC-V) to make the custom circuits a programmer might design for their application easier to access from ordinary code. One of the signature features of RISC-V is its support for easily and securely extending the ISA, which means that using a softcore—a RISC-V processor mapped onto an FPGA—to provide a consistent interface to custom logic on the same FPGA board can be as simple as calling a function from code. The point of the work is to make it easier for developers to write for custom hardware. If Ahmed’s team is successful, in time developers will wonder how they ever managed without being able to write custom logic directly to a board and access it from their main application.
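To give a flavor of what "as simple as calling a function" might look like from the application side, here is a minimal sketch in C. The opcode encoding, the HAVE_CUSTOM_POPCNT guard, and the function name are all my own illustrative assumptions, not the actual interface from Ahmed's project; the idea is only that custom FPGA logic reached through a RISC-V softcore could hide behind an ordinary function, with a portable software fallback elsewhere.

```c
#include <stdint.h>

/* Hypothetical example: custom population-count logic on an FPGA,
 * exposed through a RISC-V softcore as a single custom instruction.
 * On other targets, an ordinary software implementation is used, so
 * application code calls popcount32() the same way everywhere. */
static inline uint32_t popcount32(uint32_t x)
{
#if defined(__riscv) && defined(HAVE_CUSTOM_POPCNT)
    uint32_t r;
    /* Assumed encoding: an R-type instruction in the custom-0 opcode
     * space (0x0b), wired to popcount logic on the same FPGA board. */
    __asm__ volatile (".insn r 0x0b, 0, 0, %0, %1, x0"
                      : "=r"(r)
                      : "r"(x));
    return r;
#else
    /* Portable fallback: clear the lowest set bit until none remain. */
    uint32_t count = 0;
    while (x) {
        x &= x - 1;
        count++;
    }
    return count;
#endif
}
```

The payoff of this style of interface is that the hardware acceleration becomes an implementation detail: the same source builds on a laptop, a server, or the softcore-plus-custom-logic board.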
Finally, I want to highlight a short review from US Research Director Heidi Picher Dempsey on our signature program for connecting university researchers and Red Hat engineers, our Research Interest Groups (RIGs). Our two RIGs, one for Greater New England and one for Europe and Israel, meet regularly to review ideas for new research we should consider funding or highlighting. The meetings are short and dedicated to technical discussion and debate. If you are interested in what’s happening on the cutting edge, reach out to Heidi to learn more.