Research isn’t necessarily about getting the right answers, but about asking the right questions. After five years as the editor of RHRQ, I’d like to think I’ve gained some perspective on the questions we want this publication to address. To me, the big-picture questions we’re trying to answer four times a year, across multiple disciplines and continents, are:
- What—and who—is new or newsworthy in the world of industry-academia research based on open source development?
- What unique advantages distinguish this kind of collaboration from other collaborative partnerships?
- How can we optimize these collaborations to have the biggest impact not only for businesses or universities, but for society at large?
In the past year or more, we’ve had to add another question to these three: How will AI change all of the above?
I’m excited to say that this issue of RHRQ tackles these questions head on, with some fresh perspectives. For instance, in each issue of RHRQ we interview someone in research, usually university based, about their work. The interview is the centerpiece of each magazine because it allows us to make connections between technological innovations and the contexts that made them possible: the backgrounds of the people involved, the interdisciplinary collaborations that become more than the sum of their parts, the past trends and future vision their work is situated in. Put differently, it’s not just about the results of open source research, it’s about open sourcing a research approach that’s proven very effective. And in this issue, you’re getting a double helping.
You’ll meet both Tomáš Vojnar, a long-time Red Hat Research collaborator who is now the head of the computer science department of Masaryk University (Czechia), and Akash Srivastava, the founding manager of the Red Hat AI Innovation Team who came to Red Hat by way of the MIT-IBM Watson AI Lab. Two PhDs, two deep theoretical thinkers, two different paths: Vojnar collaborates with industry engineers regularly but has chosen to stay in academia, while Srivastava found the opportunity to do research in an industry job. In each conversation, they address the impact AI has on research, in terms of opportunities, resource constraints, and partnership dynamics. If you read their stories side by side, you’ll see that although they each function in different spheres, both are finding ways to balance the freedom and creativity of the academic side of research with the industry push for real-world impact, on deadline, with profitable results—and both cite working with an open source company as key to their successes. As a professor, Vojnar has supported several open source projects in automated analysis and verification, including testing and dynamic analysis, that have been shared widely in the open source community and deployed for enterprise use. As an industry researcher at IBM and Red Hat, Srivastava developed the novel solution for synthetic data generation that became InstructLab, an open source project designed to put customizing LLMs within reach for users not trained in machine learning.
That said, as Vojnar points out, not all solutions developed in research are destined for life outside the lab. Vojnar is one of the developers of Perun, a performance analysis toolkit that began life with a small team of researchers at Brno University of Technology. The BUT team began working with Red Hat Research to enhance Perun with kernel-space analysis capabilities, then worked with the Red Hat Kernel Performance Engineering Team, responsible for kernel performance for Red Hat Enterprise Linux (RHEL). Their article in this issue, “Meet Perun: a performance analysis tool suite,” describes the development of the tool and its application in the RHEL use case, and it also very helpfully outlines the challenges and requirements for making a research tool usable for industry users. As the authors observe, addressing those challenges often drives further research and leads to new solutions—solutions that might not exist without the push and pull of industry-academia relationships.
Our other two technical features this issue focus on asking the right questions about how AI can bring value to computing systems. Simone Ferlin-Reiter, a Red Hatter who works with researchers at KTH Royal Institute of Technology in Sweden, asks the question “Can LLMs facilitate network configurations?” The short answer is a qualified yes—hopeful news for making network configuration less prone to human error and for limiting the outages caused by network misconfiguration. But the questions raised by the team’s research are the most valuable part of the story: How much impact does the batch size have on accuracy and cost? What is a tolerable balance between accuracy and cost? What opportunities are in reach, and what has to happen before we reach them?
In the article “Smarter AI, fewer resources: bringing cloud AI into real-time edge devices to unlock performance,” Boston University professor Eshed Ohn-Bar asks whether we can design edge systems that use machine learning to seamlessly balance cloud and local resources to optimize for real-time accuracy, efficiency, and safety across different situations. Short answer: again, yes. In fact, UniLCD, the framework described in the article, is currently being integrated into Red Hat OpenShift, providing a flexible solution for large-scale, real-world deployments across various communication and modeling configurations. The article asks one of the most exciting questions research can raise: what’s next? If reducing the energy consumption and cost of using powerful AI models at the edge is possible through solutions like UniLCD, could we extend it to domains like transportation, healthcare, or disaster response?
Red Hat Research and our collaborators get to engage in these questions because we provide the technology platforms and problem-solving that make it possible. US Research Director Heidi Dempsey has often been in the trenches with research projects reluctantly transitioning to new technology—say, migrating from VMs on OpenStack to Red Hat OpenShift and OpenShift Virtualization. Despite the benefits to be gained, the struggle, as they say, is real. Her column in this issue, “Making a research will: the human side of project migration,” provides a very clear, readable guide to the process.
I said research isn’t always about right answers, but if I could give a one-word response to my opening questions, I’d say “inclusion.” An open source development model makes room for ideas from multiple sources so the best ones find each other and get even better. Collaborating across disciplines in an open source way drives better solutions because everyone has the opportunity to win. We can optimize these collaborations by finding ways to get more people involved—bringing a diverse set of skills and bases of knowledge to bear, but also helping people access the resources needed to test, implement, and improve technologies in multiple ways and settings.
How does AI change all that? Maybe a better question is how all that will impact AI. Ethical, open, transformational AI will happen in part because we’re asking good questions and engaging lots of stakeholders to ask even more. So join us!