Red Hat Research Quarterly, August 2021

The right idea at the right time: networking researchers use open source for real-world results

We invited Red Hat Principal Kernel Engineer Toke Høiland-Jørgensen to interview Anna Brunström, currently a Full Professor and Research Manager for the Distributed Systems and Communications Research Group at Karlstad University, Sweden. Prof. Brunström has a background in distributed systems, but her main area of work in recent years has been computer networking. Their wide-ranging conversation covers programmable networking, open data, diversity in IT fields, and more.

About the author

Toke Høiland-Jørgensen

Toke Høiland-Jørgensen is a Principal Kernel Engineer working on networking and BPF. He holds a PhD on the topic of network performance and bufferbloat.

Toke Høiland-Jørgensen: Let’s start by talking about how open source and research interact. In the academic community there has been a trend toward open access and reproducibility. How do you view the relationship between open source and the academic community?

Anna Brunström: I think open source has a very important role here. If we talk about systems research or networking research, the availability of Linux® and FreeBSD, and the possibility to implement and test things in a real network stack, has had a large impact. If you want to implement a new feature in a protocol or some other mechanism, it would be extremely difficult to build an entire network stack just to experiment. But when the source is available, a PhD student or researcher can implement the feature in a real system and run experiments in simulated and real networks. The networking research community has made good use of this and gotten valuable results.

“My perception is that the open source community is quite willing to help. If they see a useful solution from academia, they are willing to offer their advice on how to get it integrated.” —Anna Brunström

There is also the possibility of having a real-world impact, right? When you develop open source solutions for things you research or invent, you can put them to practical use. This is very important for academia, to not just publish papers but also have an impact on industry and on society at large. 

Toke Høiland-Jørgensen: I agree. At the same time, having moved over to the open source development side, I’ve also run into the perception that academics just write and publish papers and don’t get their code into shape or upstream it. Have you seen this?

Anna Brunström: You see both sides of it, right? This depends a lot on personal interest or the interest of a group, because going from a research prototype to something upstreamed and integrated in the Linux kernel is a lot of extra work. Depending on when the PhD thesis needs to be done, or on the interests of the PhD student, this can vary a lot. But a lot of researchers do upstream results; there are research groups that contribute a lot and carry implementations all the way upstream. It’s also very dependent on interest in the solution from the open source side, though.

Toke Høiland-Jørgensen: We’ve just had a now-infamous incident with graduate students from the University of Minnesota who intentionally introduced faulty code into the Linux kernel as part of their research projects. Is that a risk of this kind of interaction?

Anna Brunström: That was an unusual incident—that’s not how the interaction normally works. But understanding the upstream process is definitely a hurdle to get over. This is why contact between academia and industry is very important, to support that process. My perception is that the open source community is quite willing to help. If they see a useful solution from academia, they are willing to offer their advice on how to get it integrated. One of the issues is finding the right person to connect to or the right channels for getting this interaction.

Toke Høiland-Jørgensen: What struck me as a PhD student was how much effort goes into finding someone who can give you feedback. You’re working with something so specialized that there are very few people who can give you qualified feedback. The open source world is much the same. You can submit your proposal to experts, and they will give you minutely detailed feedback. It can be an intimidating experience, but it’s also liberating and incredible to get this level of expert interaction.

Anna Brunström: That is a good point. A lot of the research process is about finding people who have competence and interest in the same topics. When you submit something to an open source project, you have these experts, but the experts are also quite busy. So it also depends on whether what you’re proposing seems like an interesting contribution at that point in time. Maybe it is a good idea, but it’s not useful right now, or it needs to be reshaped quite a bit. With all these things there’s a timing issue: when ideas are proposed, and how they fit into what the rest of the open source community is working on and prioritizing at the time.

Also, a lot of the functionality available in the Linux kernel is not heavily used. For instance, you have a lot of different congestion control algorithms. If you look at this from a researcher’s perspective, it can seem like a random selection. Why do we have this particular set of algorithms available? It has to do with the interests of the people who worked on those algorithms, the timing of when the algorithms were proposed, and what the interests of the rest of the community were. Many other algorithms have been implemented, but never in the mainline. Ideas take a long time to mature, and ideas build on each other. Not everything has to be integrated in the mainline kernel.
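To make that concrete: the kernel exposes its pluggable congestion control algorithms through a standard socket option, so a researcher’s test harness can switch algorithms per connection. Here is a minimal sketch in C, assuming a Linux system where the requested algorithm (bbr in this example) is built and listed in /proc/sys/net/ipv4/tcp_available_congestion_control:

/* Minimal sketch: selecting one of the kernel's pluggable TCP
 * congestion control algorithms on a per-socket basis. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* TCP_CONGESTION switches this socket to the named algorithm;
     * it fails if the corresponding module is not available. */
    const char *algo = "bbr";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
        perror("setsockopt(TCP_CONGESTION)");

    /* Read back which algorithm the socket actually ended up with. */
    char buf[32] = { 0 };
    socklen_t len = sizeof(buf);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
        printf("congestion control: %s\n", buf);

    close(fd);
    return 0;
}

Algorithms that never made it into the mainline can still be built as out-of-tree kernel modules and selected the same way, which is part of why not every congestion control idea needs to be upstreamed to be usable in experiments.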

The trend for reproducibility and open data is important in this context as well. Maybe you have several iterations on the research side, each building on the last, and then just a few of them get upstreamed. That whole chain of work contributes to the particular algorithms or solutions that actually get upstreamed.

If all research ideas went up into the mainline kernel, that would be an indication that there is not enough exploration and innovation on the academic side. Not everything should be ready for production or working as well as you expect, because then it’s not research. 

Toke Høiland-Jørgensen: Open source is in itself really liberating for a PhD student. You can poke at the internals, learn how stuff works, and improve upon it. Even if that doesn’t end up being part of a production system, it’s still an important property of open source software.

Anna Brunström: Absolutely. We’ve had both Linux and FreeBSD available as open source for a long time, which has had a huge impact on networking research. You can also see that the bar is quite high. If you want to publish research in this area in top venues, you have to have an implementation in a real system, in some form.

The trend now towards more reproducible research and making the artifacts produced available as open source and open data is also extremely positive. 

Toke Høiland-Jørgensen: I remember with some of my first papers, I had to struggle to get permission or funding to publish them as open access. Between then and the last paper I published, this changed a lot.

Anna Brunström: Definitely. It’s also changed a lot on the funding side. Now if you get funding from the European Union or from national funding agencies in Sweden, for instance, the funders will require that you make your results available, and they push for open access. There’s also a big change in program committees for conferences in the networking area. It’s clearly valued if a paper makes its code or data available, and this is now commonly considered in the review process when evaluating papers. It’s definitely a big difference over the last few years, and a very good development.

Toke Høiland-Jørgensen: Let’s segue into the research collaboration now set up between Karlstad University and Red Hat. Why did Karlstad University decide to engage in this formalized research cooperation with Red Hat, and what’s the most interesting part of the project?

Anna Brunström: First of all, it’s an interesting topic. Programmable networking is a big trend now. It creates a lot of possibilities and flexibility, but it’s not an easy environment to develop in. One of the most interesting parts of this type of project is the exchange of knowledge between these two sides. We can bring in the academic perspective and what’s happening on the research side with new ideas, and we can combine it with the strong knowledge of production systems and production networks and software that you have on the industry side. One of the most interesting benefits of the project is the possibility for our PhD students to get detailed feedback on the code level. 

Toke Høiland-Jørgensen: What is programmable networking, and why does it interest you?

Anna Brunström: Programmable networking means that you can manage and control the network through an open API. So you can modify how the network behaves by reprogramming it, in a much more flexible way than you could before. One important factor is that the control has shifted away from the vendors of networking gear. Before, if you wanted some new functionality, it was very hard to achieve, and it was a very long-term process. But when these devices become programmable, the network owner and network operator can develop software that can then affect how the network operates. This shift of control also opens up the network for innovation and exploring new functionality.

Toke Høiland-Jørgensen: I always viewed programmable networking as the logical evolution of software-defined networking (SDN). Would you agree?

Anna Brunström: Yes. I would say that SDN is programmable networking. If we go back in history, we have active networking, which was a research area in the late 1990s. I think that was the first trend where researchers tried to dynamically update or program the functionality of the network. And then we had the early research on separating the control plane and the user plane, which is one of the foundations of SDN. 

Then came OpenFlow, which is the most well-known protocol in SDN. I remember the SIGCOMM conference, I think it was in 2009 in Barcelona. They had all these demos of OpenFlow and the things you could do with it; the demos took over almost an entire room. OpenFlow came out of Stanford, so they had a lot of demos at the conference, and we were all thinking this was a pretty cool thing. Then there was the idea of separating the user plane and the control plane, and being able to update what happens in the data plane through this centralized control. That is very much programmable networking, I would say. The next step is that the data plane itself has become more programmable.

So with OpenFlow you could program the network, but the functionality you could put in was still fairly restricted. You had a limited set of fields you could match packets on and a limited set of actions you could apply. With programmable data planes, you can implement your own matching rules and program the match-action tables yourself. You have an increasing level of control, but you still have the separation between the control plane and the data plane, and you update the network through the control plane.
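For readers who want to experiment with this kind of match-action logic in software, the Linux kernel’s XDP hook is one accessible entry point: you attach a small eBPF program to a network interface, and it runs on every incoming packet. A minimal sketch in C follows; the rule itself, matching UDP over IPv4 and dropping it, is just an arbitrary example:

/* Minimal XDP sketch of match-action packet processing:
 * match IPv4 packets carrying UDP, apply the action "drop";
 * pass everything else up the stack unchanged. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int drop_udp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds-check each header before reading it; the kernel's
     * verifier rejects the program otherwise. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    if (ip->protocol == IPPROTO_UDP)
        return XDP_DROP;   /* the "action" for matched packets */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Compiled with clang -O2 -target bpf and attached with ip link, a program like this changes the forwarding behavior of a live interface without touching kernel source or vendor firmware, which is exactly the shift of control described above.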

With the networks opening up again, we can see that you have a closer interaction between the code development and the standardization.

Toke Høiland-Jørgensen: One interesting aspect of this is the convergence between the software world and the traditional networking world, which has been based on hardware vendors for a long time. You had these big iron boxes that you forklifted into your datacenter, and they would process billions and billions of packets per second in a single refrigerator-sized device, versus software based on off-the-shelf hardware that you can just buy in bulk and scale. 

Coming from the hardware side, there has been this idea that a networking protocol or packet processing is static. But programmable networking is really about bringing the malleability of software, the ability to change everything, into these big iron boxes as well, right?

Anna Brunström: Yes. And this was also the driver: it was very hard to manage networks. So it’s the same trend, and things keep getting more and more flexible. Along with this you have the trend towards virtualization, which fits very well with programmable networking. You can use programmable networking to implement virtualization, but it’s also a concept of its own.

There is also the trend towards open hardware and disaggregation of hardware, whereas before you had a lot of functionality put together in one box. Breaking the box down into smaller pieces opens those pieces up for innovation and evolution, and gives other players access. If everything has to be bundled into one big box, with all the hardware for all this functionality and all the software and logic for it as well, then contributing to that market is very different from what we have now, when you can run on disaggregated, general-purpose hardware and use open source to manage all of these components.

Toke Høiland-Jørgensen: And it’s the same code that will run on the datacenter and in the small devices you have at home or in your pocket. You can contribute to all that, even though you don’t have a datacenter yourself. You get the benefit from all of it everywhere because of this ability of software to transcend barriers.

Anna Brunström: True. You also have open interfaces and open source software that you can adapt and modify. You open up that possibility to a large number of people when you have open source software to start from and an open API that is available to everyone. If that software is instead produced and sealed inside a box, no one can affect it; if you want something changed, you have to hope the vendor will support it. It’s also good from a hardware perspective, because you can optimize the hardware to support the key primitives for programmability and flexible functions, rather than trying to select which protocols should be supported in these boxes.

Toke Høiland-Jørgensen: That was one of the surprising results to me. Some of the prototypes of these completely programmable data planes had better performance than the specialized hardware. It turns out that if you come up with the right primitives, you can optimize the hardware to execute those with performance that more than matches the specialized things you had before.

Anna Brunström: Yes. If you have to implement everything in hardware, you have to put a lot of work into supporting all these various things. Whereas if you can focus on supporting a good set of base primitives, then you also have the possibility to do that better.

Toke Høiland-Jørgensen: It’s also cyclic to a certain extent, right? It cycles back and forth between general purpose processing and specialized hardware offloads. 

Anna Brunström: With programmable networking, they get much more closely integrated, or the boundary between them is more flexible. There is a lot of research now that experiments with what functions you put in hardware, what functions you put in software. You have this continuum of different hardware and software solutions, and you can move the functionality in a flexible way between these different components. This also depends on your problem and how much that requires in terms of pushing packets or in terms of memory, for example. 

Toke Høiland-Jørgensen: We are also moving from looking at specific devices connected to the network as programmable entities whose behavior you can change, to looking at the whole network as one big programmable entity, with SDN as the centralized view of the control plane. Everything becomes programmable. Where does this leave protocols and standardization bodies like the Internet Engineering Task Force (IETF)?

Anna Brunström: You still need the protocols, right? You can implement a protocol in different places; you may implement part of it in a programmable switch to speed things up, and you get feedback, or you distribute the protocol across different entities. For some things you still need interoperability between different components, so I wouldn’t say that the need for protocols disappears because of this. You still need to be able to interact. Protocols and standards are also important in this domain because the interfaces between different entities in the network are themselves open, and a lot of new protocols and interfaces connected to this need to be standardized as well.

Toke Høiland-Jørgensen: Part of the standardization process is needed and part of it is not, right? There’s a whole lot of valuable review, with people looking at things from different perspectives, but part of it is just an old, slow process that needs to change.

Anna Brunström: Yes. But the processes can also interact in some sense. If you go back to the beginning of IETF standardization, it was very much based on running code. That changed over time, and for a while standards were less grounded in running code. But with things becoming more open source based, we’re moving back. In the ideal world these two things interact, right? Think about the QUIC protocol, for instance, which was standardized as a new transport protocol. It was developed hand in hand with the code that implements it, and there were somewhere between ten and fifteen independent implementations being developed together with the standard, many of them open source.

Toke Høiland-Jørgensen: Yes. I think the standardization of the QUIC protocol is the best example of where this evolution is working, and where the standardization process interacts with the open source community in a good way. They interact with each other, and hopefully that will improve both.

Anna Brunström: When networking started, it was based more on open source; it came more from the academic community. Then for a period it was dominated by industry, closed source, and the vendors that controlled the network equipment. With the networks opening up again, we can see a closer interaction between code development and standardization.

QUIC is a very good example. And the way the IETF works, at least in the transport area, is moving more towards open source-style development. Think about how documents are developed now. We are using GitHub, and you develop the standards the same way you develop open source code. Lots of different people can contribute to different pieces of the standard, based on their experience. The process resembles open source development much more than it did ten years ago.

Toke Høiland-Jørgensen: Other than what we’ve already discussed, what would be some of the interesting trends in your field?

Anna Brunström: One thing we have not talked about, but that ties very closely into programmable networking, is how it is now moving into other domains, including the cellular network. 5G and beyond-5G networks will be very important for the future, with all the connected devices and the communication going over these types of technologies. This trend of programmable networking and disaggregation is moving into the cellular world.

The cellular world is also working with disaggregation and open interfaces, in the O-RAN initiative, for instance. That’s quite interesting from a research perspective, because it means you can contribute in that domain as a researcher. Another thing that ties into what we have talked about is the entry of AI into networking. You have the idea of closed-loop control: if the network is programmable end to end and top to bottom, and you can get a lot of information out of it, how do you then automate the management of the network and optimize it?

The third thing is the general development of digitalization and how networking is now a component of almost any engineering field or any solution in society. There are a lot of interesting opportunities to collaborate with researchers from other fields on different applications. We have always worked a lot with people from networking, or with companies related to networking. But now we also see collaborations with companies from other domains that see networking, the data it produces, and data management as important tools in their development.

Toke Høiland-Jørgensen: One of the fascinating things about working with networking and the internet has been the sense of the whole world being within reach. On the one hand it’s really scary, because suddenly everything is exposed to the internet. On the other hand, there’s also this convergence of all of human society through this giant web of communication. 

Anna Brunström: Yes. If you’re a student in networking or computer science today, you can use your technical interest and your technical knowledge to work in almost any area and make a big impact on how things develop in society. So it’s certainly an exciting time for researchers in our domain, and we’re starting to see this in student recruitment. 

Toke Høiland-Jørgensen: Do you also see this reflected in a larger diversity of backgrounds in the student body and in research in general?

Anna Brunström: That is still a challenge. I’ve seen some studies of younger age groups suggesting that attitudes in secondary school are changing with regard to which fields kids can imagine working in. More girls, for instance, are showing interest in the field. I couldn’t say that we see it yet in the students we have in our courses today. But I certainly think there is potential for change, because more and more people realize what impact you can have through networking and computer science knowledge, and how many different things you can achieve with a career in this area.

I certainly have good hope. I also see that the IT industry is very aware of the need to broaden the talent pool. We have good collaborations between our university and industrial clusters in the region to try to attract a broader group of talent to the field.

Toke Høiland-Jørgensen: Opening up the field to a diversity of opinions is important because technology runs everything now, but it’s also important for the quality of the technology. There are perspectives and use cases that are just not considered by the homogeneous population in technical fields. But I’m cautiously optimistic that this is finally moving in the right direction.

Anna Brunström: This has been known for a long time, right? But progress has been slow. With some of the trends that we see today, I’m hopeful that it will speed up. And there is greater understanding of the importance of these technologies. A lot of fields that have not thought much about networking and data management in the past are now thinking about these areas to reform their businesses and production environments. There are an endless number of applications for these technologies.

Toke Høiland-Jørgensen: Let’s hope you are right that it will speed up. And I certainly agree about the endless number of applications! This has been a fascinating conversation, thank you for taking the time to chat with me.

Anna Brunström: You’re welcome, it was my pleasure!
