Red Hat Research Quarterly

“It’s the wild frontier”: security, agentic AI, and open source



The name Luke Hinds is well known in the open source security community. During his time as Distinguished Engineer and Security Engineering Lead for the Office of the CTO at Red Hat, he acted as a security advisor to multiple open source organizations, worked with MIT Lincoln Laboratory to build Keylime, and created Sigstore, a wildly successful open source project for improving software supply chain security that quickly became the standard for signing software components. You don’t have to spend long with Luke to realize he has a restless mind and a strong commitment to open source development and communities. Since Sigstore, he’s channeled that passion into co-founding the open source startup Stacklok, where he remained as CTO until May 2025, and launching AgentUp, an open source framework to help developers build interoperable AI agents quickly, flexibly, and securely. As he’ll explain below, you might think of it as Docker for agentic AI.

RHRQ asked Ryan Cook, the platform and Enterprise AI lead in the Red Hat Emerging Technologies group, to lead a wide-ranging conversation with Luke. Together they discuss the urgent need to develop security in AI, the importance of model provenance and transparency, the essential role of the open source community, and adapting authorization protocols for AI agents. —Shaun Strohmer, Ed.

Ryan Cook: Let’s be honest: security is often the last thing developers think about. Now add AI—it is a whole different world in terms of speed, and it seems we’re just now catching up in the security space. What are you seeing?

Luke Hinds: Security is barely there at the moment. Obviously some people are more advanced than others, but it’s very nascent. There are these large frontier models coming out of Anthropic, OpenAI, and Google. Some models claim they’re open, but they’re not; the datasets are not open. These are black boxes: there’s no way of measuring these things. There’s no way of testing them because they’re never deterministic. Input comes in and a variant output will always come out, so there’s no way of understanding the true nature of these things.

That gets dangerous from a security angle. There are prompt injection attacks, but people can also weaponize these models such that the weights and biases in the neural network are heavily influenced to act a certain way. It could behave in a different way from one group of people to another, or one language to another. That’s one scary part of security in AI. For people who’ve been working in security for years, everything’s been deterministic. If there’s a bug, you have an IDE with a breakpoint, and you can inspect the stack trace and the variables, and if you run it again, everything will be the same every time. With AI, it’s never like that. It has that propensity to act differently every single time.

This is also where people are having problems with agents, which I’ve been looking at for quite a while. They never operate in the same way every time. There’ll always be some slight variance—sometimes it might be very wide variance and a hallucination. That’s the big thing on the model security side, which is where model provenance comes in. That’s where Sigstore’s got a second wind recently, and Red Hat has been working a lot on it as well. You want to be able to look at a model and know this dataset is what built this model—the model’s DNA, or genome structure. Then you can see there’s obviously a bit of advertising thrown in, or there’s a little bit of something leaning to the right or the left, or there’s plain misinformation. Model provenance is really important—and this is getting into Red Hat territory. Open models are massively important, not just for transparency and the community working together, but for safety. Again, due to this probabilistic nature, you can’t rely on them to perform or act a certain way, especially when they’re calling tools and making their own decisions.
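The idea of a model’s “DNA”—binding a model artifact to the exact dataset that produced it—can be sketched with nothing more than content digests. The following is a minimal illustration, not the actual Sigstore workflow (which adds keyless signing and a transparency log on top); the artifact bytes and builder name are hypothetical:

```python
import hashlib
import json

def digest(data: bytes) -> str:
    """Content-address a blob, as signing tools do before signing."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Hypothetical artifacts: trained weights and a dataset manifest.
model_weights = b"...model tensor bytes..."
dataset_manifest = b"source: example-corpus\nfilter: dedup\n"

# A provenance record links the model digest to its inputs, so anyone
# holding the weights can check they match the claimed lineage.
provenance = {
    "model": digest(model_weights),
    "dataset": digest(dataset_manifest),
    "builder": "example-training-pipeline",  # hypothetical name
}
record = json.dumps(provenance, sort_keys=True)

# Verification: recompute the digest of the weights you received and
# compare it against the (ideally signed) record.
claimed = json.loads(record)["model"]
assert claimed == digest(model_weights)
print("model matches its provenance record")
```

In the real system, the record itself would be signed and logged so the link between model and dataset cannot be silently rewritten.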

Open models are massively important, not just for transparency and the community working together, but for safety.

Ryan Cook: There’s a lot of room for it to go astray. Everybody has been talking about the importance of humans in the loop, but it also goes back to open models and having the insight into what actually generated it to ensure you don’t have a biased model. I know there are a couple of bigger communities that are fully, absolutely transparent with their models, and I hope we get to a point where those are much more public and used much more. The space is well on its way to getting better, but there’s still a lot of room to grow.

AgentUp

Ryan Cook: Speaking of room to grow: AgentUp. What sort of opportunities have you seen in that space? What were your goals when you started coming up with that project?

Luke Hinds: I was building AI agents out of curiosity: writing from the ground up, thinking about what an agent would look like. I spoke to other people about their ideas of what a good agent looks like, and the consensus was that there was little guidance around how to do things the right way, even for fundamentals that are not necessarily AI-centric: security, rate-limiting, persistence, distributed scale, performance. They’ve been using some of the existing frameworks, and I heard this repeated pattern of frustration with the lack of a clean set of interfaces to work with. It’s a nascent space—the wild frontier.

So I had the idea to build something where people could quickly bootstrap an agent with good old essentials in there. I started to play around, and I realized that an interesting direction would be to make something that’s portable. I’ve always looked at Docker as a brilliant example of something reproducible and portable, and you can bring it up to a good standard quickly. You have this contract you can pass around—a Dockerfile—and it can go into a GitHub repo. Then people can pull it and run it and have exactly the same environment as everybody else. I thought that would be a nice thing to have for agents. My attempt with AgentUp was to build that and have that config-driven type of experience.


In his off hours, Luke is a long-distance runner. “When I run, my brain goes into the default mode network,” Luke says. “I have a lot of ideas when I’m running—I’ll pull over and rant into my dictaphone.”

With AgentUp, there’s a framework, but you don’t build on top of an SDK framework like you would normally—you use entry points. You can write as much customized code as you like, but it’s all managed as dependencies, which was something else I came across by mistake because I couldn’t get something to work. It’s at an interesting place now, figuring out where to take it next, because it does have that portable, reproducible, pinnable structure, which is really resonating with folks, because then you can have something that runs the same on a developer’s laptop as it does in production. 
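The Dockerfile analogy suggests what such a shareable contract might look like. Purely as an illustration—the field names below are invented for this sketch, not AgentUp’s actual schema—a config-driven agent definition could pin its model, tools, and operational limits in one file that runs the same everywhere:

```yaml
# Hypothetical agent manifest -- illustrative only, not AgentUp's schema
name: changelog-writer
model:
  provider: openai          # which backend serves the model
  id: gpt-4o
  pin: "2024-08-06"         # pinned version for reproducibility
tools:
  - github_search
  - file_read
limits:
  rate: 60/minute           # rate-limiting as a first-class setting
  max_steps: 20             # cap agent iterations to bound cost
```

The point of the sketch is the contract: check this file into a repo, and a teammate or a production cluster bootstraps the same agent you ran on your laptop.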

So right now, AgentUp is at a similar stage to where Sigstore was for a while. I’d be talking to people about Sigstore and you could tell they were looking at the other monitor, pretending to follow along while they’re thinking, “I don’t know what this guy’s talking about.” Although I think AgentUp’s a bit further along, because I’m hearing from folks like you who are clicking with it, saying “I see where this is going.”

Ryan Cook: I definitely feel like you won my heart from a sysadmin point of view. You’ve been in the game long enough and you remember the days when you’d say, “Hey, it worked on my laptop, but it doesn’t work on the server or it doesn’t work on my friend’s laptop.”

Luke Hinds: Absolutely. It was a mess trying to have all these different versions on your machine. That was a great thing with containers: they solved a real pain. That’s another thing I’ve learned about successful open source projects. I will ask myself, “Luke, is this a painkiller or a supplement?” Supplements are nice. They’re a good idea. You take a multivitamin and it’s very easy. But if you leave home and you forget to take your vitamin D, you don’t turn the car around and race back home. You think, “I can just take it tomorrow.” 

Your project’s never going to get anywhere. Your startup’s going to die.

Whereas if you’ve got a migraine or some real pain, you need a painkiller. Everything else is on hold until you get into Walmart or you turn the car around. Even though you’ve gone 15 minutes down the road, you go back and get it because you’ve got a pain to solve. I always try to find projects that lean towards solving a pain, but they’re still easy. You throw them in your mouth and you drink. If you have a project that’s a supplement, and it takes about a week of trying to read horrible, obscure documentation to get it to work, your project’s never going to get anywhere. Your startup’s going to die.

“Anything is possible”

Ryan Cook: How did you decide you want to work with computers? For example, I took apart the keyboard of my computer and put it back together, and a week later, my mom said, “You’re going to start early college classes on computers.” That was it. 

Luke Hinds: I was pretty similar. There was a computer game at the time called Elite, with all kinds of vector graphics, and you’d go around these universes and you’d mine rocks and alloy. But it was only available on the BBC Micro—the BBC had their own computers that were in all the schools. I really wanted to play, but my single-parent mom couldn’t afford a computer. To get one I had to join the school computer club, but first I had to prove my worth. I wrote my first program on paper! I tried to write an adventure game, and it got very out of hand quickly—my wrist hurt from writing so much. I showed it to the math teacher, and he was impressed I’d even tried, so even though it was a mess he let me in. I owe that math teacher quite a lot really, because that meant I got my hands on a computer for the first time. From there it just got out of control, really.

Ryan Cook: Where did you want to go with it? 

Luke Hinds: I didn’t really know. I started off in hardware, initially doing repairs on computer boards—a lot of poking around with an oscilloscope and soldering chips on and off. Then a friend of mine joined a company as a software engineer and then I went there as a support engineer. I was constantly like, “I want to do what they’re doing, writing the code. That looks more interesting.” I think if you have a passion about this you’re going to find a way. Coding was the thing that really made me home in on a particular area, because I love the creativity of it. It’s just amazing: you can write stuff and get a computer to do something.

Ryan Cook: I completely agree. When I started actually writing code it was like, “Oh my gosh anything is possible.” So now that we’re seasoned engineers, what do we tell the next generation of folks, especially about getting into open source or AI?

Coding was the thing that really made me home in on a particular area, because I love the creativity of it. It’s just amazing: you can write stuff and get a computer to do something. 

Luke Hinds: About open source, I would say, first, it’s very good for your career, because you get public exposure of what you’re doing. If I’m in a position to be hiring engineers, I’ll want to find their GitHub and see what they’ve done. I’m not expecting perfect code, but if I see somebody trying stuff, that’s a really good signal. Second, you can mingle with very senior folks within a community. A lot of communities are very accepting of first-time contributors. I love it when somebody new turns up. One of the things I always do in my projects is mentoring, even if that’s helping someone do their first PR or figure out how to use Git. They can watch the project developing, see how problems are addressed in open discussions, and then see the code to address them. 

I do fear for the younger folks, because they are coming into a world where coding assistants and AI tools let you knock out a project in 20 or 30 minutes. The conditions are not there to force people to learn. About 10 years ago I tried to write an OS in Rust. It sounds impressive but it’s really not—it didn’t go very far. But I remember spending four days just banging my head against the wall trying to figure things out. It forced me to learn something I wouldn’t have learned about otherwise. If necessity is the mother of invention, but people have this instant knowledge available with no need to work at it, where will the inventions come from? 

Ryan Cook: One of my big concerns is whether it’s working in a best practice way, security-wise or otherwise. What do you think is the best way to utilize those vibe-coding situations while keeping best practices in mind? And what is the best way for someone younger in their career to utilize those services and still learn something?

Another thing I’ve noticed being in the agent space is an absolute tidal wave of vibe-coded projects. You can tell straight away because the README is full of emojis and rockets. You’ve got these projects where the first commit was six hours ago and there’s a README talking about it being enterprise grade, some of them even quoting SLAs, and it’s just an LLM spitting this stuff out. That also makes it a challenge to get your own work noticed. For somebody younger who’s writing something genuinely useful, being able to rise above the slosh out there can be tough.

Luke Hinds: I would say turn off auto-accept, read what it produces, and ask it to explain what it’s doing. It might reveal that it doesn’t quite understand. Ask, why did you do that—why did you choose that dependency? Why did you make that design choice? I use Claude and similar tools a lot because they are absolutely brilliant for prototypes, and another thing I’ve noticed is you’ll get some pretty good code and it appears to work really well. But when the system gets above a certain level of complexity, you get underneath and you realize, this is so brittle. It’s like a toy train going around the track, but half the track is missing. The train goes off the track, along the baseboard of the wall, then comes back on the rails. It looks like it’s working, but it’s not. 

I use Claude and similar tools a lot because they are absolutely brilliant for prototypes.

Ryan Cook: You bring up an excellent point. Tying that back to open source and GitHub: somebody early in their career could use vibe coding, question everything, and learn their way up, and also have that public GitHub repository and demonstrate they know how to use Git and write code and understand it. You can build out your knowledge set portfolio with these two things together and come into an interview and just absolutely rock it. 

Luke Hinds: Absolutely. Otherwise, you can build some impressive projects with AI, but when you sit down with a group of people and they start asking about event-based systems, distributed systems, and what did you use for your queueing system, it’s going to become very clear you don’t know what you’re talking about. And don’t feel you’re going to be replaced. I don’t believe in this “AI is going to replace all software engineers” noise. There are things agents are great at. They love open-ended stuff where they get to choose the goal. But if you want them to deliver the same goal every time, they can’t.

Ryan Cook: I like to say that with some agents and some of these LLMs, I’ve never been more correct in my life than when using them, and I’ve never been more incorrectly correct.

The power of community

Ryan Cook: Moving forward in your career, you built one of the biggest supply chain security projects in existence—you created an entire ecosystem. How did that come about?

Luke Hinds: So, I’d been thinking about software supply chain security for some time, being in security, and that term “software supply chain security” was starting to bubble up. I’d come across transparency logs, which are used for cryptographic guarantees around who created a certificate for whom. (CoreOS co-founder) Brandon Phillips was digging around this area as well, and I remember talking to him and thinking, well, I’m just going to try to write something. I’m going to build a prototype: I’m going to get this transparency log and try to start putting signatures of artifacts in there.

I think for Sigstore it was a case of “right project, right time.” I’ve developed lots of right-project-wrong-time ideas and lots of wrong-project-right-time ones, so I was quite lucky in this case. There were a lot of people looking to solve this problem, and they just converged on what I was doing. Something similar happened with Linus (Torvalds, creator of Linux) originally. I’m not comparing myself to Linus at all, but it was similar. He popped up on Usenet saying, “Hey, I got this thing, not really quite sure where to go with it.” And then other people were like, “Well, I would like to work on it.” It was very much the same with Sigstore. What I maybe brought to it was the vision of where it could go, because it could have just been people working on something without having a long-term trajectory.

I used to call it the Let’s Encrypt for software signing: it would be a public-good, vendor-neutral service so everybody could sign things: a 12-year-old kid in his bedroom or the corporation with billions of dollars. I realized if it was going to be successful, first, it had to be a public-good, neutral service and second, it had to be very simple to use. One thing about security: trying to get developers to adopt security is like trying to get a toddler to eat their greens. They may agree that rationally it makes sense, but they just don’t want to. They’ve got all these APIs and AI saying, “We’ll make you faster and better,” so it’s hard to get the security story to land unless they really don’t have to do anything.

Trying to get developers to adopt security is like trying to get a toddler to eat their greens.

Ryan Cook: You brought up getting the community behind Sigstore. Even with making this almost a free service, how do you feel the community and making those things available in the open changed the project and made it easily adoptable? There was probably a possibility for us at Red Hat to just put the lock on the door—even though Red Hat doesn’t do that—so did it make a difference to start open? 

Luke Hinds: Oh, it massively made a difference. If Google had not got involved and others, it likely wouldn’t have gone anywhere near as far as it has, and Google is still heavily involved. I remember going to (Red Hat CTO) Chris Wright and saying, “Hey, I want to put this in the Linux Foundation. I have this idea of a public-good service,” and he could see the picture. It would not have been a success if we hadn’t taken it to the community. 

Ryan Cook: I completely agree. Even with the newer projects, I appreciate the community you helped build. For example, Nina Bongartz, a Red Hat developer on the Trusted Artifact Signer Team, built an AI model verification operator with Sigstore. The fact that there is one community looking out in so many different places is a testament to what you helped produce.

The next big catalyst

Ryan Cook: You see so many startup projects in this space, and sometimes they put a strange license in place allowing you to partially use the product but you can’t go enterprise with it. With Stacklok and even on your newest project AgentUp, why did you decide to start as open source? You could have locked those down like some other projects out there and tried to go for gold. Instead, you’re doing things to benefit the greater community, developers, and people getting started. How did you make that decision?

Luke Hinds: Good question. First, I’m doing what I’ve always done and what I know. But I’ve always found that open source comes with its benefits. You have a big audience to validate and test against, and you never come across as spamming people to get them to use your new product. You get that early diversity in as well: other people can tell you what they think, and if it’s good then generally other people start to contribute and start to use it more. It’s a really good litmus test for a new project. If it’s not good, it’s not going to grow. I’d like to say it was only for the good of humanity, but it is a little bit of a selfish move as well, because it’s a great model. 

Ryan Cook: There’s so much joy when I get a first contribution outside of my team or from another company. I have a small celebration at my desk and I’m dancing around the room, because it gives you that validation that you’re on the right path.

To wrap things up: what do you see on the horizon for security? As we’ve been saying, security is something many of us think about after the fact. What do you think is the next big catalyst in security for AI?

So far we’re retrofitting old tools and old protocols where we should take the opportunity to really rethink things. 

Luke Hinds: An interesting one is agent identity. We have a lot of systems we’re trying to retrofit for AI and agents, and they are creaking a bit with our current authorization approaches and protocols. There is going to be a world where an agent will need to delegate a task to another agent, with no human in the middle, and so many of our authentication systems are human-centric. They’re based on a human identity. 

But what about when it’s not really Luke Hinds with the cute little avatar that’s writing the code; it’s Qwen or Claude. Agent identity is a big area people are starting to approach, but so far we’re retrofitting old tools and protocols where we should probably take the opportunity to really rethink things. That’s one area I’ve been playing around in.
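One way to make agent-to-agent delegation verifiable without a human in the middle is a scoped, signed hand-off: the delegating agent mints a short-lived token that names the delegate and narrows what it may do. Below is a toy sketch using an HMAC-signed token—a simplified stand-in for real mechanisms like OAuth token exchange, with invented agent names and scopes:

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-issuer-key"  # hypothetical key shared with the verifier

def mint_delegation(issuer: str, delegate: str, scope: list, ttl: int = 300) -> str:
    """Agent `issuer` grants `delegate` a narrowed scope for `ttl` seconds."""
    claims = {
        "iss": issuer,
        "sub": delegate,
        "scope": scope,
        "exp": int(time.time()) + ttl,
    }
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token: str, required_scope: str) -> dict:
    """Check signature, expiry, and that the requested action is in scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        raise PermissionError("expired")
    if required_scope not in claims["scope"]:
        raise PermissionError("out of scope")
    return claims

# A "planner" agent delegates only read access to a "researcher" agent.
token = mint_delegation("planner-agent", "researcher-agent", ["repo:read"])
claims = verify(token, "repo:read")
print(claims["sub"], "may act with scope", claims["scope"])
```

The property that matters is the narrowing: the delegate can never present broader authority than it was granted, which is exactly what human-centric session credentials struggle to express when no human is present to approve each hop.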

Ryan Cook: I completely agree with you. We can be retrofitting what we know and trust, but doing it in an efficient manner where we’re not trying to square the circle.

Luke Hinds: Coincidentally, on Sigstore we recently collaborated with someone at Red Hat on an A2A (agent-to-agent) store project, which was around agent identity—or agent provenance, really, so when an agent presents itself you know the code it was built from. Sigstore is quite good at retrofitting in that setting.

Ryan Cook: That is fantastic. With Sigstore, there’s a level of trust people already have. You know if something comes from that organization, people approached it with the right thought process, the right review process, to make sure nobody introduces any gaping hole. Having that open community is almost a cheat code for an organization to adopt some safety mechanisms— you don’t have to come up with them on your own because there’s an entire field of experts out in the open helping to do that.

Luke Hinds: Absolutely. And there’s some way smarter people than me active on the project now.

Ryan Cook: That’s it for my questions. Thank you for taking the time—this was really fun.

Luke Hinds: Thank you, Ryan. I had fun as well.
