Red Hat Research Quarterly

When good models go bad: Minimizing dataset bias in AI

Sanjay Arora is a data scientist at Red Hat and a member of the Greater Boston Research Interest Group with particular interests in AI and machine learning. For RHRQ he interviewed Kate Saenko, a faculty member at Boston University and consulting professor for the MIT-IBM Watson AI Lab, about managing bias in machine learning datasets and the problems that remain unsolved.


Sanjay Arora: At Red Hat Research Day you talked about your research into bias in AI, bias in datasets, and how models can go wrong, especially in computer vision. Could you summarize your thesis or the core idea behind that thrust of your research?

Kate Saenko: Modern AI techniques use machine learning, which means that you can’t develop an algorithm without a set of data examples to learn from. Every single AI algorithm these days has been trained on a dataset composed of examples of the kind of inputs that algorithm will receive. For example, if the algorithm is supposed to detect pedestrians, the dataset would contain images as inputs together with the correct outputs the algorithm should predict. In this case, if there is a pedestrian in a region of the image, then it should predict one, meaning “I detected the pedestrian,” or zero, meaning “I didn’t.”

“Your algorithm will inevitably be biased to a certain visual appearance of pedestrians that the dataset presented for training.”

But because every single algorithm has been trained on a fixed-size, finite dataset—it could be 10,000 images of pedestrians or 5,000 images of pedestrians—they’re always going to have some sort of bias. This is because there are so many axes of variation in the world, and in particular in the visual world. There could be different lighting, there could be different seasons, different clothing that people are wearing, or even different numbers of people on the street. It could be very crowded or there could be very few people very far apart. Your algorithm will inevitably be biased to a certain visual appearance of pedestrians that the dataset presented for training.

Sanjay Arora: In terms of bias, most people think of bias as predisposition, the way we use it in English in general. But in terms of this research, would you say that at least one technical definition of bias would be a distributional mismatch between your training or source distribution, and your inference or target distribution?

Kate Saenko: So bias is a very broad term. And in fact, it’s not just datasets that can lead to bias in AI algorithms. AI algorithms themselves could be biased in some way that does not depend on the data. They could even take a dataset that is relatively, shall we say, fair, but then the algorithm itself could amplify the bias.

As far as technical definitions, what we’re talking about here is actually dataset bias. And there are lots and lots of different technical definitions even in my field of research. Domain adaptation is the technique that we’re using to overcome dataset bias. There’s more being proposed all the time because we are always sort of trying to solve a smaller problem, because the global problem is very, very hard to solve.

“There’s more being proposed all the time because we are always sort of trying to solve a smaller problem, because the global problem is very, very hard to solve.”

So we’re always trying to carve out a little piece and say, “Okay, we’re going to define this problem this way, with these assumptions, and try to solve it.”

Sanjay Arora: When you say algorithmic bias, do you mean things like an inductive bias, like regularization that could induce a bias in predictions? 

Kate Saenko: No, what I mean is even more broad and general: an algorithm that makes decisions in a biased way, biased against some attribute of the data. Let’s say it’s very accurate on daytime images, but very inaccurate on nighttime images. In the popular media we hear this more often referring to people and their demographics—for example an algorithm that’s very accurate on light-skinned faces and very inaccurate on dark-skinned faces. That’s the general biased algorithm definition: it’s just not fair across different attributes of the data.

Sanjay Arora: Let’s talk a bit about the social ramifications of this research. Did they play a part in your process of getting interested in bias in models and datasets?

Kate Saenko: Not in the beginning. I first became interested in the issue of dataset bias and domain adaptation when I was a PhD student. I was trying to train object recognition models and then put them on a mobile robot. The goal was to have a robot that you can tell, “Can you bring me a cup of coffee?” Or, “Can you at least recognize the cup of coffee?”

I trained the algorithm on images that I got from Amazon.com. And then I tried using it on images that the robot captured in an indoor environment in the lab. It failed in a pretty dramatic way, even though when I tried it on the Amazon.com images, it worked with much higher accuracy. So that’s what got me interested in this phenomenon: that just this shift of the domain from product images on Amazon to those in the real world destroyed the model’s ability to generalize and do a good job.

“…I would say most of my research into dataset bias is not with data about people. It’s more with objects and recognizing their differences.”

Since then I have come across more social implications of dataset bias, and have studied some of them, in particular with regard to gender bias. We have a paper on that called “Women also Snowboard: Overcoming Bias in Captioning Models,” where we look at how models can become biased across the gender dimension. Take for example models that do captioning for photos. They take an image and generate a caption of what’s in the image. And if there are people in those images, these models might say “Man” more frequently than “Woman,” or some gendered word to describe the person. That certainly has more of a social implication. But I would say most of my research into dataset bias is not with data about people. It’s more with objects and recognizing their differences.

Going back to the robot example, the problem becomes: how do I train a classifier using the labeled source domain, which is the Amazon.com images, and the target domain, which is the robot images that are unlabeled? How do I use those two datasets and train a model that is accurate on the target domain? That’s what I’ve been working on, for the most part, over the last ten years. And we’ve made a lot of progress.

Sanjay Arora: Is there a certain fixed direction you have in mind where you want to take your research in the next five years or ten years, and are there specific sub-problems that interest you on that timescale?

Kate Saenko: I want to continue pushing in the direction of making domain adaptation work better. I’m also interested in expanding that definition and saying, well, what if we don’t observe our target domain? How do we make sure the model is working well? What if the categories change? So, say in my training set I had 125 different objects, but I let my robot out into the world and it’s starting to see objects that weren’t part of the training set. How do we make sure that the robot doesn’t try to classify them as something that is labeled in the source? For example, seeing a bottle of water and trying to classify that as a flower vase because it wasn’t trained on water bottles, instead of just being able to say “Oh, that’s a new category I don’t know about.”

Looking further, or at a higher level: how do we even know that the model is looking at something that is out of its training domain? Actually, we don’t have very good ways of knowing. So in some ways the minimum I would like my AI model to do is, if it’s faced with data it hasn’t seen or isn’t able to adapt to or recognize, at least throw up its hands and say, “I know nothing about this,” or “I don’t feel comfortable with this domain.” Humans would do this, right? If you train someone to be a very good classical piano player and then you say, “Here’s a very difficult jazz piece,” they’ll say, “No, this is not what I do.” The machine, by contrast, is just going to try to play the jazz anyway and do a bad job, because it doesn’t realize you’re giving it something that it wasn’t trained on.
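A minimal illustration of that kind of abstention, assuming a trained PyTorch classifier, is to threshold the model’s softmax confidence before accepting a prediction. The function name and threshold below are illustrative, and maximum softmax probability is only a crude proxy for “I know nothing about this”; reliably detecting out-of-distribution inputs remains an open problem, as Saenko notes.

    import torch
    import torch.nn.functional as F

    def predict_or_abstain(model, image, threshold=0.8):
        """Return a predicted class index, or None if confidence is too low.

        Low maximum softmax probability is used as a rough stand-in for
        "this input looks unlike anything I was trained on."
        """
        model.eval()
        with torch.no_grad():
            logits = model(image.unsqueeze(0))   # add a batch dimension
            probs = F.softmax(logits, dim=1)
            confidence, pred = probs.max(dim=1)
        if confidence.item() < threshold:
            return None                          # "I know nothing about this"
        return pred.item()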

Sanjay Arora: This is a bit more of the technology and computing side, but what’s your experience been with open source, and especially open source machine learning frameworks and software and technologies in general?

Kate Saenko: Everything we do in my group, we open source, usually once the paper is accepted for publication. We make the code open source, and that’s more or less the standard in my field of computer vision and in other AI subfields. We open source the code and often also the model itself, so the trained model is open sourced.

The only way that we can move forward in our research field is if the next researcher can take your results and build on them. Say you invent a better way to do object recognition, and I come along and want to improve on it. Your improvement really pushes performance up, so I want to keep it, but if I can’t use it, whatever I do is still going to be below that level. I would need your code to be able to build on your improvement, and if you don’t open source your code, we just move much slower as a field.

Sanjay Arora: I’m guessing most of the frameworks that you use and the tools that you use are also open source, so PyTorch or the Python stack?

Kate Saenko: Yes. We use only open source tools in my research. In fact, at the beginning of this deep, large neural network revolution, the initial papers that published very good results with large neural networks trained on large amounts of data didn’t come with open source implementations. And even though it seemed exciting, people tended to feel like, “Well, we can’t even verify or use it in any way.” So the lab I was in at Berkeley, where I was a postdoc at the time, decided, “Well, let’s just reimplement what this paper did and make it open source,” and that became the library known as Caffe. That was a huge thing. Even a lot of companies started using Caffe because there was no other open source library. Later many open source versions of deep learning libraries came out, but for a while, Caffe was the main one.

Sanjay Arora: Does sharing of datasets play a role in AI research that might be different from other research you have done?

Kate Saenko: It’s a little hard for me to tell because I’m not intimately familiar with how research is done in other areas of computer science. But data is the lifeblood of AI research. Just like we open source all of our algorithms, we also open source all of our datasets. If you try to submit a paper to a conference that only evaluates on a proprietary dataset, that’s not going to get past review, because you can’t reproduce somebody’s research result if you don’t have their data. You can have just their model, but if you want to run it or train it on your own dataset, you have to have the data, the model, and the code.

We’re getting better at that. It’s still not perfect. People don’t open source their code right away, or they do but it’s buggy, or it’s incomplete. My students run into these issues a lot, and I’m sure we’re also guilty of it sometimes, but it’s getting better. 

On the other hand, I just thought of an example where this is not the case, and this is in the medical AI field: natural language processing of health-related patient documents, or computer vision analysis of medical scans. So in that field, because of Institutional Review Board (IRB) considerations and human subject considerations, data has been very difficult to share. So IRBs at institutions and hospitals will not let people share the data, for the most part. You can work on it in house or you have to jump through a lot of hoops to get access to it. That has slowed down the progress in that field tremendously.

Sanjay Arora: I want to move on now to another interesting area you mentioned in your Red Hat Research Day talk. One of the papers you mentioned there is on building features that are invariant to distributional changes. The model has a backbone with a classifier for the labels as well as a classifier that predicts whether the data came from the source or target distribution. The idea is to train the label classifier to minimize its loss while the backbone is updated to maximize the loss of the source-target classifier, so that the two distributions become hard to distinguish and the backbone learns invariant features. Basically, an adversarial setup much like a generative adversarial network (GAN).
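For readers who want to see the shape of that adversarial setup, here is a minimal sketch of domain-adversarial training in PyTorch. It is not the specific model from the paper discussed here; the gradient reversal layer, the tiny backbone, the feature size, and the hyperparameters are all illustrative placeholders.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; flips the gradient sign on the backward
        pass, so the backbone learns features that confuse the domain classifier."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    # Illustrative sizes: 32x32 RGB inputs, 256-dim features, 10 task classes.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
    label_head = nn.Linear(256, 10)   # task labels (available for source only)
    domain_head = nn.Linear(256, 2)   # source (0) vs. target (1)

    criterion = nn.CrossEntropyLoss()
    params = (list(backbone.parameters()) + list(label_head.parameters())
              + list(domain_head.parameters()))
    opt = torch.optim.SGD(params, lr=1e-2)

    def train_step(src_x, src_y, tgt_x, lam=0.1):
        opt.zero_grad()
        feat_src, feat_tgt = backbone(src_x), backbone(tgt_x)

        # Label loss: only the labeled source domain contributes.
        task_loss = criterion(label_head(feat_src), src_y)

        # Domain loss: the domain head tries to tell source from target, while
        # the reversed gradient pushes the backbone to make them indistinguishable.
        feats = torch.cat([feat_src, feat_tgt])
        domains = torch.cat([torch.zeros(len(src_x)), torch.ones(len(tgt_x))]).long()
        domain_loss = criterion(domain_head(GradReverse.apply(feats, lam)), domains)

        (task_loss + domain_loss).backward()
        opt.step()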

Another relatively recent idea is building equivariance/covariance into the networks, in the sense that the features in each layer transform under a certain group of symmetries; for 3D rotations, this would be SO(3), for example. This is a vague question, but are there approaches that try to learn the group of symmetries that maps one domain to another and build these equivariances into the network?

Kate Saenko: So convolutional neural networks are already somewhat invariant to translation. If you move an object in the image, they are invariant to that. I think there was a lot of work in building invariances into these networks, but I think the visual world is very, very diverse, and at some point we just don’t know what we need the invariance to.

One thing that people do a lot is normalization, so normalizing your data, normalizing the outputs of layers to make sure that everything is at least on the same scale so you don’t have huge values coming in and they somehow throw everything off. But I would say, more recently, as a field we are starting to become interested in learning without any labels—unsupervised learning. For that, we’re using some of these invariances that we know of. So we know that if I’m looking at an object and I rotate that object slightly, it doesn’t change the identity of the object. Or if I add some small amount of noise to the image, a dog still looks like a dog. Or even if I randomly change the color, it might become a pink dog, but it still looks like a dog and you’re still going to classify it as a dog. 

So we know some of these invariances that apply to our visual tasks that we’re studying, and we’re using them to train models to become invariant. But the way we’re doing it is not by changing the structure of the network, but rather, by producing them as data augmentations. We take the training image of the dog and we augment that training image by producing versions of it that have these additional variations in them, like slight rotation, cropping, adding noise, changing colors, contrast, and so on. Then we give that to the network and say, “This is still the same object. So learn that this variation we just added to it shouldn’t matter, as far as predicting that object.”
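The normalization and augmentation recipe Saenko describes can be sketched with standard torchvision transforms. The specific transforms, magnitudes, and normalization statistics below are illustrative choices, not values from the interview.

    from torchvision import transforms

    # Perturbations that should not change what the object is: crops, small
    # rotations, flips, and color changes. Normalization keeps inputs on a
    # common scale (the mean/std values are the commonly used ImageNet stats).
    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
        transforms.RandomRotation(degrees=15),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def two_views(pil_image):
        """Produce two independently augmented views of the same image.

        In supervised training both views keep the original label ("this is
        still a dog"); in self-supervised training the model is pushed to map
        the two views to similar features.
        """
        return augment(pil_image), augment(pil_image)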

Sanjay Arora: That makes sense. At least a couple of papers I looked at were using GANs or versions of GANs to map between the source and the target distributions and back, mapping the distributions themselves. Is that the most common way of trying to fix this distributional mismatch or are there other techniques too?

Kate Saenko: I think there is some work that does that, but you have to realize that GANs are also quite brittle. People have put a lot of work into getting a GAN to generate a realistic-looking face in a video. Now we have StyleGAN and StyleGAN2 generating really realistic-looking pictures of people who do not exist. But for pretty much any other type of visual object, they still don’t work that well.

So even though people have tried using GANs to translate between domains, it’s still very nascent, not at the point where it just works on any domain you want. For some very small changes in the domain, you could use a GAN: if all you did was take your dataset and try to use the same dataset but with fog or snow added, I think a GAN can learn to generate that. But if you now said, this is an extreme viewpoint change, and you want the GAN to generate complete traffic scenes from a different viewpoint, it’s going to really struggle with that.
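As a rough picture of what that kind of pixel-level translation looks like, the sketch below trains a generator to make source images look like target-domain images (say, with fog added), while a discriminator enforces target-like appearance. The tiny networks are placeholders, and practical systems add content-preservation or cycle-consistency losses that are omitted here.

    import torch
    import torch.nn as nn

    # Placeholder networks; real translators use encoder-decoder generators and
    # patch-based discriminators (CycleGAN-style architectures).
    generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 3, padding=1),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())

    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def translation_step(src_imgs, tgt_imgs):
        """One adversarial step: make translated source images look like target images."""
        fake_tgt = generator(src_imgs)

        # Discriminator: separate real target images from translated ones.
        d_opt.zero_grad()
        d_loss = (bce(discriminator(tgt_imgs), torch.ones(len(tgt_imgs), 1))
                  + bce(discriminator(fake_tgt.detach()), torch.zeros(len(src_imgs), 1)))
        d_loss.backward()
        d_opt.step()

        # Generator: fool the discriminator into calling translations "target."
        g_opt.zero_grad()
        g_loss = bce(discriminator(fake_tgt), torch.ones(len(src_imgs), 1))
        g_loss.backward()
        g_opt.step()

        # Translated images can then serve as extra "target-like" training data.
        return fake_tgt.detach()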

Sanjay Arora: Is there any work on generalizing domain adaptation or applying it to reinforcement learning to adapt across environments that are a bit different? A typical example: you have a simulated environment where you’re teaching a skeleton to run, and let’s say now you want it to run on a mountain. Of course the running policy changes a bit: you’re bent forward a bit, and things like that. Is there any work being done on applying domain adaptation techniques there in reinforcement learning?

Kate Saenko: I think that there are definitely some techniques being applied, especially when you’re dealing with visual inputs. You’re learning from visual input, as opposed to learning in a simulated environment. We have some work on that: we train a robot arm to pick up objects and manipulate objects. But because robots break easily or they take a long time to do the trial and error that you need to train a policy, we train them in simulation.

However, there’s a huge domain gap with robotics. If you train your biped robot to run in simulation, and then you put that same algorithm on a real robot, what is it going to do? Is it going to run? It’s not going to work. Ninety-nine percent of the reinforcement learning papers I see do everything in simulation, but simulation is a very narrow domain. You’re controlling everything.

And so broadly speaking, there’s a huge domain shift in all of these reinforcement learning applications, but I don’t think a lot of people are looking at that, because not a lot of people are transferring their reinforcement learning techniques into the real world. There’s work from UC Berkeley from a while back, and also from Google, on this kind of object manipulation. We also looked at it with my colleagues at Northeastern. If you want to train a robot arm to manipulate objects and actually have that algorithm generalize to real-world observations, you have to solve this problem, and then you can use some similar techniques. We even used GANs to try to solve that problem.

Sanjay Arora: For some of our problems, a lot of them at the operating system level or the compiler level, you actually just run the compiler, you run the process in your OS. Of course there are many hardware simulators, for chips or memory accesses for example, and you’re trying to learn a policy, but it’s a simulation. And then, like you said, once you transfer it, it does horribly. So just trying to minimize that domain mismatch is a pretty hard problem.

Kate Saenko: I don’t know what the right techniques are there. My students have been looking at drone control and trained a neural network policy to control a small drone, just flight control. It trained very well in simulation and achieved high reward; it followed the directions given to it to control the drone. Then they put it on the drone, and the thing crashed and almost set the place on fire. They spent at least six months, probably closer to a year, trying to figure that out and fix it. We have a paper now that fixes it, but it’s a problem.

Sanjay Arora: And especially when something actually crashes and burns, I mean, that psychologically hurts too.

Kate Saenko: Yes, it literally crashed and burned.
