She’s Taking Jeff Bezos to Task

archived recording

(SINGING) When you walk in the room, do you have sway?

kara swisher

I’m Kara Swisher, and you’re listening to “Sway.” Facial recognition technology is everywhere. A scan of your face can unlock your smartphone, let you into your building, or land you in jail for a crime you didn’t commit. And facial recognition is what Detroit police used to show up at the driveway of a man named Robert Williams and arrest him on suspicion of shoplifting. Williams spent 30 hours in a detention facility, but when police provided photos, he didn’t know who he was looking at.

archived recording

I picked that paper up, and I hold it next to my face. And I said, ‘This is not me.’ I’m like, ‘I hope y’all don’t think all Black people look alike.’ And then he says, ‘The computer says it’s you.’

kara swisher

It turns out computer algorithms aren’t so infallible, particularly when it comes to facial recognition of minorities. My guest today, Joy Buolamwini, is fighting to ensure that the tech and data that can define our destinies are not biased. She’s a digital activist based at the MIT Media Lab and founder of the Algorithmic Justice League, which sounds like it could be part of the DC Comics extended universe. But it’s actually a group of computer geeks that fights bias in coding. A couple of years ago, armed with only about 1,200 images, Buolamwini took tech giants like Amazon, Microsoft and IBM to task. And she proved that the facial recognition technologies that these companies touted were both racist and sexist — big surprise. I love a good story about taking big tech down a notch, and we’ll get there. But first, I asked Joy to break down the artificial intelligence and machine learning processes behind facial recognition.

joy buolamwini

When I think about artificial intelligence, I think about this ongoing quest to equip computers with abilities that have traditionally required human intelligence, right? So visual perception, speech recognition, language translation, recognizing a face. Now there are different approaches on this quest for intelligence. And one of those approaches that’s been very successful is called machine learning — learning from data sets. So for example, to teach a machine how to detect a face, you can provide many examples of photos. And then, you can use different techniques to learn the representation of a face.
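
To make that “learning from data sets” idea concrete, here is a minimal sketch of training a face/not-face classifier from labeled example images. It is not a system Buolamwini describes; the random arrays stand in for real photos, and logistic regression stands in for the far more complex models real products use.

```python
# A toy "learning from examples" sketch: random arrays stand in for labeled
# 32x32 grayscale image patches (1 = face, 0 = not a face).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples = 2000
X = rng.random((n_examples, 32 * 32))        # placeholder pixel data
y = rng.integers(0, 2, size=n_examples)      # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model fits weights that map pixel patterns to a probability of "face."
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predictions are probabilities -- guesses -- not certainties,
# which is the point the conversation turns to next.
probs = model.predict_proba(X_test)[:, 1]
print("Example predicted probabilities:", probs[:5])
```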

kara swisher

And it’s also predictive. It’s giving guesses about what could happen.

joy buolamwini

Yes, and I love that you used the word ‘guesses’ because ‘guess’ speaks to the fact that these predictions are not certain.

kara swisher

Right, but it’s treated with some certainty because it’s a computer. People think of it that way.

joy buolamwini

Yes, the fact that it’s coming from a computer can give a sense of authority and a sense of objectivity that isn’t actually true.

kara swisher

I remember when Google was starting to do photo identification, and they just fed all kinds of pictures of the Eiffel Tower. And eventually, it understood this was the Eiffel Tower, or this is a cup, or this is a spoon, or whatever it is. And kids learn the same way. I have a small child now, and she’s looking at things. And we were doing that this morning. But when it comes to faces, it’s very different. And your argument is that AI systems and facial recognition technology are shaped by prejudices. And it’s something you call the “coded gaze.” And this is something you’ve experienced firsthand. So why don’t you talk about how you first became interested and what happened?

joy buolamwini

I was a graduate student at MIT. I took a course called Science Fabrication. You read science fiction, and then you create something you probably otherwise wouldn’t make.

kara swisher

Like a time machine, whatever.

joy buolamwini

A time machine. I wanted to shape-shift. But given the laws of physics and the fact that we only had six weeks, I thought instead of shifting my physical body, maybe I could shift my reflection in a mirror. And so I worked on this project where, in the mirror, it could look like I had become a lion or somebody I admire, like Serena Williams.

kara swisher

All right, which is common now in phones. They can put a face on top of yours for fun.

joy buolamwini

Right, so it’s like, think of it as a Snapchat filter, but instead of it being on your phone, it’s through a mirror. So it looks like it’s on your reflected face.

kara swisher

And what year was this?

joy buolamwini

I think this was 2015 when I first got to the Media Lab —

kara swisher

And it was called the Aspire Mirror.

joy buolamwini

Yes.

kara swisher

OK, what happened?

joy buolamwini

So as I was building it, I put a webcam on it. And I then added some face tracking software so the image or the filter could follow my face in the mirror. But it didn’t really work that well until I literally put a white mask on my dark face. And that’s when it started to track my movements. And so, I am a dark-skinned woman. I call myself highly melanated. So I was using the system like my light-skinned friends, colleagues were, you know? And I did not have the same results.

kara swisher

And you had a white mask, or you went and got one when you saw it didn’t work for you.

joy buolamwini

Yeah, it was around Halloween time. And the white mask happened to be in my office while I was testing the systems.

kara swisher

Like a Venetian mask, kind of thing.

joy buolamwini

Yes.

kara swisher

But it couldn’t track your face at all.

joy buolamwini

Not consistently. So to get my project done, wearing the white mask made my life a lot easier. And so it was this experience of literally coding in white face at MIT, this epicenter of innovation, that made it quite clear to me there could be some issues here. So I really started looking into, are machines truly neutral? Is it just a one-off situation? Is it just my face?

kara swisher

So what did you think when this happened? Did you think immediately, oh, no, it’s not able to track Black faces?

joy buolamwini

Well, because I have worked with computer vision before and I had worked with facial detection systems before, this wasn’t my first time encountering these sorts of issues. I thought it was mainly because of lighting and illumination. So this is actually what led me to do my MIT master’s thesis, where I could do a more comprehensive investigation. After I had this experience of coding in a white mask, I did a TED Talk about it. And I thought, you know what? People might check my claims. Let me check myself. So I ran my TED profile image at the time through commercially available AI systems that analyze faces. And some didn’t detect me at all. And the ones that did detect my face labeled me male. And so, that’s what led me to say, oh, let me test how gender classification is working with these systems. So the personal experience then led to the comprehensive research.

kara swisher

So explain the uses of facial recognition. And where does this technology show up in our everyday lives?

joy buolamwini

I mean, you can have law enforcement use of facial recognition. You actually had this case leading to a wrongful arrest. Robert Williams was arrested in front of his two young daughters, detained for around 30 hours with this kind of use of facial recognition technologies. You might have landlords using it as a way of entering buildings. You’ve likely encountered it online, social media, Facebook tagging faces automatically. You’ve seen it when Snapchat filters are being applied. And I’m using the term facial recognition technologies as a catch-all for many ways in which you’re analyzing a face.

kara swisher

But it’s everywhere, including on your iPhone or any other thing. It will be on planes if you use Clear.

joy buolamwini

Yeah, transportation. Another key area that we’re seeing, especially with COVID, is the use in education and e-proctoring.

kara swisher

So how many companies are dominating the AI space right now? And who are the major players? Why don’t you lay them out for people?

joy buolamwini

Sure, so in the big nine, we have the G-MAFIA — Google, Microsoft, Amazon, Facebook, Apple, and IBM — in the United States. And then in China, where we also have some heavy hitters, right, we have Alibaba, we have Baidu, we have Tencent. And so these nine major companies are taking the lion’s share when it comes to the AI work that’s being done.

kara swisher

So this starts with a database of images that are used to train AI. So where do the images come from? Explain how they get into the system because people are producing images at a quantum rate.

joy buolamwini

Yes, so we’re seeing the unbridled harvesting of face data. And you ask where are these images coming from. In the case of Clearview AI, they scraped more than 3 billion photos based on social media posts and so forth. So as we have our cell phones and as we’re uploading images, right, companies like Clearview AI can go online and take that data, scrape that data.

kara swisher

And people are uploading them themselves, constant pictures of themselves and their close family members.

joy buolamwini

Well, sometimes what you’ll see, as happened with the case in Flickr, people upload images online for one purpose, not knowing it’s going to be repurposed for something else. So I think it’s really important not to make it as if, oh, people uploaded their photos because they wanted to be so —

kara swisher

No, they did not want to be.

joy buolamwini

Right, it was used in a different context.

kara swisher

That’s right. It’s called scraping sometimes. And there’s all kinds of ways companies can get a hold of pictures to do these kind of things without your knowledge.

joy buolamwini

And that’s the major part, right? This is happening without consent.

kara swisher

All right, we’ll get to Clearview AI in a minute. So explain actual biometric scanning because that’s another thing. In parts of London, we’ve seen police deploy cameras mounted on vans. But they’re everywhere there. They use live facial recognition in public areas. People passing by might not know their face prints can be scanned. Talk about how automated facial recognition works in a public setting.

joy buolamwini

Yes, so when we’re seeing that kind of use of facial recognition for surveillance, which is one of the most dangerous uses of facial recognition, what’s happening is a camera feed is being analyzed. So it’s ongoing, it’s continuous. So it’s covert. You don’t necessarily know it’s happening because it can be on any camera anywhere. And what’s happening is your face is being analyzed, and then you have a unique faceprint — think of it like your fingerprint — that’s being taken and compared to a data set of other faces. So it could be faces of a wanted terrorist. It could be faces from a DMV, right? It could be faces from passport photos that have been collected. So your face is being analyzed against pre-existing data to see if you show up as someone of interest.
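
To illustrate the matching step Buolamwini describes, here is a minimal sketch of one-to-many faceprint comparison. The 128-dimensional random vectors, the gallery names, and the 0.8 threshold are all illustrative assumptions, not details of any real deployment; real systems derive faceprints from a trained neural network.

```python
# Minimal one-to-many face matching sketch: compare a probe faceprint
# against a gallery (e.g., DMV or passport photos) and flag the best match.
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gallery: hypothetical faceprints for people already in some database.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# Probe: the faceprint extracted from one frame of a live camera feed.
probe = rng.normal(size=128)

best_name, best_score = max(
    ((name, cosine_similarity(probe, vec)) for name, vec in gallery.items()),
    key=lambda item: item[1],
)

THRESHOLD = 0.8  # hypothetical operating point; lowering it yields more false matches
if best_score >= THRESHOLD:
    print(f"Flagged as {best_name} (similarity {best_score:.2f})")
else:
    print(f"No match above threshold (best score {best_score:.2f})")
```

Everything hinges on that threshold and on how well the upstream model works across demographic groups; a “match” is still a statistical guess.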

kara swisher

We mentioned Clearview AI, which I think is sort of in the crosshairs. And although law enforcement used the tool a couple of years ago to catch a suspect in a child sex abuse crime, The Times also called Clearview a company that could, quote, “end privacy” as we know it. There was just a BuzzFeed investigation where more than 7,000 individuals from about 1,800 publicly funded agencies, including police forces, ICE, the Air Force, public schools, were using Clearview software to search millions of faces in the U.S. before February 2020, much of the time, as you said, without training or any oversight. A lot of the time, the users’ bosses didn’t even know they were using the tool. Talk a little bit about this because it doesn’t just run off mug shot databases. It runs off social media and websites.

joy buolamwini

Yes, so Clearview changes the game because everybody is in the lineup. You now have a perpetual lineup of anybody who they’ve scraped. So, 3 billion photos. So this means instead of a case where you know it’s constrained to just people who are in this mug shot data set, you could potentially be matched when they run their search over their more than 3 billion photos. So this means you have such a wide dragnet with no consent, again, to even be part of a search in the first place.

kara swisher

Right, so you’re being searched without your knowledge and without knowing you’re in a database. What does the fact that it’s been conducted secretly tell us about how widely Clearview has been spreading its services? Because this was being used so widely, just sort of off the cuff.

joy buolamwini

I think what is really interesting to note here is the people at the top of these organizations aren’t always aware that these technologies are being used. And part of it is how they are introduced. So, Clearview might go to some police officer or some other individual in a company and offer a free trial. So you get them hooked. And then you start having other people adopt it. And so the ways in which these systems are introduced shows us it is a wild, wild west. And so the case of Clearview AI shows what happens when we don’t have these safeguards.

kara swisher

Right, at the same time, the C.E.O. of Clearview said it’s immune to problems with algorithmic bias. Have you tested its accuracy? What do you think of that argument?

joy buolamwini

The whole framing of immune to algorithmic bias does not compute with me if you’re using anything that is based on probabilistic methods. But more importantly, none of these AI systems are immune to systemic bias.

kara swisher

And it gives them more certainty if they have a computer confirming their own biases, especially if the computer is wrong. So you’ve heard that argument that facial recognition is only something to worry about if you’re a criminal. What’s your response to that?

joy buolamwini

Clearview shows you that it’s already not the case because you’re already in this —

kara swisher

You’re in the lineup.

joy buolamwini

— perpetual lineup to begin with. And I think the other part of it is thinking about privacy in general. What does it mean if you cannot walk the streets with any sense of being anonymous, right? So where you go to worship, where you go after hours, all parts of your life can be on surveillance. So if you have a face, you have a place in this conversation from a standpoint of civil rights, from a standpoint of privacy as well.

kara swisher

Yes, so this documentary you’re featured in, “Coded Bias,” which is now available on Netflix and PBS, there’s a 14-year-old Black teenager who police in London misidentify as a suspect. He’s walking in his school uniform. And the cops stop him before letting him go. People gather around, and his friends are like, what are you doing? And the police were like, well, just in case, we’re making sure. This is their argument — that the benefits outweigh the harms, that we need to do it just in case. What’s your response to this?

joy buolamwini

Who do these burdens fall on for just in case, right? When we looked at stop and frisk, it was just in case. And it was unjust. So, to me, that kind of argument, we have to look at the harms. What’s the experience to that 14-year-old boy being stopped by the police? So, for the police, it’s a just in case. For this boy, it’s a traumatic experience. And also, Robert Williams being wrongfully arrested, was he released eventually? Yes, but there’s still an indelible impact. So it’s not just a case of, we’re taking extra precautions because when you take those extra precautions, real people, real lives are in the crosshairs.

kara swisher

So one of the things, when I interviewed Andy Jassy, who’s about to become the Amazon CEO, I asked him about Amazon’s Rekognition technology and some of its shortcomings, which there have been lots of tests of, too. And he was dismissive about how facial recognition could be misused. He said — let me read the quote exactly: “Like anything else, with whether it’s the private sector companies or police forces, you have to be accountable to your actions. You have to be responsible if you misuse it.” He was talking about law enforcement agencies. But whose responsibility should it be if the technology is misused or the technology is kind of glitchy, as in the case with Amazon?

joy buolamwini

Yeah, I definitely think vendors have a responsibility. And this is why, when we’re looking at a situation like the FDA, you wouldn’t say, oh, well, if you use a vaccine and it doesn’t work, the person who used the vaccine is the one who’s accountable. You would say the vaccine makers have some accountability. Now, at the same time, the people who purchased the vaccine, right, should have had processes in place to test the verifiability of the claims. So you don’t just purchase it without some assurance. I do believe there’s accountability throughout, but I do not believe in this abdication of responsibility that just says it’s user error or these companies are misusing it. And I mean, there’s even a know-your-customer principle, right, where you have to have some responsibility as to who you’re selling certain products to. So, it’s also not just the case that these systems are somehow being mishandled or misused.

kara swisher

So as the case with most technology, there are benefits and drawbacks to AI. Talk about the benefits of AI to our society.

joy buolamwini

Well, for me, one of the promising areas is earlier detection of breast cancer. And so, we talk about how people in African nations are using that kind of technology to look at crops to assess for various diseases, and so forth. So it’s not to say it’s all bad.

kara swisher

Very much like the internet. It is what it is. But some law enforcement agencies are using it, for example, to help catch insurrectionists in the attack on the Capitol. They put pictures up. They’re looking for them using facial recognition. When a gunman opened fire at a newsroom in Annapolis in 2018, he refused to cooperate and give his name. The police used facial recognition technology to ID him when the fingerprinting analysis was taking too long. And then some people feel like, when it comes to tackling COVID, you could have applications.

joy buolamwini

Well, you might say, OK, we’re going to use facial recognition technology, and it’s going to allow us to be more efficient. But then what happens is, if you’re getting biased technology, right, that’s leading to wrongful arrests, the promises don’t necessarily measure up to the actual outcomes. A lot of people in the police departments or law enforcement, they’re saying, we don’t even know how to configure these systems in the first place. So we are at the mercy of what these companies are telling us.

kara swisher

So a couple of years ago, you tested the facial recognition software of some big tech companies, like Amazon, Microsoft, IBM, and the Chinese company Face++. What did you find of all those?

joy buolamwini

Yeah, so we did two studies, right? So the first study is based on my MIT thesis. And what I found was, for the task of gender classification, all of these systems overall worked better on the faces labeled men than women. They all worked better on lighter faces than darker faces. And none of them had error rates of more than 1% for lighter-skinned men. But when it came to women of color, the highly melanated like me, you had error rates that were more in the 30-percent range. So that was the first study that I did. And that study included Microsoft, it included IBM, it included Megvii or Face++, the Chinese tech company. So then we did a follow-on study. That study included Amazon and Kairos. What was so surprising to me about the follow-on study was that the error rates for Amazon were like those of its peers the year prior. So it’s like the test results had been out for a year. And in fact, I even wrote a letter to Jeff Bezos before the paper came out, saying, look, we haven’t even submitted this yet, but we’re seeing some issues on the gender classification aspect of Rekognition.
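
The core method behind those numbers is a disaggregated evaluation: compute error rates per intersectional group instead of one overall score. The sketch below uses fabricated records, not the actual Gender Shades data or results, just to show how a decent overall accuracy can mask a high error rate for one group.

```python
# Disaggregated evaluation sketch: per-group error rates for a gender classifier.
# Records are fabricated; "lighter"/"darker" stands in for Fitzpatrick skin-type groupings.
from collections import defaultdict

records = [
    # (true_gender, skin_type, predicted_gender)
    ("male", "lighter", "male"),
    ("male", "lighter", "male"),
    ("female", "lighter", "female"),
    ("male", "darker", "male"),
    ("female", "darker", "male"),     # misclassified
    ("female", "darker", "female"),
    ("female", "darker", "male"),     # misclassified
    ("male", "darker", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true_gender, skin, predicted in records:
    group = (true_gender, skin)
    totals[group] += 1
    if predicted != true_gender:
        errors[group] += 1

overall = sum(errors.values()) / len(records)
print(f"Overall error rate: {overall:.0%}")
for group in sorted(totals):
    print(f"{group[0]:>6} / {group[1]:<7} error rate: {errors[group] / totals[group]:.0%}")
```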

kara swisher

You’re talking about Rekognition, Amazon’s brand of face recognition software, which they cleverly spell with a K.

joy buolamwini

This makes us concerned about other tasks that Rekognition is being used for.

kara swisher

We gave you the test, and still, you got a D.

joy buolamwini

Right.

kara swisher

So Jeff never replied?

joy buolamwini

I didn’t get a reply back.

kara swisher

OK, all right. OK, just to be clear, OK.

joy buolamwini

And this was especially concerning because at the time, Amazon and Microsoft were both going for really lucrative contracts with the Pentagon to provide AI services. And I think our research came out as those decisions were being made at that time.

kara swisher

So one of the things, Amazon did try to discredit your research by saying facial analysis and facial recognition are completely different in terms of underlying technology. But IBM and Microsoft responded to your test by saying they would make facial recognition technology more accurate. How do you move them to do this if Amazon got this D a year later? Where are the pressure points to get them to do that?

joy buolamwini

Well, what we have to think about is, what are we moving them to do? If what we’re moving them to do is, quote unquote, “improve the technology,” we’re nonetheless helping powerful actors develop powerful tools that can be used as tools of oppression, right? Tools of social control. With the Amazon situation, we saw after the murder of George Floyd, right, that IBM, then Microsoft and Amazon, pulled back from selling facial recognition technology to law enforcement in different capacities. And I think that response of not even selling the technology, right, is another approach. The answer isn’t always, let’s go collect more data and optimize tools of oppression.

kara swisher

All right, so when you have these systems that are very effective, a lot of people point to China, which is using very aggressive AI modeling and has instituted facial recognition widely. They have a social credit scoring system, where Chinese citizens submit to facial recognition for all kinds of things, using public transport, shopping, getting internet service. Someone in the documentary called it “algorithmic obedience training.” I’ve heard it dozens of times. Like, in the US, we’re not as bad as China. Some of this is necessary. How do you compare the two countries? Because one person in the film said, even though it looks like China is so bad, at least they’re transparent.

joy buolamwini

Well, transparent to some extent. I mean, one thing we have to look at, right, is the fact that we’re a democracy. And so the level of state control and state intervention, and also the level of coordination and the level of data collection, is on a different level than in the U.S., because we’re looking at different kinds of political systems in the first place. I will say another area where China is different from the U.S. is the level of investment in artificial intelligence. And I think there is an opportunity for the U.S. to invest more in AI, but invest in a way where we’re looking at issues of algorithmic harms. But at the moment, what we’re seeing is the adoption of systems by government agencies that then allow more state control, bringing us closer and closer to China.

kara swisher

Does it give you pause that if you’re getting them to be more accurate and the overall system’s more effective, it also means they can use them for surveillance? A lot of people are resistant to this technology.

joy buolamwini

I think when people look at the research that I’ve done, one assumption is that the use of this research is to improve technical systems. For me, the biggest takeaway from our research was the fact that we have to ask questions, and we can’t just assume, whether it’s facial recognition or other kinds of AI systems, that just because it’s coming from a major tech company, it works as advertised. But the other thing that was really important for me was it was a counternarrative to the sophistication of tech. What does it mean that a young Black woman with a data set of 1,270 images can put some of the largest tech companies on their toes? They’re supposed to have the best of the best, right, working on artificial intelligence. How did you miss something so stark, so impactful, and so glaring? So when I look at the research, there is the part of, OK, let’s go optimize these systems, but there’s another part, which is a challenge to the tech industry itself. And so that was the major thing, right, to pop the bubble.

kara swisher

That they’re so smart.

joy buolamwini

That it’s so sophisticated. [MUSIC PLAYING]

kara swisher

We’ll be back in a minute. If you like this interview and want to hear others, follow us on your favorite podcast app. You’ll be able to catch up on “Sway” episodes you may have missed, like my conversation with NASA’s Diana Trujillo, and get new ones delivered directly to you. More with Joy Buolamwini after the break.

I want to play a clip from testimony at the House Oversight and Reform Committee’s hearing on facial recognition. This was 2019, when you were being questioned by New York Congresswoman Alexandria Ocasio-Cortez.

archived recording (alexandria ocasio-cortez)

Ms. Buolamwini, I heard your opening statement, and we saw that these algorithms are effective to different degrees. So are they most effective on women?

archived recording (joy buolamwini)

No.

archived recording (alexandria ocasio-cortez)

Are they most effective on people of color?

archived recording (joy buolamwini)

Absolutely not.

archived recording (alexandria ocasio-cortez)

Are they most effective on people of different gender expressions?

archived recording (joy buolamwini)

No, in fact, they exclude them.

archived recording (alexandria ocasio-cortez)

So what demographic is it mostly effective on?

archived recording (joy buolamwini)

White men.

archived recording (alexandria ocasio-cortez)

And who are the primary engineers and designers of these algorithms?

archived recording (joy buolamwini)

Definitely white men.

kara swisher

What’s the link between the mostly white men who program these algorithms and how that ends up creating a social risk factor for certain people?

joy buolamwini

Yes, well, what I’ve seen is it leads to a certain kind of privileged ignorance when you have a largely homogeneous group of people developing technology that’s deployed on the rest of society. And so, in my experience, the very people making the systems are the people who oftentimes are least likely to suffer the harms. And so the perspectives on what can go wrong either aren’t being prioritized if they’re known, right? They’re dismissed. Or sometimes, it’s not even a question that’s being asked. And this is what I saw with my own research. One of the major contributions of my MIT work was actually saying, wait a second, the ways in which we’re even evaluating systems for how well they work don’t include what I like to call the undersampled majority — women and people of color. You might have a false sense of progress because it works well for you and your buddy down the hall.

kara swisher

So how much does it matter who is choosing the training data to feed in the algorithms?

joy buolamwini

Well, it depends on the methods that are being used, as well as who. And I think sometimes, there is this perception that, oh, if you have a more inclusive or diverse workforce, then you’re going to address the problem. But we also can’t kid ourselves: that’s just a Band-Aid if we don’t change the underlying processes.

kara swisher

So the AI research community looks like what?

joy buolamwini

The AI research community looks like a lot of the pale male data sets I was taking a look at, right? I think women are under 14%, hovering around those numbers, and women of color are in the single digits. And so this is a very homogeneous space. And it also means the type of research that’s conducted, the types of research questions that are even considered worthy of study —

kara swisher

And what gets funded.

joy buolamwini

— and absolutely what gets funded is based on the priorities of who’s in charge.

kara swisher

All right, tech companies are pretending, at least, to do their own algorithmic hygiene, I guess. But two high profile members of Google’s ethical AI team were recently fired, Timnit Gebru and Margaret Mitchell. Timnit is a friend and collaborator of yours. She says she was fired for writing a paper that criticized a new kind of language technology that is potentially biased. Mitchell was fired after defending her, but Google says Mitchell violated the company’s code of conduct and security policies. What did their dismissals tell you about Silicon Valley’s tolerance for being criticized?

joy buolamwini

Yes, so the issues with Dr. Timnit Gebru and Dr. Meg Mitchell show us that as long as the exercise of AI ethics does not impede the business model or the bottom line, it is allowed to continue. The fact that they were dismissed, for reasons largely tied to doing their work, tells us, again, that change cannot come from the inside.

kara swisher

They cannot algorithmically clean themselves; apparently, they need someone else to do so. So let’s get into what to do about that. If they cannot algorithmically clean themselves or conduct proper hygiene, then what? In the U.S., concern about facial recognition and AI seems to have bipartisan support, and you testified at that hearing quite a while ago. So where is Congress on the issue now, when it comes to oversight and creating rules for the ethical use of technology?

joy buolamwini

I’m happy to report that in 2020, Senator Markey actually introduced the Facial Recognition and Biometric Technology Moratorium Act. And so, this is one of the most comprehensive pieces of legislation that has been introduced when it comes to actually putting in some red lines for facial recognition technologies and other remote biometric technologies in the U.S. And so, the ACLU and 40 other civil rights organizations have urged the Biden administration to go ahead and pass that legislation on the federal level. When we’re looking at the city level, we’ve seen cities around the country put serious limitations on these technologies. So you are starting to see some pushes when it comes to specific types of technologies. But when it comes to larger issues of algorithmic bias or, pulling back, just regulating big tech in general, we are still far behind.

kara swisher

Far behind. So your friend Cathy O’Neil said in the documentary “Coded Bias,” we need an FDA for algorithms. What are the benefits and limits of an FDA-type model?

joy buolamwini

I think one of the key benefits is the fact that there’s some level of oversight that’s provided. Because at the moment, you can basically sell and market almost any type of AI system. And there generally aren’t checks to even verify the claims. When you brought up Amazon, saying that police departments need to make sure they’re not misusing these systems and so forth, what are the guidelines and standards that are in place in the first place? And how do you even know that the system being sold by Amazon or others actually does what it says it’s going to do?

kara swisher

Tech people just love that one, telling them what to do. It’s going to go over like a —

joy buolamwini

Telling them what not to do.

kara swisher

Yeah, yeah. Well, we want to be regulated, Kara. Mm-hmm, OK.

joy buolamwini

They want to write the regulations.

kara swisher

Indeed. But Microsoft did back a bill in Washington State to require notices in public places —

joy buolamwini

That they wrote.

kara swisher

— where facial recognition is used, to ensure government agencies get a court order when looking for specific people. That was written by a Microsoft employee who is also a state senator. What are some of the protections you would advocate for?

joy buolamwini

I would advocate for community control over police surveillance, of which there are model bills. So this doesn’t just give you a notice. This gives you a choice as to whether or not these systems should be used in the first place. So we need to go back to a model where people actually can decide. So when I look at the bills that are oftentimes introduced that are heavily championed by tech companies, the type of accountability that really involves the “excoded,” right, the people who are most impacted by these systems, oftentimes isn’t there. So the legislation that I’ve seen from tech companies is regulation light, unsurprisingly.

kara swisher

Yeah, which is what they want. See, we’re doing something about it.

joy buolamwini

We’re doing something, but the level of accountability isn’t there.

kara swisher

All right, I want to finish up by talking about what’s worrying you in what’s coming in AI. Is it multimodal biometrics, for example, which uses more than one type of biometric? It could be your voice plus your gait plus your face. One of the things during the pandemic was people wearing face masks, and there’s technology that tries to get around that by filling in faces. These are things I’m worried about. What are you worried about?

joy buolamwini

Yes, I’m certainly worried about the rise of remote biometrics for sure. I think something else that I’m starting to see more and more: if there is an ism, it’s happening, right? And we often talk about racism or sexism or classism. What we seldom talk about is ableism. And I’m concerned that as more and more of society is moving towards algorithmic systems, the impact of ableism is going to be further pronounced. And I mean, within a lifetime, you might become disabled at any particular moment in time. So we have to look at how ableism can creep in when we’re thinking about the use of AI systems, let’s say, in education or in healthcare, or especially in employment, right, where we might analyze your voice, or we might analyze your face, or we’re tracking how you’re working, right, to inform your future at a particular company or whether you’re even deemed worthy of a job. And I think also looking at the labor dynamics as well, when it comes to how these systems are created, but also how algorithms are being used to monitor workers.

kara swisher

Which is an issue with Amazon.

joy buolamwini

Absolutely. So when I think about the trends, I’m thinking about where people are being harmed by what’s being introduced.

kara swisher

Joy, we really appreciate it. Thank you so much.

joy buolamwini

Thank you. [MUSIC PLAYING]

kara swisher

“Sway” is a production of New York Times Opinion. It’s produced by Nayeema Raza, Blakeney Schick, Heba Elorbany, Matt Kwong, and Daphne Chen; edited by Nayeema Raza and Paula Szuchman; with original music by Isaac Jones; mixing by Erick Gomez; and fact-checking by Kate Sinclair and Michelle Harris. Special thanks to Shannon Busta and Liriel Higa. If you’re in a podcast app already, you know how to get your podcasts, so follow this one. If you’re listening on The Times website and want to get each new episode of “Sway” delivered to you, download any podcast app, then search for “Sway,” and follow the show. We release every Monday and Thursday. Thanks for listening. And you better be listening because I’m watching you. No, I’m not. But I am.
