Who's in the Video
Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in[…]
Amway supports a prosperous economy through having a diverse workplace. Companies committed to diversity and inclusion are better equipped to innovate and drive performance. For more information, visit amwayglobal.com/our-story.

The best hiring manager might just be the computer sitting on your desk. AI and ethics expert Joanna Bryson posits that artificial intelligence can go through all the resumes in a stack and find what employers are missing. Most humans, on the other hand, will rely on biases — whether they are aware of them or not — to get them through the selection process. This is sadly why those with European-sounding names get more calls for interviews than others. AI, she says, can change that. Joanna is brought to you today by Amway. Amway believes that diversity and inclusion are essential to the growth and prosperity of today's companies. When woven into every aspect of the talent life cycle, companies committed to diversity and inclusion are the best equipped to innovate, improve brand image and drive performance.

Joanna Bryson: Can AI remove implicit bias from the hiring process? “Remove”, entirely remove? No.

But as I understand it, I've had multiple people tell me that it's already reducing the impact of implicit bias, so they're already happy with what they're seeing.

So what is implicit bias, first of all? It's important to understand that implicit bias and explicit bias are two different things.

Implicit bias is stuff that you're not conscious of; you're not aware of it; it's hard for you to control; it's probably impossible for you to control.

It's impossible for you to control right now, on demand. You might be able to alter it by exposing yourself to different situations and changing what we in machine learning call priors, that is, by changing your experiences.

So maybe if you see more women in senior positions you'll become less implicitly sexist, or something like that.
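The "priors" idea above can be sketched with a toy Bayesian update, in which new experiences shift a prior belief. This is only an illustrative analogy; all the numbers here are made up.

```python
# Illustrative beta-binomial update: a "prior" belief shifts with new experience.
def update_beta(alpha, beta, successes, failures):
    """Return posterior Beta parameters after new observations."""
    return alpha + successes, beta + failures

# Hypothetical prior belief that a senior hire is a woman: Beta(2, 8), mean 0.2.
alpha, beta = 2, 8
# Observe 10 senior positions, 6 of them held by women.
alpha, beta = update_beta(alpha, beta, successes=6, failures=4)
posterior_mean = alpha / (alpha + beta)  # 8 / 20 = 0.4
print(round(posterior_mean, 2))  # 0.4
```

The point of the sketch: the belief itself isn't directly controllable, but the exposure that feeds it is, which mirrors the claim that seeing more women in senior positions can shift implicit expectations.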

But anyway, explicit bias is like “I’m going to choose to only work with women” or “I’m going to choose only to work with men” and I know that and I'm conscious about it.

So HR departments are reasonably good at finding people who, hopefully honestly, are saying, "Yeah, I'm not going to be racist or sexist or whatever-ist; I'm not going to worry about how long somebody's name is or what their ancestors' country of origin was." So hopefully HR people can spot the people who sincerely are neutral, at least at the explicit level.

But at the implicit level, there's a lot of evidence that something else might be going on. Again, we don't know for sure whether it's implicit or explicit, but what we do know is that in the paper we did in 2017, one of my co-authors, Aylin Caliskan, had this brilliant idea of looking at the resume data. There's a famous study showing that if you send out identical resumes and the only thing you change is African-American names versus European-American names, the applicants with European-American names get 50 percent more calls in to interview, with nothing else changed.

And so now people are talking about "whitening" their CVs just to get that chance to interview. Anyway, it looks, by the measures we used with the vector spaces, as if the same data that explains implicit bias also explains those choices on the resumes.
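The vector-space measurement Bryson refers to (the word-embedding association tests in the 2017 Caliskan et al. paper) boils down to comparing cosine similarities between word vectors. A rough sketch, using made-up toy 3-d vectors in place of real embeddings; the function names and numbers are illustrative, not the paper's actual code:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to 'pleasant' attribute vectors minus 'unpleasant' ones,
    in the spirit of the WEAT-style measures."""
    return (np.mean([cosine(word_vec, a) for a in pleasant])
            - np.mean([cosine(word_vec, a) for a in unpleasant]))

# Toy vectors standing in for real word embeddings (made up for illustration).
pleasant = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.2, 0.1])]
unpleasant = [np.array([-1.0, 0.1, 0.0]), np.array([-0.9, 0.0, 0.2])]
name_a = np.array([0.8, 0.3, 0.0])   # a name embedded nearer "pleasant"
name_b = np.array([-0.7, 0.2, 0.1])  # a name embedded nearer "unpleasant"

print(association(name_a, pleasant, unpleasant) > 0)  # True
print(association(name_b, pleasant, unpleasant) < 0)  # True
```

In the real study, these association scores computed from embeddings trained on ordinary web text tracked the human implicit-bias results, including the resume callback gap.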

So does that mean people are looking at a resume and explicitly saying, "Oh, I think that's an African-American"? Or were they just going through huge stacks of CVs, and some didn't jump out at them in the same way that others did? Because we're pretty sure that by the time the candidates are all sitting in the room together, at that point things were okay.

And so what the AI is doing for them is helping them pick out the characteristics they're looking for and ignore the other characteristics. It helps them detect the things they wanted: when they were sitting in the room with multiple pairs of eyes looking at a resume, they were starting from the right place, and they're finding people who had been falling through the cracks.

A lot of people have trouble because there aren't enough good people applying, or they thought there weren't enough good people applying, but actually they were missing people, because they didn't see the qualifications buried in the other stuff when they were leafing through these stacks.
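One minimal sketch of what such a screening system might do, assuming a simple keyword-matching approach: score each resume only on the qualifications the employer actually listed, ignoring everything else. The required skills and resume text here are hypothetical.

```python
# Toy structured screening: score resumes only on listed qualifications.
# The skill list and resume text are hypothetical examples.
REQUIRED = {"python", "sql", "statistics"}

def score_resume(text):
    """Fraction of required qualifications found anywhere in the text."""
    words = set(text.lower().split())
    return len(REQUIRED & words) / len(REQUIRED)

resume = "ten years of statistics work strong python and sql background"
print(round(score_resume(resume), 2))  # 1.0
```

Real systems are far more sophisticated, but the design point is the same: by scoring only on the stated qualifications, the system can't be distracted by a name or other irrelevant cues the way a tired human leafing through a stack can.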

So a lot of people are reporting that they have great data, or that they're very pleased with the results, but that's privately and off the record, and I can't get anyone to go on the record.

Just recently at Princeton, the Center for Information Technology Policy ran a meeting about AI, and somebody, again under the Chatham House Rule so I can't say who it was, but from an organization that's sort of between corporate and, anyway, a special kind of organization, said that they're going to try to do this, and so I begged them to document it. I said, look, you're in a different situation, you don't have ordinary customers; please document the results fully and then publish papers about it so we can really see what the outcomes are.

So I hope we'll have that data, but so far I can only tell you that people are saying it really is working.

As for the possible shortfalls of that kind of situation: well, first of all, can you be sure you can eliminate bias that way? No; there are all kinds of ways you can accidentally pick up on things.

So even if you don't have gender as a feature, the system might recognize gender from the name, for example. So there are ways that machine learning picks up on regularities that are illegal to use, and, again, you have to do your own auditing and make sure that isn't happening.

And I guess that's the biggest concern. Of course, anytime you scan something and make it digital, that makes it amenable to hacking, so you have to be careful about that.

And I guess the biggest thing is: don't believe that just because you've automated part of a process, you've made it fair. You have to keep checking; just like anything else, you have to keep working to improve it.

But yeah, when you put these things in front of you and write them down, then you have the potential to keep improving.

I guess there's one other thing, which I haven't mentioned, which is that once you've automated the process you do open the door for somebody who is, say, an evil racist to go in and actually tweak things and make it so that you get all one race.

So you need to make sure that there's adequate oversight and regular auditing. People worry about accidentally introducing bias, and that's good, we should worry about that, but we should be really worried about deliberately introducing bias.

That's the thing, I think, again, because people imagine artificial intelligence is like space aliens, or actually almost like the Greco-Roman or Nordic gods: "Maybe we can pray to them correctly and they'll give us what we want, but they're capricious and we're not sure."

No, it's not like that. It really is something that we have an opportunity to try to fix, and it works in systematic ways. But it's important to understand that people are writing it, and that means some people will make mistakes, some people will be sloppy, some people will do what they sincerely think is the best thing but that actually isn't legal, and some people will go out of their way to do bad things, because they're just vandals or because that's how they got elected or whatever.

