Would companies be more diverse if A.I. did the hiring?

The best hiring manager might just be the computer sitting on your desk, says AI expert Joanna Bryson.

Joanna Bryson: Can AI remove implicit bias from the hiring process? “Remove”, entirely remove? No.

But as I understand it, I've had multiple people tell me that it's already reducing the impact of implicit bias, so they're already happy with what they're seeing.

So what is implicit bias, first of all? It's important to understand that implicit bias and explicit bias are two different things.

Implicit bias is stuff that you're not conscious of; you're not aware of it, and it's hard, probably impossible, for you to control right now, on demand. You might be able to alter it by exposing yourself to different situations and changing what we in machine learning call priors, so changing your experiences.

So maybe if you see more women in senior positions you'll become less implicitly sexist, or something like that.

But anyway, explicit bias is like "I'm going to choose to only work with women" or "I'm going to choose to only work with men," and I know that and I'm conscious of it.

So HR departments are reasonably good at finding people who, hopefully honestly, say "yeah, I'm not going to be racist or sexist or whatever-ist; I'm not going to worry about how long somebody's name is or what the country of origin of their ancestors is." So hopefully HR people can spot the people who sincerely are neutral, at least at the explicit level.

But at the implicit level, there's a lot of evidence that something else might be going on. Again, we don't know for sure if it's implicit or explicit, but what we do know is that in the paper we did in 2017, one of my co-authors, Aylin Caliskan, had this brilliant idea of looking at the resume data. There's this famous study showing that if you send out identical resumes and the only thing you change is African-American names versus European-American names, the people with European-American names get 50 percent more calls in to interview, with nothing else changed.

And so now people are talking about "whitening" their CVs just so they get that chance to interview. Anyway, by the measures we used with the vector spaces, it looks as if the same data that explains implicit bias also explains those choices on the resumes.
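The "vector spaces" mentioned here refer to word embeddings, where each word is a point in a high-dimensional space and bias shows up as a word being systematically closer to one set of attribute words than another. Below is a minimal toy sketch of that kind of association measure; the two-dimensional vectors, the name labels, and the pleasant/unpleasant attribute lists are all invented for illustration, standing in for real learned embeddings.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how aligned two embedding vectors are.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word_vec, pleasant, unpleasant):
    # Mean similarity to "pleasant" attribute vectors minus mean
    # similarity to "unpleasant" ones; positive means the word sits
    # closer to the pleasant side of the space.
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

# Made-up 2-D vectors standing in for learned word embeddings.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([-1.0, 0.1]), np.array([-0.9, 0.2])]
name_a = np.array([0.8, 0.3])    # hypothetical name vector, group A
name_b = np.array([-0.7, 0.4])   # hypothetical name vector, group B

# A gap between the two associations is the embedding-level analogue
# of the differential callback rates in the resume study.
bias_gap = (association(name_a, pleasant, unpleasant)
            - association(name_b, pleasant, unpleasant))
```

With real embeddings trained on ordinary web text, the 2017 paper found exactly this kind of gap between name groups, mirroring the documented human behavior.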

So does that mean people are looking at a resume and explicitly saying, "Oh, I think that's an African-American?" Or were they just going through huge stacks of CVs and some didn't jump out at them in the same way that others did? Because we're pretty sure that once the candidates are all sitting in the room together, at that point things were okay.

And so what the AI is doing for them is helping them pick out the characteristics they're looking for and ignore the other characteristics. So it's helping them detect the things they wanted to detect: when they were sitting in the room with multiple eyes looking at something, they were looking at the right starting place, and then they're finding people who were falling through the cracks.

A lot of people have trouble because there aren't enough good people applying, or they thought there weren't enough good people applying, but actually they were missing people because they didn't see the qualifications buried in the other stuff when they were leafing through these stacks.

So a lot of people are reporting that they have great data or they're very pleased with the results, but that's privately and it's off the record and I can't get anyone to go on the record.

Just recently at Princeton, the Center for Information Technology Policy ran a meeting about AI, and somebody, again under the Chatham House Rule so I can't say who it was, from an organization that's sort of between corporate and, well, a special kind of organization, said they're going to try to do this, so I begged them to document it. I said, look, you're in a different situation, you don't have ordinary customers; please document the results fully and then publish papers about it so we can really see what the outcomes are.

So I hope we'll have that data, but so far I could only tell you that people are saying it really is working.

One of the possible shortfalls of that kind of situation: well, first of all, can you be sure you eliminate bias that way? No; there are all kinds of ways you can accidentally pick up on things.

So even if you don't have gender as a field, the system might recognize gender from the name, for example. There are ways that machine learning picks up on regularities that are illegal to use, and, again, you have to do your own auditing and make sure that isn't happening.

And I guess that's the biggest concern. Of course, any time you scan something and make it digital, you make it amenable to hacking, so you have to be careful about that.

And I guess the biggest thing is: don't believe that just because you've automated part of a process you've made it fair. You have to keep checking; just like anything else, you have to keep working to improve it.

But when you put these things in front of you and write them down, then you have the potential to keep improving.

I guess there's one other thing, which I haven't mentioned, which is that once you've automated the process you do open the door for somebody who is, say, an evil racist to go in and actually tweak things and make it so that you get all one race.

So you need to make sure that there's adequate oversight and regular auditing because people worry about accidentally introducing bias, and that's good, we should worry about that, but we should be really worried about deliberately introducing bias.

That's the thing: people think artificial intelligence is like space aliens, or really almost like the Greco-Roman or Norse gods, like, "Maybe we can pray to them correctly and they'll give us what we want, but they're capricious and we're not sure."

No, it's not like that. It really is something we have an opportunity to try to fix, and it works in systematic ways. But it's important to understand that people are writing it, and that means some people will make mistakes, some people will be sloppy, some people will do what they seriously think is the best thing but it actually isn't legal, and some people will go out of their way to do bad things, because they're just vandals or because that's how they got elected or whatever.

The best hiring manager might just be the computer sitting on your desk. AI and ethics expert Joanna Bryson posits that artificial intelligence can go through all the resumes in a stack and find what employers are missing. Most humans, on the other hand, will rely on biases — whether they are aware of them or not — to get them through the selection process. This is sadly why those with European-sounding names get more calls for interviews than others. AI, she says, can change that.
