
Joanna Bryson isn’t a fan of companies that won’t hold themselves responsible for their actions. Too many tech companies, she argues, think that they’re above the law, that they should build whatever they want no matter who it hurts, and that society can pick up the pieces later. This libertarian attitude might be fine when the company is a young startup. But when the company is a massive behemoth like Facebook, with the ability to manipulate 2 billion people worldwide, or even influence an election, then perhaps there should be some oversight. Tech companies, she argues, could create something catastrophic that they can’t take back. And at the dawn of the AI era, when decisions made now could affect the future of mankind, regulation of these tech giants is needed more than ever.

Joanna Bryson: If we're coding AI and we understand that there are moral consequences, does that mean the programmer has to understand them? It isn't only the programmer, although we do really think we need to train programmers to be watching out for these kinds of situations, knowing how to blow the whistle, knowing when to blow the whistle. There is a problem of people being over-reactive, and that costs companies, and I understand that, but we also have sort of a Nuremberg situation where we need everybody to be responsible. Ultimately, though, it isn't just about the programmers: the programmers work within the context of a company, and the companies work within the context of regulation, so it is about the law, it's about society. One of the papers that came out in 2017, from Professor Alan Winfield, was about how, if legislatures can't be expected to keep up with the pace of technological change, what they can keep up with is which professional societies they trust. They already do this in various disciplines; it's just new for AI. You say you have to meet the standards of at least one professional organization, and those organizations set the rules about what's okay. That allows you a kind of loose coupling, because it's wrong for professional organizations to enforce the law, to go after people, to sue them, whatever; that's not what professional organizations are for. What professional organizations are for is keeping up with their own field and setting things like codes of conduct. So that's why you want to bring those two things together, the executive government and the professional organizations, and have the legislature join the two.

This is what I'm working hard to keep in the regulations: that it's always people in organizations who are accountable, so that they're motivated to make sure they can demonstrate they followed due process, both the people who operate the AI and the people who developed it. Because it's like a car: when there's a car accident, normally the driver is at fault; sometimes the person they hit is at fault because they did something completely unpredictable. But sometimes the manufacturer did something wrong with the brakes, and that's a real problem. So we need to be able to show that the manufacturer followed good practice and that it really is the fault of the driver. Or sometimes there really isn't a fact of the matter: it was unforeseeable in the past, but of course now that it's happened, in the future we'll be more careful.
That just happened recently in Europe. There was a case where somebody was driving... it wasn't a totally driverless car, but it had cruise control or something, some extra AI, and unfortunately the driver had a stroke. Now, what happens a lot, and what automobile manufacturers have to look out for, is falling asleep at the wheel, but this guy had a stroke, which is different from falling asleep. So he was still kind of holding on, semi in control, but couldn't see anything, hit a family and killed two of the three family members. The survivor was the father, and he said he wasn't happy only to get money from insurance or whatever the liability was; he wanted to know that whoever had caused this accident was being held accountable. So there was a big lawsuit, and the car manufacturer was able to show that they had followed due process; they had been incredibly careful; it was just a really unlikely thing to happen, to have that kind of stroke where you'd still be holding onto the steering wheel and all these other things. And so it was decided that there was nobody actually at fault. But it could have been different. If Facebook is really moving fast and breaking things, then they're going to have a lot of trouble proving they were doing due diligence when Cambridge Analytica got the data that it got. And so they are very likely to be held to account for anything that's found to have been a negative consequence of that behavior. It's something that computer scientists should want, that tech companies should want: to be able to show that they've done the right thing.

So everybody who's ever written any code at all knows there are two different pressures. One is that you want clean, beautiful code that you can understand, that's well documented and everything, and that's good because if you ever need to change it or extend it, or you want to write some more software, that's great: you can reuse it, other people can reuse it, maybe you'll get famous for your great code. The other is that you want to put stuff out as soon as you can, because you can sell it faster, you don't have time, you want to go do something else, or maybe you don't even understand what you're really doing and you've just barely got it to work. Whatever. Those two pressures are always working against each other, and it's really, really important for all of our benefit, for society's benefit, that we put weight on the side of that nice clean code, so we can answer questions like the one I just mentioned, questions like who's at fault if data goes the wrong place. Right now that's not the way it's been going. It has been completely Wild West, and nobody can tell where the data is going. But with a few lawsuits, with a few big failures, I think everyone is going to be motivated to say: no, I want to show that the AI did the right thing and it was the owner's fault, or that we followed due diligence and this was an unforeseeable consequence. They're going to want to prove that stuff. And like I said, that's going to benefit not just the companies and not just the owners or operators, but all of us, because we want liability to sit with the people who are making the decisions. That's the right way around, and that's why we want to maintain human accountability even though the autonomous system is sometimes taking decisions.

The thing that drives me crazy, that organizations do wrong about AI right now, is when they go in and try to fight regulation by saying you'll lose the magic juice: that deep learning is the only reason we've got AI, and if you regulate us then you can't use it, because nobody knows what the weights are doing in deep learning. This is completely false. First of all, when you audit a company you don't go and try to figure out how the synapses are connected in their accountants' brains; you just look at the accounts. So in the worst case, we can do the same thing with AI that we already do with humans. Now again, this goes back to what I was saying earlier about due diligence: if you have accountants and the accounts are wrong, you can try to put the accountant on the stand, ask why the accounts are wrong, and then try to establish whether they were doing the right thing at the right time. You can't do that with AI systems, but if you want to be able to prove that they were honest mistakes, you can look at things like how the system was trained and how it was being run. There are ways to audit whether the system was built appropriately. So I think we should be out looking for those, because that also allows us to improve our systems. The most important thing is just not believing the whole magic line. One of the companies I heard give the magic line in a regulatory setting, in front of government representatives, was Microsoft. That was in early 2017, and now they've completely reversed that. They've sat down, they've thought about it, and now they say accountability and transparency are absolutely core to what we should be doing with AI. I think Microsoft is making really strong efforts to be the adults in the room right now, which is interesting. Like I said, literally within one year there was that change. So I think everybody should be thinking that AI is not this big exception. Don't look for a way to get out of responsibility.
