Can we stop a rogue AI by teaching it ethics? That might be easier said than done.
- One way we might prevent AI from going rogue is by teaching our machines ethics so they don't cause problems.
- The question of what we should, or even can, teach computers remains open.
- How we choose the values artificial intelligence follows may be the most important decision of all.
What effect does how we build a machine have on which ethics the machine can follow?

Humans are good at explaining ethical problems and discussing potential solutions. Some of us are very good at teaching entire systems of ethics to other people. However, we tend to do this using language rather than code, and we teach people whose learning capabilities resemble our own, not machines with very different abilities. Shifting from people to machines may introduce real limitations.

Many different methods of machine learning could be applied to ethical theory. The trouble is, they may prove very capable of absorbing one moral stance and utterly incapable of handling another.

Reinforcement learning (RL) is a way to teach a machine to do something by having it maximize a reward signal. Through trial and error, the machine eventually learns how to get as much reward as possible, as efficiently as possible. With its built-in tendency to maximize whatever is defined as good, this approach clearly lends itself to utilitarianism, with its goal of maximizing total happiness, and to other consequentialist ethical systems. How to use it to teach a different ethical system effectively remains unknown.

Alternatively, apprenticeship or imitation learning allows a programmer to give a computer a long list of data, or an exemplar to observe, and lets the machine infer values and preferences from it. Thinkers concerned with the alignment problem often argue that this could teach a machine our preferences and values through action rather than idealized language: show the machine a moral exemplar and tell it to copy what the exemplar does. The idea has more than a few similarities to [virtue ethics](https://bigthink.com/scotty-hendricks/virtue-ethics-the-moral-system-you-have-never-heard-of-but-have-probably-used).

The problem of who counts as a moral exemplar for other people remains unsolved, and who, if anybody, we should have computers emulate is equally up for debate.

At the same time, there are some moral theories that we don't know how to teach to machines at all. Deontological theories, known for creating universal rules to stick to at all times, typically rely on a moral agent applying reason to the situation they find themselves in along particular lines. No machine in existence is currently able to do that. Even the more limited idea of rights, the concept that they must not be violated no matter what any optimization tendency says, might prove challenging to code into a machine, given how specifically and clearly those rights would have to be defined.

After discussing these problems, Gabriel notes:

"In the light of these considerations, it seems possible that the methods we use to build artificial agents may influence the kind of values or principles we are able to encode."

This is a very real problem. After all, if you have a super AI, wouldn't you want to teach it ethics with the learning technique best suited to how you built it? What do you do if that technique can't teach it anything besides utilitarianism very well, but you've decided virtue ethics is the right way to go? The sketches below make these three approaches, and their limits, concrete.
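To make the reinforcement-learning paragraph concrete, here is a minimal sketch of the reward-maximizing loop it describes. Everything in it is invented for illustration: the two actions and the `happiness_reward` function stand in for whatever a real system would define as "good."

```python
import random

# Toy illustration of reinforcement learning's core loop: a one-state
# Q-learning agent that does nothing but maximize a scalar reward.
# Its entire "ethics" is whatever the reward encodes; here, a made-up
# "total happiness" signal, mirroring utilitarianism.

ACTIONS = ["share", "hoard"]

def happiness_reward(action: str) -> float:
    """Invented reward: sharing produces more aggregate happiness."""
    return 1.0 if action == "share" else 0.2

q_values = {a: 0.0 for a in ACTIONS}  # estimated value of each action
alpha, epsilon = 0.1, 0.1             # learning rate, exploration rate

for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    reward = happiness_reward(action)
    # Nudge the estimate toward the observed reward (single state, no lookahead).
    q_values[action] += alpha * (reward - q_values[action])

print(q_values)  # the agent settles on whichever action pays the most reward
```

The structural point: there is no slot in this loop for a rule that holds regardless of reward, which is why RL fits consequentialist systems so naturally and other systems so poorly.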
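The apprenticeship-learning alternative can be sketched just as minimally. The situations, the exemplar's demonstrated actions, and the `imitate` helper below are all hypothetical; the sketch only shows the shape of the idea, inferring behavior from an exemplar's observed choices rather than from a stated principle.

```python
from collections import Counter, defaultdict

# Imitation learning in miniature: no reward function at all. The machine
# is given demonstrations from a moral exemplar and answers "what would
# the exemplar do?" All situations and actions below are invented.

demonstrations = [
    ("stranger drops wallet", "return it"),
    ("stranger drops wallet", "return it"),
    ("friend asks for help", "help"),
    ("colleague takes undeserved credit", "speak up"),
    ("friend asks for help", "help"),
]

# Learn a policy by tallying what the exemplar did in each situation.
observed = defaultdict(Counter)
for situation, action in demonstrations:
    observed[situation][action] += 1

def imitate(situation: str) -> str:
    """Copy the exemplar's most frequent action; fail loudly off-distribution."""
    if situation not in observed:
        raise ValueError(f"no demonstration covers: {situation!r}")
    return observed[situation].most_common(1)[0][0]

print(imitate("stranger drops wallet"))  # -> "return it"
```

The deliberate failure in `imitate` is the crux of the exemplar problem noted above: the machine can only copy what it has actually seen, so everything hinges on whose behavior, and how much of it, we show it.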
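And for contrast, here is a hedged sketch of what "rights as constraints" might look like, with the rights list and per-action labels invented for illustration. The toy makes the difficulty visible: the filter only works because someone has already specified, exhaustively and unambiguously, which rights exist and which actions violate them, which is exactly the part no one knows how to do for open-ended, real-world behavior.

```python
# A deontological veto layer that overrides reward maximization.
# RIGHTS and the "violates" labels are hypothetical; labeling real
# actions this precisely is the unsolved problem described above.

RIGHTS = {"privacy", "bodily_autonomy"}  # would need to be exhaustive

def permitted(action: dict) -> bool:
    """An action is permitted only if it violates no enumerated right."""
    return not (action.get("violates", set()) & RIGHTS)

def choose(candidates: list[dict]) -> dict:
    # Filter first, optimize second: rights trump the reward signal.
    lawful = [a for a in candidates if permitted(a)]
    if not lawful:
        raise RuntimeError("every candidate action violates a right")
    return max(lawful, key=lambda a: a["reward"])

candidates = [
    {"name": "read private messages", "reward": 9.0, "violates": {"privacy"}},
    {"name": "ask for consent first", "reward": 4.0, "violates": set()},
]
print(choose(candidates)["name"])  # -> "ask for consent first"
```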
Miso Robotics has already served up over 12,000 hamburgers.
- Quick-service restaurants are facing growing labor, food, and real estate costs.
- Miso Robotics is working with these restaurants to lower labor costs via automation.
- Miso Robotics has already produced over 60,000 pounds of food with its revolutionary technology.
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some serious downsides that you've got to manage.
- When you deploy machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
Here are six ways machine learning threatens social justice
**1) Blatantly discriminatory models** are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs, and the decisions driven by the model, are based at least in part on membership in a protected class. Although models rarely do so directly, there is [precedent](https://www.youtube.com/watch?v=eSlzy1x6Fy0) and [support](https://www.youtube.com/watch?v=wfpNN8ASIq4) for doing so.

This would mean that a model could explicitly penalize, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only describing, literally, what the model would do, mechanically, if race were permitted as a model input.

**2) Machine bias.** Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies for protected classes. This is [a bit complicated](https://coursera.org/share/51350b8fb12a5937bbddc0e53a4f207d), since it turns out that models that are fair in one sense are unfair in another.

For example, some crime-risk models succeed in flagging both black and white defendants with equal precision (each flag tells the same probabilistic story, regardless of race), and yet the models falsely flag black defendants more often than white ones. COMPAS, a crime-risk model sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at 44.9%. In other words, black defendants who don't deserve to be flagged are [erroneously flagged almost twice as often](https://coursera.org/share/df6e6ba7108980bb7eeae0ba22123ac1) as white defendants who don't deserve it.

**3) Inferring sensitive attributes**: predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to [predict race based on Facebook likes](https://youtu.be/aNwvXhcq9hk). These predictive models deliver dynamite.

In a particularly extraordinary case, officials in China use facial recognition to [identify and track the Uighurs, a minority ethnic group](https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html) systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement. Two short sketches below make the mechanics of points 1 and 2 concrete.
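First, point 1: here is a minimal sketch of what a blatantly discriminatory model mechanically is, a scoring function that takes a protected class as an input. The features and weights are invented, and the -7 mirrors the hypothetical "penalized your score by seven points" example above.

```python
# Invented linear scoring model with a protected class as a direct input.

WEIGHTS = {
    "years_employed": 2.0,
    "prior_defaults": -5.0,
    "is_black": -7.0,  # protected-class membership used directly
}

def score(applicant: dict) -> float:
    """Linear score: protected-class membership shifts the result directly."""
    return sum(WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS)

a = {"years_employed": 3, "prior_defaults": 1, "is_black": 1}
b = {"years_employed": 3, "prior_defaults": 1, "is_black": 0}
print(score(a), score(b))  # otherwise-identical applicants, 7 points apart
```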
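Second, point 2's disparity comes from a specific, checkable measurement: the false positive rate computed separately for each group, among the people who did not reoffend. The records below are invented; with real COMPAS-style data, this per-group computation is what surfaces the 23.5% versus 44.9% gap cited above.

```python
from collections import defaultdict

# Per-group false positive rate: of the people who did NOT reoffend,
# what fraction did the model flag anyway? Records are invented.

records = [
    # (group, flagged_by_model, actually_reoffended)
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", True,  True),  ("black", True,  False), ("black", True,  False),
    ("black", False, False), ("black", True,  True),  ("black", False, False),
]

false_pos = defaultdict(int)  # flagged but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        false_pos[group] += flagged  # bool counts as 0 or 1

for group in sorted(negatives):
    print(f"{group}: false positive rate = {false_pos[group] / negatives[group]:.1%}")
```

A model can pass a per-group precision check and still fail this one, which is why "fair" needs a precise definition before it can guide action.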
Recourse: Establish machine learning standards as a form of social activism

To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair," "avoid bias," and "ensure accountability." Without precise definitions, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. In so doing, companies protect their public image more than they protect the public.

People involved in initiatives to deploy machine learning have a powerful, influential voice. This relatively small group molds and sets the trajectory for systems that automatically dictate which rights and resources great numbers of consumers and citizens gain access to.

Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."

And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."

Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.

Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, [watch this short video](https://youtu.be/ToSj0ZkJHBQ), in which I provide some specifics meant to kick-start the process.
This wide-ranging, 13-course electrical engineering training is your next power move.
- Electrical engineering is the application, design, and study of devices and systems that use electricity.
- Related fields include telecommunications, computer engineering, and electronics.
- Specializations within the field include nanotechnology, electrochemistry, and microwave engineering.
An accident left this musician with one arm. Now he is helping create future tech for others with disabilities.
- Meet the world's first bionic drummer. Rock musician Jason Barnes lost his arm in a terrible accident... and then he became the fastest drummer in the world.
- Barnes teamed up with Gil Weinberg, a Georgia Tech professor and inventor of musical robots; the pair used electromyography and ultrasound technology to break musical records.
- Weinberg and Barnes hope to perfect the technology so that it can one day be used to help other people with disabilities realize that "they're not only not disabled, they're actually super-able."