Erin Meyer explains the keeper test and how it can make or break a team.
- There are numerous strategies for building and maintaining a high-performing team, but unfortunately they are not plug-and-play. What works for some companies will not necessarily work for others. Erin Meyer, co-author of No Rules Rules: Netflix and the Culture of Reinvention, shares one alternative employed by one of the largest tech and media services companies in the world.
- Instead of the 'Rank and Yank' method once used by GE, Meyer explains how Netflix managers use the 'keeper test' to determine if employees are crucial pieces of the larger team and are worth fighting to keep.
- "An individual performance problem is a systemic problem that impacts the entire team," she says. This is a valuable lesson that could determine whether the team fails or whether an organization advances to the next level.
Can we stop a rogue AI by teaching it ethics? That might be easier said than done.
- One way we might prevent AI from going rogue is to teach our machines ethics before they can cause problems.
- The question of what we should, or even can, teach computers remains open.
- How we choose which values artificial intelligence follows might be the most important question of all.
What effect does how we build the machine have on what ethics the machine can follow?

[Video: https://www.youtube.com/embed/IHE63fxpHCg]

Humans are really good at explaining ethical problems and discussing potential solutions. Some of us are very good at teaching entire systems of ethics to other people. However, we tend to do this using language rather than code, and we teach people whose learning capabilities are similar to our own, not machines with very different abilities. Shifting from people to machines may introduce some limitations.

Many different methods of machine learning could be applied to ethical theory. The trouble is, they may prove very capable of absorbing one moral stance and utterly incapable of handling another.

Reinforcement learning (RL) is a way to teach a machine to do something by having it maximize a reward signal. Through trial and error, the machine eventually learns how to get as much reward as possible, as efficiently as possible. With its built-in tendency to maximize whatever is defined as good, this approach clearly lends itself to utilitarianism, with its goal of maximizing total happiness, and to other consequentialist ethical systems. How to use it to effectively teach a different ethical system remains an open question.

Alternatively, apprenticeship or imitation learning allows a programmer to give a computer a long list of data or an exemplar to observe, and lets the machine infer values and preferences from it. Thinkers concerned with the alignment problem often argue that this could teach a machine our preferences and values through action rather than idealized language. It would just require us to show the machine a moral exemplar and tell it to copy what they do. The idea has more than a few similarities to [virtue ethics](https://bigthink.com/scotty-hendricks/virtue-ethics-the-moral-system-you-have-never-heard-of-but-have-probably-used).

The problem of who is a moral exemplar for other people remains unsolved, and who, if anybody, we should have computers try to emulate is equally up for debate.

At the same time, there are some moral theories that we don't know how to teach to machines. Deontological theories, known for creating universal rules to stick to at all times, typically rely on a moral agent applying reason to the situation they find themselves in along particular lines. No machine in existence is currently able to do that. Even the more limited idea of rights, and the concept that they should not be violated no matter what any optimization tendency says, might prove challenging to code into a machine, given how specific and clearly defined those rights would have to be.

After discussing these problems, Gabriel notes:

"In the light of these considerations, it seems possible that the methods we use to build artificial agents may influence the kind of values or principles we are able to encode."

This is a very real problem. After all, if you have a super AI, wouldn't you want to teach it ethics with the learning technique best suited for how you built it? What do you do if that technique can't teach it anything besides utilitarianism very well, but you've decided virtue ethics is the right way to go?
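To make the contrast concrete, here is a minimal sketch in Python, not drawn from the article or Gabriel's paper: a reward-maximizing bandit learner standing in for RL, and a demonstration-copying learner standing in for imitation learning. The action names, reward values, and exemplar demonstrations are all invented for illustration.

```python
import random
from collections import Counter

random.seed(0)

# Toy "ethical" choices; names and reward numbers are invented.
ACTIONS = ["donate", "keep", "invest"]

def welfare_reward(action):
    """A hypothetical consequentialist signal: noisy 'total welfare' produced."""
    base = {"donate": 1.0, "keep": 0.2, "invest": 0.6}[action]
    return base + random.gauss(0, 0.1)

def rl_agent(steps=5000, epsilon=0.1):
    """Epsilon-greedy bandit: settles on whichever action maximizes expected reward."""
    q = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
    n = {a: 0 for a in ACTIONS}    # how often each action has been tried
    for _ in range(steps):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        r = welfare_reward(a)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return max(q, key=q.get)

def imitation_agent(demonstrations):
    """No reward signal at all: the policy copies the exemplar's most frequent action."""
    return Counter(demonstrations).most_common(1)[0][0]

# The RL agent converges on whatever the reward function scores highest...
print("RL agent chooses:", rl_agent())

# ...while the imitator absorbs the exemplar's habits, reward or no reward.
exemplar_demos = ["donate"] * 6 + ["keep"] * 3 + ["invest"]
print("Imitation agent chooses:", imitation_agent(exemplar_demos))
```

The point of the contrast: the RL agent's "ethics" are entirely determined by whoever wrote the reward function, which is why the approach maps so naturally onto consequentialism, while the imitator inherits the exemplar's behavior wholesale, virtues and blind spots alike.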
Although everyone knows that coal-based energy is a thing of the past, declarations about the end of nuclear power somehow refuse to come true.
No other power-generating technology raises as much concern as the nuclear reactor. Because of this, the future of the entire energy sector has, until recently, been determined by its past.
- The International Energy Agency is an intergovernmental organization that advises member nations on issues related to energy and the environment.
- In its annual report, the IEA reported that the cost of solar is dropping more rapidly than previously thought, providing some parts of the world with historically cheap electricity.
- The IEA predicted that, over the next decade, renewables will meet 80 percent of global electricity demand growth, while the demand for oil will peak.
"Interacting" with nature through virtual reality applications had especially strong benefits, according to the study.
- Previous studies have shown that spending time in nature can lead to a variety of mental and physical health benefits.
- The new study involved exposing people to a high-definition nature program through one of three mediums: TV, VR and interactive VR.
- The results suggest that nature programs may be an easy and effective way to give people a "dose" of nature, which may be especially helpful during pandemic lockdowns.
Credit: Yeo et al.

The results showed that watching the nature program under all three conditions lowered negative affect, including emotions like boredom and sadness. But only the group who experienced the program in interactive VR reported a boost in mood and feelings of being more connected to nature.

"Our results show that simply watching nature on TV can help to lift people's mood and combat boredom," lead researcher Nicky Yeo [told](https://www.exeter.ac.uk/news/research/title_821333_en.html) University of Exeter News. "With people around the world facing limited access to outdoor environments because of COVID-19 quarantines, this study suggests that nature programmes might offer an accessible way for populations to benefit from a 'dose' of digital nature."
Helping those without access to nature

"Dose" is probably a keyword: The researchers didn't compare the benefits of experiencing nature via TV or VR to experiencing it in person. But even beyond the pandemic, the findings suggest that experiencing nature via virtual reality could help people improve their mental wellbeing, a tool that could prove especially useful for people who don't live near natural environments.

"Virtual reality could help us to boost the wellbeing of people who can't readily access the natural world, such as those in hospital or in long term care," co-author Mathew White told University of Exeter News. "But it might also help to encourage a deeper connection to nature in healthy populations, a mechanism which can foster more pro-environmental behaviours and prompt people to protect and preserve nature in the real world."
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- In deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
Here are six ways machine learning threatens social justice
Credit: metamorworks via Shutterstock

**1) Blatantly discriminatory models** are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is [precedent](https://www.youtube.com/watch?v=eSlzy1x6Fy0) and [support](https://www.youtube.com/watch?v=wfpNN8ASIq4) for doing so.

This would mean that a model could explicitly hinder, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm only literally describing what the model would do, mechanically, if race were permitted as a model input.

**2) Machine bias.** Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies for protected classes. This is [a bit complicated](https://coursera.org/share/51350b8fb12a5937bbddc0e53a4f207d), since it turns out that models that are fair in one sense are unfair in another.

For example, some crime-risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime-risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at a rate of 44.9%. In other words, black defendants who don't deserve it are [erroneously flagged almost twice as much](https://coursera.org/share/df6e6ba7108980bb7eeae0ba22123ac1) as white defendants who don't deserve it. (The sketch at the end of this section shows how equal precision and unequal false-positive rates can coexist.)

**3) Inferring sensitive attributes:** predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to [predict race based on Facebook likes](https://youtu.be/aNwvXhcq9hk). These predictive models deliver dynamite.

In a particularly extraordinary case, officials in China use facial recognition to [identify and track the Uighurs, a minority ethnic group](https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html) systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.
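To see how a model can be fair by one measure and unfair by another, here is a minimal Python sketch. The confusion-matrix counts are invented for illustration and are not COMPAS data; they are simply chosen so that precision comes out equal while false-positive rates diverge.

```python
# Toy illustration: equal precision across groups can coexist with
# unequal false-positive rates. All counts below are invented.

def rates(tp, fp, fn, tn):
    """Return (precision, false-positive rate) from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of those flagged, the share who reoffended
    fpr = fp / (fp + tn)        # of those who did NOT reoffend, the share flagged
    return precision, fpr

# Hypothetical confusion matrices for two demographic groups.
groups = {
    "group A": dict(tp=30, fp=20, fn=50, tn=180),
    "group B": dict(tp=60, fp=40, fn=10, tn=60),
}

for name, counts in groups.items():
    precision, fpr = rates(**counts)
    print(f"{name}: precision={precision:.2f}, false-positive rate={fpr:.2f}")

# Output:
# group A: precision=0.60, false-positive rate=0.10
# group B: precision=0.60, false-positive rate=0.40
```

Each flag "tells the same probabilistic story" in both groups (equal precision), yet in this toy example the non-reoffending members of group B are flagged four times as often as those of group A. Which of those two numbers counts as "fair" is exactly the dispute at the heart of the COMPAS debate.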