Americans under 40 want major reforms, expanded Supreme Court
Younger Americans support expanding the Supreme Court and serious political reforms, a new poll finds.
22 October, 2020
Credit: Jon Cherry/Getty Images
- Americans under 40 largely favor major political reforms, finds a new survey.
- The poll revealed that most want to expand the Supreme Court, impose term limits, and make it easier to vote.
- Millennials are more liberal and reform-minded than Generation Z.
<p>A new nonpartisan poll of Americans under 40 finds they want serious reforms to the American political system. The survey, conducted by the University of Massachusetts Lowell, showed strong support for expanding the Supreme Court, imposing term limits, and abolishing the Electoral College.</p><p>The researchers gathered opinions from 1,503 people between 18 and 39 years old, representing millennials (born from the early 1980s to the mid-1990s) and the succeeding Generation Z (born from the mid-to-late 1990s to the early 2010s). Overall, nearly all agreed that the political system is broken, with only 24 percent expressing trust in the government. Interestingly, millennials appear more liberal-minded and more eager for reform than the younger cohort.</p><p>A majority of those polled (52 percent) were in favor of upping the number of Supreme Court justices from nine to 13, with 66 percent of millennials and 62 percent of Gen Z'ers in support. Seventy-two percent of all surveyed wanted to do away with lifetime appointments for the justices and impose term limits; 79 percent of millennials and 62 percent of Gen Z approved of this measure. Eighty-four percent of all surveyed wanted Congressional term limits.</p>
<p>Political science professor John Cluverius, associate director of the <a href="https://www.uml.edu/research/public-opinion/" target="_blank">UMass Lowell Center for Public Opinion</a>, which carried out the poll, sees a connection between younger Americans' views and the politics of the run-up to the 2020 election:</p><p style="margin-left: 20px;">"It's no coincidence that as Joe Biden's lead has grown in the polls, the more comfortable Americans are with expanding the size of the Supreme Court," said Cluverius in a <a href="https://www.eurekalert.org/pub_releases/2020-10/uoml-nnp102120.php" target="_blank">press release</a>. "As the chance of Trump holding a second term and appointing more justices dwindles, the opposition to court-packing dwindles as well. Saying Americans are opposed to expanding the court used to be conventional wisdom; now it's a commonly held misconception."</p><p>Americans under 40 also want to eliminate the Electoral College and instead pick presidents by popular vote, with 58 percent overall in favor of the idea, including 64 percent of millennials and 54 percent of Gen Z respondents. Sixty-nine percent of all adults surveyed also like the idea of "no excuse" absentee balloting for any voter. Additionally, 71 percent of millennials would like to see more than two political parties being competitive in the U.S., compared to 61 percent of all respondents and 59 percent of Gen Z participants.</p>
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDU1OTE1OS9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTYyNTc5NDI2NX0.GCiPQgPiUWv7qkONIvergMHcuAiMwrs3ksV2Hkmm0Gg/img.png?width=980" id="f3fb1" class="rm-shortcode" data-rm-shortcode-id="b9e0e653081a68fd9e920e0ae80641f2" data-rm-shortcode-name="rebelmouse-image" data-width="1000" data-height="743" />
<p>On the issues of social justice and the Black Lives Matter movement, the poll found 8 percent fewer respondents overall saying that Black people are treated less fairly than white people, compared with a similar poll from August. Forty-three percent of respondents held that view, with Gen Z'ers leading at 54 percent.</p><p>Joshua Dyck, director of the Center for Public Opinion and associate professor of political science, commented that the ideological divide didn't break down in the expected way:</p><p style="margin-left: 20px;">"What I find most interesting is that it is not always the youngest Americans who espouse the most liberal viewpoints," <a href="https://www.eurekalert.org/pub_releases/2020-10/uoml-nnp102120.php" target="_blank">he shared</a>. "Here we see millennials, the oldest of whom are about to turn 40, as the driving force behind the vision for a more progressive future."</p>
<p>The poll also revealed that the vast majority of Gen Z (84 percent) and millennial (85 percent) respondents are very likely to believe that human activity contributes to climate change. They also think the U.S. government has not done enough to combat it, with 64 percent of Gen Z'ers and 65 percent of millennials holding that view.</p><p>Americans under 40 are also largely in favor of canceling all student loan debt, with 66 percent of Generation Z and 66 percent of millennials in support. In a telling comparison, the older generations are less on board: Gen X'ers (born from the mid-1960s to the early 1980s) are split, with 51 percent favoring the idea, while only 38 percent of Baby Boomers (born 1946-1964) and 31 percent of the Silent Generation (born 1928-1945) back the notion.</p><p>Check out the <a href="https://www.uml.edu/Research/public-opinion/polls/default.aspx" target="_blank">detailed results of the poll</a>.</p>
Six ways machine learning threatens social justice
Machine learning is a powerful and imperfect tool that should not go unmonitored.
15 October, 2020
Credit: Monopoly919 on Adobe Stock
- When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage.
- Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
<p>When you harness the power and potential of machine learning, there are also some drastic downsides that you've got to manage. Deploying machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque. In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.</p><p>When you use machine learning, you aren't just optimizing models and streamlining business. <strong>You're governing.</strong> In essence, the models embody policies that control access to opportunities and resources for many people. They drive consequential decisions as to whom to investigate, incarcerate, set up on a date, or medicate – or to whom to grant a loan, insurance coverage, housing, or a job.</p><p>For the same reason that machine learning is valuable – that it drives operational decisions more effectively – it also wields power in the impact it has on millions of individuals' lives. Threats to social justice arise when that impact is detrimental, when models systematically limit the opportunities of underprivileged or protected groups.</p>
Here are six ways machine learning threatens social justice
<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDUyMDgxNC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY0MzM0NjgxOH0.zHvEEsYGbNA-lnkq4nss7vwVkZlrKkuKf0XASf7A7Jg/img.jpg?width=980" id="05f07" class="rm-shortcode" data-rm-shortcode-id="a7089b6621166f5a2df77d975f8b9f74" data-rm-shortcode-name="rebelmouse-image" data-width="1000" data-height="563" />Credit: metamorworks via Shutterstock
<p><strong>1) Blatantly discriminatory models</strong> are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is <a href="https://www.youtube.com/watch?v=eSlzy1x6Fy0" target="_blank">precedent</a> and <a href="https://www.youtube.com/watch?v=wfpNN8ASIq4" target="_blank">support</a> for doing so.</p><p>This would mean that a model could explicitly penalize, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm literally describing what the model would do, mechanically, if race were permitted as a model input.</p><p><strong>2) Machine bias.</strong> Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies for protected classes. This is <a href="https://coursera.org/share/51350b8fb12a5937bbddc0e53a4f207d" target="_blank" rel="noopener noreferrer">a bit complicated</a>, since it turns out that models that are fair in one sense are unfair in another.</p><p>For example, some crime-risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime-risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5%, and Black defendants at 44.9%. In other words, black defendants who don't deserve it are <a href="https://coursera.org/share/df6e6ba7108980bb7eeae0ba22123ac1" target="_blank" rel="noopener noreferrer">erroneously flagged almost twice as often</a> as white defendants who don't deserve it. (A minimal sketch of this false-positive-rate check appears after this list.)</p><p><strong>3) Inferring sensitive attributes</strong> – predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to <a href="https://youtu.be/aNwvXhcq9hk" target="_blank" rel="noopener noreferrer">predict race based on Facebook likes</a>. These predictive models deliver dynamite.</p><p>In a particularly extraordinary case, officials in China use facial recognition to <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank" rel="noopener noreferrer">identify and track the Uighurs, a minority ethnic group</a> systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people."
Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.</p><p><strong>4) A lack of transparency.</strong> A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet <a href="https://www.youtube.com/watch?v=XDonDVGWhAE&feature=youtu.be" target="_blank">you cannot face your accuser</a>. The predictive models generated by machine learning to drive these weighty decisions are generally kept secret, unavailable for audit, inspection, or interrogation. Such models, inaccessible to the public, perpetuate a lack of due process and a lack of accountability.</p><p>Two ethical standards oppose this shrouding of electronically-assisted decisions: 1) <em>model transparency</em>, the standard that predictive models be accessible, inspectable, and understandable; and 2) <em>the right to explanation</em>, the standard that consequential decisions that are driven or informed by a predictive model are always held up to that standard of transparency. Meeting those standards would mean, for example, that a defendant be told which factors contributed to their crime-risk score – which aspects of their background, circumstances, or past behavior caused them to be penalized. This would give the defendant the opportunity to respond accordingly, establishing context, explanations, or perspective on these factors. (A sketch of such a per-factor breakdown appears after this list.)</p><p><strong>5) Predatory micro-targeting.</strong> Powerlessness begets powerlessness – and that cycle can magnify for consumers when machine learning increases the efficiency of activities designed to maximize profit for companies. Improving the micro-targeting of marketing and the predictive pricing of insurance and credit can magnify the cycle of poverty. For example, highly-targeted ads are more adept than ever at <a href="https://youtu.be/LeZv2RanMRQ" target="_blank">exploiting vulnerable consumers</a> and separating them from their money.</p><p>And insurance pricing can lead to the same result. With insurance, the name of the game is to charge more for those at higher risk. Left unchecked, this process can quickly slip into predatory pricing. For example, a churn model may find that elderly policyholders don't tend to shop around and defect to better offers, so there's less of an incentive to keep their policy premiums in check. And pricing premiums based on other life factors also contributes to a cycle of poverty. For example, individuals with poor credit ratings are charged more for car insurance. In fact, a low credit score can increase your premium more than an at-fault car accident.</p><p><strong>6) The coded gaze.</strong> If a group of people is underrepresented in the data from which the machine learns, the resulting model won't work as well for members of that group. This results in <a href="https://www.youtube.com/watch?v=UG_X_7g63rY" target="_blank" rel="noopener noreferrer">exclusionary experiences and discriminatory practices</a>. 
This phenomenon can occur for both facial image processing and <a href="https://www.pnas.org/content/117/14/7684" target="_blank" rel="noopener noreferrer">speech recognition</a>.</p>
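<p>To make the disparity described in point 2 concrete, here is a minimal sketch of how one could measure false positive rates separately by group. The data, group labels, and function names are made up for illustration – this is not the COMPAS model or its dataset – but the arithmetic is the standard false-positive-rate audit.</p>

```python
# Minimal sketch: false positive rate (FPR) computed per group.
# y_true - 1 if the person actually reoffended, 0 if not (ground truth)
# y_flag - 1 if the model flagged the person as high risk, 0 if not
# group  - group label for each person (illustrative labels "A" and "B")

def false_positive_rate(y_true, y_flag):
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    fp = sum(1 for t, f in zip(y_true, y_flag) if t == 0 and f == 1)
    tn = sum(1 for t, f in zip(y_true, y_flag) if t == 0 and f == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(y_true, y_flag, group):
    """False positive rate computed separately for each group label."""
    rates = {}
    for g in sorted(set(group)):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_flag[i] for i in idx])
    return rates

# Toy data: a model can look even-handed by some measures while its FPRs differ by group.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 1]
y_flag = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(fpr_by_group(y_true, y_flag, group))  # -> {'A': 0.25, 'B': 0.666...}
```

<p>Run against real predictions and outcomes, a gap between groups like the 23.5% versus 44.9% figures cited above is exactly what this kind of audit surfaces.</p>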
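<p>And for point 4, here is a minimal sketch of the kind of per-factor breakdown a "right to explanation" would require, assuming a simple additive scoring model. The factor names, weights, and values below are hypothetical; real risk models are usually far more complex, which is part of why transparency is hard-won.</p>

```python
# Minimal sketch: explain an additive risk score factor by factor.
# The factor names and point weights below are made up for illustration.

weights = {
    "prior_arrests": 3.0,   # points added per prior arrest
    "age_under_25": 5.0,    # points added if the person is under 25
    "employed": -4.0,       # points subtracted if the person is employed
}

def explain_score(person):
    """Return the total score and each factor's signed contribution to it."""
    contributions = {name: w * person.get(name, 0) for name, w in weights.items()}
    return sum(contributions.values()), contributions

score, breakdown = explain_score({"prior_arrests": 2, "age_under_25": 1, "employed": 1})
print(score)      # 7.0
print(breakdown)  # {'prior_arrests': 6.0, 'age_under_25': 5.0, 'employed': -4.0}
```

<p>A breakdown like this is what would let the defendant in the example above respond to the specific factors that raised their score, rather than to an unexplained number.</p>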
Recourse: Establish machine learning standards as a form of social activism
<p>To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair," "avoid bias," and "ensure accountability." Without being precisely defined, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. In doing so, companies protect their public image more than they protect the public.</p><p>People involved in initiatives to deploy machine learning have a powerful, influential voice. This relatively small number of people molds and sets the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens gain access to.</p><p>Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."</p><p>And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."</p><p>Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.</p><p>Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, <a href="https://youtu.be/ToSj0ZkJHBQ" target="_blank">watch this short video</a>, in which I provide some specifics meant to kick-start the process.</p><p><em>Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running <a href="https://www.predictiveanalyticsworld.com/" target="_blank" rel="noopener noreferrer">Predictive Analytics World</a> and the <a href="https://www.deeplearningworld.com/" target="_blank" rel="noopener noreferrer">Deep Learning World</a> conference series and the instructor of the end-to-end, business-oriented Coursera specialization <a href="http://www.machinelearning.courses/" target="_blank" rel="noopener noreferrer">Machine Learning for Everyone</a>. Stay in touch with Eric on Twitter <a href="https://twitter.com/predictanalytic" target="_blank">@predictanalytic</a>.</em></p>
