The study found that people who spoke the same language tended to be more closely related despite living far apart.
- Studies focusing on European genetics have found a strong correlation between geography and genetic variation.
- In India, by contrast, a new study found a stronger correlation between genetic variation and both language and social structure.
- Understanding social and cultural influences can help expand our knowledge of gene flow through human history.
A new kind of mother tongue<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTU0ODY2MS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYzODQ4MjEyMH0.Ag7iKSgWxyUn6-v3wbIk7ADkxtbyiuUaodlxjRYmDkk/img.jpg?width=980" id="e0037" class="rm-shortcode" data-rm-shortcode-id="0624bd5ae5c2c18e87d89e6549ef3131" data-rm-shortcode-name="rebelmouse-image" data-width="815" data-height="450" />
A map showing the locations of 33 Indian populations alongside plot graphs showing the relations between sociolinguistic groups and genetic structures.
New dimensions for understanding ancestry<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="b2f6780bd878e2434da8e19bff5481d8"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/hu4pjmBTN2Y?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><p>None of this is to say that geography played no part in the ancestral gene flow of India, nor that social and cultural factors didn't influence genotypes across Europe. They most certainly did. That Nature study, for example, discovered genetic clusters in Switzerland that were language-based. And Europe's geographic distribution may have more to do with historical sociopolitical realities than environmental ones.</p><p>The point of both studies, however, is not to tie our genetic history to land or language, but to understand how genes flowed throughout historical societies.</p><p>"It sheds light on how genetics work in our society," Bose said in the same release. "This is the first model that can take into account social, cultural, environmental and linguistic factors that shape the gene flow of populations. It helps us to understand what factors contribute to the genetic puzzle that is India. It disentangles the puzzle."</p><p>With an improved knowledge of historic gene flow, scientists may be able to further biomedical research to better detect rare genetic variants, assess individual risks to certain diseases, and predict which populations may be more or less susceptible to particular drugs. By opening the avenues we use to understand our genetic history, we can hopefully advance such knowledge and understanding.</p>
When we limit the clash of ideas, we ultimately hinder progress for the entire society.
- Pluralism is the idea that different people, traditions, and beliefs not only can coexist together in the same society but also should coexist together because society benefits from the vibrant workshopping of ideas.
- Cancel culture is a threat to a liberal society because it seeks to shape the available information rather than seek truth.
- Practicing toleration for those ideas does not mean merely putting up with them but actually acknowledging the ideas with an open spirit, as Chandran Kukathas, professor at Singapore Management University, says.
Machine learning is a powerful and imperfect tool that should not go unmonitored.
- Harnessing the power and potential of machine learning also brings serious downsides that must be managed.
- When you deploy machine learning, you face the risk that it will be discriminatory, biased, inequitable, exploitative, or opaque.
- In this article, I cover six ways that machine learning threatens social justice and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
Here are six ways machine learning threatens social justice<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDUyMDgxNC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY0MzM0NjgxOH0.zHvEEsYGbNA-lnkq4nss7vwVkZlrKkuKf0XASf7A7Jg/img.jpg?width=980" id="05f07" class="rm-shortcode" data-rm-shortcode-id="a7089b6621166f5a2df77d975f8b9f74" data-rm-shortcode-name="rebelmouse-image" data-width="1000" data-height="563" />
Credit: metamorworks via Shutterstock<p><strong>1) Blatantly discriminatory models</strong> are predictive models that base decisions partly or entirely on a protected class. Protected classes include race, religion, national origin, gender, gender identity, sexual orientation, pregnancy, and disability status. By taking one of these characteristics as an input, the model's outputs – and the decisions driven by the model – are based at least in part on membership in a protected class. Although models rarely do so directly, there is <a href="https://www.youtube.com/watch?v=eSlzy1x6Fy0" target="_blank">precedent</a> and <a href="https://www.youtube.com/watch?v=wfpNN8ASIq4" target="_blank">support</a> for doing so.</p><p>This would mean that a model could explicitly hinder, for example, black defendants for being black. So, imagine sitting across from a person being evaluated for a job, a loan, or even parole. When they ask you how the decision process works, you inform them, "For one thing, our algorithm penalized your score by seven points because you're black." This may sound shocking and sensationalistic, but I'm literally describing what the model would do, mechanically, if race were permitted as a model input.</p><p><strong>2) Machine bias</strong>. Even when protected classes are not provided as a direct model input, we find, in some cases, that model predictions are still inequitable. This is because other variables end up serving as proxies for protected classes. This is <a href="https://coursera.org/share/51350b8fb12a5937bbddc0e53a4f207d" target="_blank" rel="noopener noreferrer">a bit complicated</a>, since it turns out that models that are fair in one sense are unfair in another. 
</p><p>For example, some crime risk models succeed in flagging both black and white defendants with equal precision – each flag tells the same probabilistic story, regardless of race – and yet the models falsely flag black defendants more often than white ones. A crime-risk model called COMPAS, which is sold to law enforcement across the US, falsely flags white defendants at a rate of 23.5% and black defendants at a rate of 44.9%. In other words, black defendants who don't deserve it are <a href="https://coursera.org/share/df6e6ba7108980bb7eeae0ba22123ac1" target="_blank" rel="noopener noreferrer">erroneously flagged almost twice as often</a> as white defendants who don't deserve it.</p><p><strong>3) Inferring sensitive attributes</strong>—predicting pregnancy and beyond. Machine learning predicts sensitive information about individuals, such as sexual orientation, whether they're pregnant, whether they'll quit their job, and whether they're going to die. Researchers have shown that it is possible to <a href="https://youtu.be/aNwvXhcq9hk" target="_blank" rel="noopener noreferrer">predict race based on Facebook likes</a>. These predictive models deliver dynamite.</p><p>In a particularly extraordinary case, officials in China use facial recognition to <a href="https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html" target="_blank" rel="noopener noreferrer">identify and track the Uighurs, a minority ethnic group</a> systematically oppressed by the government. This is the first known case of a government using machine learning to profile by ethnicity. One Chinese start-up valued at more than $1 billion said its software could recognize "sensitive groups of people." Its website said, "If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends alarms" to law enforcement.</p>
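The fairness tension described above, equal precision paired with unequal false positive rates, can be sketched in a few lines of Python. The confusion-matrix counts below are hypothetical, chosen only so that the computed rates roughly mirror the 23.5% and 44.9% figures cited for COMPAS; they are not the actual COMPAS data.

```python
def precision(tp, fp):
    """Of those flagged high-risk, what fraction truly reoffended?"""
    return tp / (tp + fp)

def false_positive_rate(fp, tn):
    """Of those who did NOT reoffend, what fraction was flagged anyway?"""
    return fp / (fp + tn)

# Hypothetical per-group counts: (true positives, false positives, true negatives)
groups = {
    "white": (20, 20, 65),
    "black": (40, 40, 49),
}

for name, (tp, fp, tn) in groups.items():
    print(f"{name}: precision={precision(tp, fp):.3f}, "
          f"false positive rate={false_positive_rate(fp, tn):.3f}")
```

With these counts, both groups get precision 0.500 (every flag tells the same probabilistic story), yet the false positive rate is roughly 0.235 for white defendants and 0.449 for black defendants. Equalizing one metric does not equalize the other, which is exactly why "be fair" is underspecified without saying which fairness measure you mean.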
Recourse: Establish machine learning standards as a form of social activism<p>To address these problems, take on machine learning standardization as a form of social activism. We must establish standards that go beyond nice-sounding yet vague platitudes such as "be fair", "avoid bias", and "ensure accountability". Without being precisely defined, these catchphrases are subjective and do little to guide concrete action. Unfortunately, such broad language is fairly common among the principles released by many companies. By relying on it, companies protect their public image more than they protect the public.<br></p><p>People involved in initiatives to deploy machine learning have a powerful, influential voice. These relatively small numbers of people mold and set the trajectory for systems that automatically dictate the rights and resources that great numbers of consumers and citizens gain access to.</p><p>Famed machine learning leader and educator Andrew Ng drove it home: "AI is a superpower that enables a small team to affect a huge number of people's lives... Make sure the work you do leaves society better off."</p><p>And Allan Sammy, Director, Data Science and Audit Analytics at Canada Post, clarified the level of responsibility: "A decision made by an organization's analytic model is a decision made by that entity's senior management team."</p><p>Implementing ethical data science is as important as ensuring a self-driving car knows when to put on the brakes.</p><p>Establishing well-formed ethical standards for machine learning will be an intensive, ongoing process. For more, <a href="https://youtu.be/ToSj0ZkJHBQ" target="_blank">watch this short video</a>, in which I provide some specifics meant to kick-start the process.</p>
A new study shows how growing up in poverty negatively impacts children neurologically.
- Children in poor neighborhoods exhibit abnormal activation of motivational circuits in their brains.
- The neurological impact increases the likelihood of criminal behavior and substance abuse later in life.
- Researchers suggest focusing on shaping the environment to set up the child for success.
We wouldn't want to live without it, so how can we create art that's durable?
- You cannot kill the arts. This is particularly true of poetry, which thrives in the world of social media because its short form is easy to digest.
- Measuring success in art can be tricky, though. Impact and influence can be felt immediately, so how does art find that everlasting durability?
- Philanthropy can encourage and enable art, and as a result, potentially lengthen its lifespan. If we can find ways to measure art in its own terms, we can effectively give a platform to new voices who complete the cultural picture.