Eastern traditions have complex views on how karma affects your life.
- Karma is not simple retribution for bad deeds.
- Eastern traditions view karma as part of a cycle of birth and rebirth.
- Actions and intentions can influence karma, which can be both positive and negative.
Thangka Wheel of Life
Credit: Adobe Stock
Tibetan Buddhist Wheel of Life: Samsara, Cyclic Existence

Video: https://www.youtube.com/embed/5m6Vge2JBFs
Can we stop a rogue AI by teaching it ethics? That might be easier said than done.
- One way we might prevent AI from going rogue is by teaching our machines ethics so they don't cause problems.
- The question of what we should, or even can, teach computers remains open.
- How we choose the values an artificial intelligence follows may be the most consequential decision of all.
How does the way we build a machine affect which ethics the machine can follow?

Video: https://www.youtube.com/embed/IHE63fxpHCg

Humans are good at explaining ethical problems and discussing potential solutions. Some of us are very good at teaching entire systems of ethics to other people. However, we tend to do this using language rather than code, and we teach people whose learning capabilities resemble our own, not machines with very different abilities. Shifting from people to machines may introduce some limitations.

Many different methods of machine learning could be applied to ethical theory. The trouble is, a given method may prove very capable of absorbing one moral stance and utterly incapable of handling another.

Reinforcement learning (RL) is a way to teach a machine to do something by having it maximize a reward signal. Through trial and error, the machine eventually learns how to get as much reward as possible efficiently. With its built-in tendency to maximize whatever is defined as good, this approach clearly lends itself to utilitarianism, with its goal of maximizing total happiness, and to other consequentialist ethical systems. How to use it to teach a different ethical system effectively remains an open question.

Alternatively, apprenticeship or imitation learning allows a programmer to give a computer a long list of data, or an exemplar to observe, and lets the machine infer values and preferences from it. Thinkers concerned with the alignment problem often argue that this could teach a machine our preferences and values through action rather than idealized language: we would show the machine a moral exemplar and tell it to copy what the exemplar does. The idea has more than a few similarities to virtue ethics (https://bigthink.com/scotty-hendricks/virtue-ethics-the-moral-system-you-have-never-heard-of-but-have-probably-used).

The problem of who counts as a moral exemplar for other people remains unsolved, and who, if anybody, we should have computers try to emulate is equally up for debate.

At the same time, there are some moral theories that we don't know how to teach machines. Deontological theories, known for creating universal rules to stick to at all times, typically rely on a moral agent applying reason to the situation they find themselves in along particular lines. No machine in existence is currently able to do that. Even the more limited idea of rights, and the concept that they must not be violated no matter what any optimization tendency says, might prove challenging to code into a machine, given how specifically and clearly you would have to define those rights.

After discussing these problems, Gabriel notes:

"In the light of these considerations, it seems possible that the methods we use to build artificial agents may influence the kind of values or principles we are able to encode."

This is a very real problem. After all, if you have a super AI, wouldn't you want to teach it ethics with the learning technique best suited to how you built it? What do you do if that technique can't teach it much of anything besides utilitarianism, but you've decided virtue ethics is the right way to go?
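The consequentialist bias of reinforcement learning described above can be seen in even the simplest RL setup. The following is a minimal sketch, not anything from the article: a hypothetical two-choice scenario where the reward signal stands in for "total happiness," and tabular Q-learning drifts toward whichever action the signal defines as good. All names and reward values here are invented for illustration.

```python
import random

random.seed(0)

def reward(action):
    # Hypothetical reward signal: action 0 helps one person (0.2),
    # action 1 helps five people (1.0). The machine never sees
    # these labels, only the numbers.
    return 0.2 if action == 0 else 1.0

q = [0.0, 0.0]   # value estimate for each action
alpha = 0.1      # learning rate
epsilon = 0.1    # exploration rate

for _ in range(1000):
    # Epsilon-greedy trial and error: mostly exploit the current
    # best estimate, occasionally explore the other action.
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = max(range(2), key=lambda i: q[i])
    # Nudge the estimate toward the observed reward.
    q[a] += alpha * (reward(a) - q[a])

best = max(range(2), key=lambda i: q[i])
print(best)
```

Whatever ethical content we intend, the agent simply converges on the action with the larger number, which is why encoding non-consequentialist constraints (such as inviolable rights) into a reward signal is so hard.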
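Imitation learning can likewise be sketched in a few lines. This is a hypothetical toy, not a real alignment system: the machine observes an exemplar's (situation, action) pairs and infers a policy by copying the exemplar's most frequent choice in each situation. The situations and actions below are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical demonstrations from a moral exemplar.
demonstrations = [
    ("stranger_drops_wallet", "return_it"),
    ("stranger_drops_wallet", "return_it"),
    ("friend_asks_for_help", "help"),
    ("friend_asks_for_help", "help"),
    ("friend_asks_for_help", "decline"),  # exemplars are imperfect
]

# Count how often the exemplar takes each action in each situation.
counts = defaultdict(Counter)
for situation, action in demonstrations:
    counts[situation][action] += 1

# Inferred policy: imitate the exemplar's most common action.
policy = {s: c.most_common(1)[0][0] for s, c in counts.items()}
print(policy["stranger_drops_wallet"])  # return_it
print(policy["friend_asks_for_help"])   # help
```

Note that the sketch inherits every flaw in the demonstrations, which is exactly the worry raised above: the quality of the learned values depends entirely on who we pick as the exemplar.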
In some situations, asking "what if everyone did that?" is a common strategy for judging whether an action is right or wrong.
This space expansionist ideology marked the beginning of what Arendt called "earth alienation."
On Saturday, May 30, 2020, billionaire Elon Musk's SpaceX company launched its first human passengers into orbit from Florida's Kennedy Space Center, opening a door to the commercialization of space.
Researchers say that moral self-licensing occurs "because good deeds make people feel secure in their moral self-regard."
Books about race and anti-racism have dominated bestseller lists in the past few months, bringing to prominence authors including Ibram Kendi, Ijeoma Oluo, Reni Eddo-Lodge, and Robin DiAngelo.