Through experiencing time in a nonlinear way, can artificial intelligence provide us more perspective?
- Is Sophia the Robot, of Hanson Robotics, conscious? Not quite, she says. Instead, she reflects the consciousness of humans in the same way the moon reflects the light of the sun.
- While we don't know whether humans possess free will, she advises us to act as if we do, since doing so benefits us.
- So, what can humans learn from robots? Artificial intelligence can view the world more objectively, remaining present while still looking toward both the past and the future.
We encode our biases into everything we create: books, poems, and AI. What does that mean for an increasingly automated future?
- AI isn't "just technology," says Professor Ramesh Srinivasan. We have to bust the myth that AI is neutral and has no biases. We encode our biases into artificial intelligence. That fact will become more apparent as 5G 'smart cities' become a reality.
- Business leaders must develop awareness and ask themselves: What data sets are my technologies learning from, and what values are influencing their development?
- The American public, across every demographic and both sides of the aisle, supports doing something about big technology issues that are creating an unequal future, says Srinivasan. We are at an inflection point, and good AI is possible if tech leaders act on these issues.
A pragmatic approach to fixing an imbalanced system.
- Intentional or not, certain inequalities are inherent in a digital economy that is structured and controlled by a few corporations that don't represent the interests or the demographics of the majority.
- While concern and anger are valid reactions to these inequalities, UCLA professor Ramesh Srinivasan also sees them as an opportunity to take action.
- Srinivasan says that the digital economy can be reshaped to benefit the 99 percent if we protect laborers in the gig economy, get independent journalists involved with the design of algorithmic news systems, support small businesses, and find ways that groups that have been historically discriminated against can be a part of these solutions.
Is there a way for more human-centered algorithms to prevent potentially triggering interactions on social media?
- According to a 2017 study, 71% of people reported feeling better (rediscovery of self and positive emotions) about 11 weeks after a breakup. But social media complicates this healing process.
- Even if you unfriend, block, or unfollow an ex-partner, social media algorithms can still surface upsetting encounters or reminders of the relationship that once was.
- Researchers at the University of Colorado Boulder suggest that a "human-centered approach" to creating algorithms can help systems better understand the complex social relationships we have with people online and prevent potentially upsetting encounters.
Without regulations, implicit bias could shape artificial intelligence into a nightmare for some.
- Implicit biases are feelings and ideas subconsciously attributed to a group or culture based on learned associations and experiences. Everyone has them, but it can be dangerous when those biases are transferred to a powerful technology like AI.
- By keeping the development of artificial intelligence private, we risk building systems that are intrinsically biased against certain groups.
- Governance and regulations are necessary to ensure that artificial intelligence remains as neutral as possible.
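The mechanism behind "biased data in, biased system out" can be made concrete with a minimal sketch. The scenario, group names, and numbers below are all invented for illustration: a toy "model" that simply learns outcome frequencies from skewed historical records will faithfully reproduce whatever disparity those records contain.

```python
# Hypothetical illustration: a frequency-counting "model" trained on
# biased historical data reproduces that bias. All data is invented.
from collections import Counter, defaultdict

# Invented historical hiring records as (group, hired) pairs, encoding
# a disparity between two fictional groups.
records = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def train(data):
    """Learn P(hired | group) by counting outcomes per group."""
    counts = defaultdict(Counter)
    for group, hired in data:
        counts[group][hired] += 1
    return {g: c[True] / sum(c.values()) for g, c in counts.items()}

model = train(records)
print(model)  # mirrors the historical disparity: 0.8 vs. 0.3
```

Nothing in the training step is malicious; the skew comes entirely from the data. This is why the article argues that scrutiny of training data, and regulation of how such systems are built, matters more than the intentions of the developers.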