Why A.I. is a big fat lie
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
A.I. is a big fat lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The much better, precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.
On the other hand, AI does provide some great material for nerdy jokes. So put on your skepticism hat, it's time for an AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree – yeehaw!
3 main points
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which many labeled or historical examples exist for the computer to learn from. This inherently limits machine learning to only a very particular subset of what humans can do – plus a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will rise up of their own volition and eradicate humanity holds no merit.
Neural networks for the win
In the movie "Terminator 2: Judgment Day," the titular robot says, "My CPU is a neural net processor, a learning computer." The neural network of which that famous robot speaks is actually a real kind of machine learning method. A neural network is a way to depict a complex mathematical formula, organized into layers. This formula can be trained to do things like recognize images for self-driving cars. For example, watch several seconds of a neural network performing object recognition.
What you see it doing there is truly amazing. The network's identifying all those objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
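To make "a formula organized into layers" concrete, here's a minimal sketch – purely illustrative, with made-up toy sizes, and definitely not the network from the video – of what a two-layer neural network computes:

```python
# A toy two-layer network: each layer is a weighted sum followed by a
# simple nonlinearity ("squash"). All sizes and numbers here are made up.
import numpy as np

def relu(x):
    return np.maximum(0, x)  # the "squash": zero out negatives

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)  # layer 1: 4 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)  # layer 2: 3 hidden -> 2 output scores

def forward(x):
    h = relu(x @ W1 + b1)  # layer 1
    return h @ W2 + b2     # layer 2: one score per class

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```

Training just means nudging the numbers in W1, b1, W2, and b2 until the outputs line up with labeled examples – more on that in a minute.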
The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible. With deep learning, the network is quite literally deeper – more of those layers. However, even way way back in 1997, the first time I taught the machine learning course, neural networks were already steering self-driving cars, in limited contexts, and we even had our students apply them for face recognition as a homework assignment.
The architecture of a simple neural network with four layers
But the more recent improvements are uncanny, boosting their power for many industrial applications. So, we've even launched a new conference, Deep Learning World, which covers the commercial deployment of deep learning. It runs alongside our long-standing machine learning conference series, Predictive Analytics World.
Supervised machine learning requires labeled data
So, with machines just getting better and better at humanlike tasks, doesn't that mean they're getting smarter and smarter, moving towards human intelligence?
No. It can get really, really good at certain tasks, but only when there's the right data from which to learn. For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled. It needed those examples to learn to recognize those kinds of objects. This is called supervised machine learning: when there is pre-labeled training data. The learning process is guided or "supervised" by the labeled examples. It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time. That's the learning process. And the only way it knows the neural network is improving or "learning" is by testing it on those labeled examples. Without labeled data, it couldn't recognize its own improvements so it wouldn't know to stick with each improvement along the way. Supervised machine learning is the most common form of machine learning.
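If you'd like to see that loop spelled out, here's a minimal sketch with synthetic labeled data. It uses a deliberately crude random tweak-and-test strategy – real training uses gradient descent – but the principle it illustrates is the one above: the labeled examples are the only yardstick for whether a tweak counts as an improvement.

```python
# A minimal sketch of the supervised learning loop: tweak the model,
# test on labeled examples, keep the tweak only if it improved.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic labeled data: 200 points, label = 1 when x0 + x1 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(w):
    # The labeled examples are the only way to know if the model improved.
    return np.mean((X @ w > 0).astype(int) == y)

w = rng.normal(size=2)  # start with a random model
for _ in range(500):
    candidate = w + rng.normal(scale=0.1, size=2)  # tweak the model
    if accuracy(candidate) >= accuracy(w):         # test on labeled examples
        w = candidate                              # stick with the improvement

print(f"accuracy on labeled examples: {accuracy(w):.2%}")
```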
Here's another example. In 2011, IBM's Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy. I'm a big fan. This was by far the most amazing thing I've seen a computer do – more impressive than anything I'd seen during six years of graduate school in natural language understanding research. Here's a 30-second clip of Watson answering three questions.
To be clear, the computer didn't actually hear the spoken questions but rather was fed each question as typed text. But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best "intelligent-like" thing I've ever seen from a computer.
But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with their own correct answer.
At the core, the trick was to turn every question into a yes/no prediction: "Will such-n-such turn out to be the answer to this question?" Yes or no. If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident "yes." For example, "Is 'Abraham Lincoln' the answer to 'Who was the first president?'" No. "Is 'George Washington'?" Yes! Now the machine has its answer and spits it out.
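Here's a minimal sketch of that trick – not IBM's actual Watson code. The function confidence_yes is a hypothetical stand-in for the supervised model Watson trained on those 25,000 labeled questions:

```python
# A sketch of the yes/no trick, for illustration only.
# confidence_yes() stands in for a model trained on labeled Jeopardy questions.
def confidence_yes(question, candidate):
    # Dummy scores; in Watson, a trained model computed these.
    toy_scores = {
        ("Who was the first president?", "Abraham Lincoln"): 0.08,
        ("Who was the first president?", "George Washington"): 0.97,
    }
    return toy_scores.get((question, candidate), 0.01)

def answer(question, candidates):
    # Ask "Is this the answer?" for every candidate;
    # return the one with the most confident "yes."
    return max(candidates, key=lambda c: confidence_yes(question, c))

print(answer("Who was the first president?",
             ["Abraham Lincoln", "George Washington", "Herbert Hoover"]))
```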
Computers that can talk like humans
And there's another area of language use that also has plentiful labeled data: machine translation. Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning. Go try it out – translate a letter to your friend or relative who has a different first language than you. I use it a lot myself.
On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity. There's no known roadmap to fluency for our silicon sisters and brothers. When we humans understand one another, underneath all the words and somewhat logical grammatical rules is "general common sense and reasoning." You can't work with language without that very particular human skill – a broad, unwieldy, amorphous thing we humans amazingly have.
So our hopes and dreams of talking computers are dashed because, unfortunately, there's no labeled data for "talking like a person." You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer. But the general notion of "talking like a human" is not a well-defined problem. Computers can only solve problems that are precisely defined.
So we can't leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001's evil HAL computer, or the friendly, helpful ship computer in Star Trek. You can converse with those machines in English very much like you would with a human. It's easy. Ya just have to be a character in a science fiction movie.
Intelligence is subjective, so A.I. has no real definition
Now, if you think you don't already know enough about AI, you're wrong. There is nothing to know, because it isn't actually a thing. There's literally no meaningful definition whatsoever. AI poses as a field, but it's actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to "smart computer." I must warn you, do not look up "self-referential" in the dictionary. You'll get stuck in an infinite loop.
Many definitions are even more circular than "smart computer," if that's possible. They just flat out use the word "intelligence" itself within the definition of AI, like "intelligence demonstrated by a machine."
If you've assumed there are more subtle shades of meaning at hand, surprise – there aren't. There's no way to resolve how utterly subjective the word "intelligence" is. For computers and engineering, "intelligence" is an arbitrary concept, irrelevant to any precise goal. All attempts to define AI fail to solve its vagueness.
Now, in practice the word is often just – confusingly – used as a synonym for machine learning. But as for AI as its own concept, most proposed definitions are variations of the following three:
1) AI is getting a computer to think like a human. Mimic human cognition. Now, we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.
2) AI is getting a computer to act like a human. Mimic human behavior. 'Cause if it walks like a duck and talks like a duck... But it doesn't and it can't and we're way too sophisticated and complex to fully understand ourselves, let alone translate that understanding into computer code. Besides, fooling people into thinking a computer in a chatroom is actually a human – that's the famous Turing Test for machine intelligence – is an arbitrary accomplishment and it's a moving target as we humans continually become wiser to the trickery used to fool us.
3) AI is getting computers to solve hard problems. Get really good at tasks that seem to require "intelligence" or "human-level" capability, such as driving a car, recognizing human faces, or mastering chess. But now that computers can do them, these tasks don't seem so intelligent after all. Everything a computer does is just mechanical and well understood and in that way mundane. Once the computer can do it, it's no longer so impressive and it loses its charm. A computer scientist named Larry Tesler suggested we define intelligence as "whatever machines haven't done yet." Humorous! A moving-target definition that defines itself out of existence.
By the way, the points in this article also apply to the term "cognitive computing," which is another poorly defined term coined to allege a relationship between technology and human cognition.
The logical fallacy of believing in A.I.'s inevitability
The thing is, "artificial intelligence" itself is a lie. Just evoking that buzzword automatically insinuates that technological advancement is making its way toward the ability to reason like people. To gain humanlike "common sense." That's a powerful brand. But it's an empty promise. Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.
Now, some may respond to me, "Isn't inspired, visionary ambition a good thing? Imagination propels us and unknown horizons beckon us!" Arthur C. Clarke, the author of 2001, made a great point: "Any sufficiently advanced technology is indistinguishable from magic." I agree. However, that does not mean any and all "magic" we can imagine – or include in science fiction – could eventually be achieved by technology. Just 'cause it's in a movie doesn't mean it's gonna happen. AI evangelists often invoke Arthur's point – but they've got the logic reversed. My iPhone seems very "Star Trek" to me, but that's not an argument everything on Star Trek is gonna come true. The fact that creative fiction writers can make shows like Westworld is not at all evidence that stuff like that could happen.
Now, maybe I'm being a buzzkill, but actually I'm not. Let me put it this way. The uniqueness of humans and the real advancements of machine learning are each already more than amazing and exciting enough to keep us entertained. We don't need fairy tales – especially ones that mislead.
Sophia: A.I.'s most notoriously fraudulent publicity stunt
The star of this fairy tale, the leading role of "The Princess" is played by Sophia, a product of Hanson Robotics and AI's most notorious fraudulent publicity stunt. This robot has applied her artificial grace and charm to hoodwink the media. Jimmy Fallon and other interviewers have hosted her – it, I mean have hosted it. But when it "converses," it's all scripts and canned dialogue – misrepresented as spontaneous conversation – and in some contexts, rudimentary chatbot-level responsiveness.
Believe it or not, three fashion magazines have featured Sophia on their cover, and, goofier and sillier still, Saudi Arabia officially granted it citizenship. For real. The first robot citizen. I'm actually a little upset about this, 'cause my microwave and pet rock have also applied for citizenship but still no word.
Sophia is a modern-day Mechanical Turk – which was an 18th century hoax that fooled the likes of Napoleon Bonaparte and Benjamin Franklin into believing they'd just lost a game of chess to a machine. A mannequin would move the chess pieces and the victims wouldn't notice there was actually a small human chess expert hidden inside a cabinet below the chess board.
In a modern-day parallel, Amazon has an online service you use to hire workers to perform many small tasks that require human judgment, like choosing the nicest-looking of several photographs. It's named Amazon Mechanical Turk, and its slogan is "Artificial Artificial Intelligence." Which reminds me of this great vegetarian restaurant with "mock mock duck" on the menu – I swear, it tastes exactly like mock duck. Hey, if it talks like a duck, and it tastes like a duck...
Yes indeed, the very best fake AI is humans. In 1965, when NASA was defending the idea of sending humans to space, they put it this way: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor." I dunno. I think there's some skill in it. ;-)
The myth of dangerous superintelligence
Anyway, as for Sophia, mass hysteria, right? Well, it gets worse: Claims that AI presents an existential threat to the human race. From the most seemingly credible sources, the most elite of tech celebrities, comes a doomsday vision of homicidal robots and killer computers. None other than Bill Gates, Elon Musk, and even the late, great Stephen Hawking have jumped on the "superintelligence singularity" bandwagon. They believe machines will achieve a degree of general competence that empowers the machines to improve their own general competence – so much so that this will then quickly escalate past human intelligence, and do so at the lightning speed of computers, a speed the computers themselves will continue to improve by virtue of their superintelligence, and before you know it you have a system or entity so powerful that the slightest misalignment of objectives could wipe out the human race. Like if we naively commanded it to manufacture as many rubber chickens as possible, it might invent an entire new high-speed industry that can make 40 trillion rubber chickens but that happens to result in the extinction of Homo sapiens as a side effect. Well, at least it would be easier to get tickets for Hamilton.
There are two problems with this theory. First, it's so compellingly dramatic that it's gonna ruin movies. If the best bad guy is always a robot instead of a human, what about Nurse Ratched and Norman Bates? I need my Hannibal Lecter! "The best bad guy," by the way, is an oxymoron. And so is "artificial intelligence." Just sayin'.
But it is true: Robopocalypse is definitely coming. Soon. I'm totally serious, I swear. Michael Bay – of the "Transformers" movies – is currently directing the film, based on the novel of the same name, as we speak. Fasten your gosh darn seatbelts people, 'cause, if "Robopocalypse" isn't in 3D, you were born in the wrong parallel universe.
Oh yeah, and the second problem with the AI doomsday theory is that it's ludicrous. AI is so smart it's gonna kill everyone by accident? Really really stupid superintelligence? That sounds like a contradiction.
To be more precise, the real problem is that the theory presumes that technological advancements move us along a path toward humanlike "thinking" capabilities. But they don't. It's not headed in that direction. I'll come back to that point again in a minute – first, a bit more on how widely this apocalyptic theory has radiated.
A widespread belief in superintelligence
The Kool-Aid these high-tech royalty drink, the go-to book that sets the foundation, is the New York Times bestseller "Superintelligence," by Nick Bostrom, who's a professor of applied ethics at Oxford University. The book mongers the fear and fans the flames, if not igniting the fire in the first place for many people. It explores how we might "make an intelligence explosion survivable." The Guardian newspaper ran a headline, "Artificial intelligence: 'We're like children playing with a bomb'," and Newsweek: "Artificial Intelligence Is Coming, and It Could Wipe Us Out," both headlines obediently quoting Bostrom himself.
Bill Gates "highly recommends" the book, Elon Musk said AI is "vastly more risky than North Korea" – as Fortune Magazine repeated in a headline – and, quoting Stephen Hawking, the BBC ran a headline, "'AI could spell end of the human race'."
In a TED Talk that's been viewed 5 million times (across platforms), the bestselling author and podcast intellectual Sam Harris states with supreme confidence, "At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves."
Both he and Bostrom show the audience an intelligence spectrum during their TED Talks – here's the one by Bostrom:
What happens when our computers get smarter than we are? | Nick Bostrom
You can see as we move along the path from left to right we pass a mouse, a chimp, a village idiot, and then the very smart theoretical physicist Ed Witten. He's relatively close to the idiot, because even an idiot human is much smarter than a chimp, relatively speaking. You can see the arrow just above the spectrum showing that "AI" progresses in that same direction, along to the right. At the very rightmost position is Bostrom himself, which is either just an accident of photography, or proof that he himself is an AI robot.
Oops, that was the wrong clip – uh, that was Dr. Frankenstein, but, ya know, same scenario.
A falsely conceived "spectrum of intelligence"
Anyway, that falsely conceived intelligence spectrum is the problem. I've read the book and many of the interviews and watched the talks, and pretty much all the believers intrinsically build on an erroneous presumption that "smartness" or "intelligence" falls more or less along a single, one-dimensional spectrum. They presume that the more adept machines become at more and more challenging tasks, the higher they will rank on this scale, eventually surpassing humans.
But machine learning has us marching along a different path. We're moving fast, and we'll likely go very far, but we're going in a different direction, only tangentially related to human capabilities.
The trick is to take a moment to think about this difference. Our own personal experience of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from us beneath a veil of conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us toward in any meaningful way.
Thinking abstractly often feels uncomplicated. We draw visuals in our mind, like a not-to-scale map of a city we're navigating, or a "space" of products that two large companies are competing to sell, with each company dominating in some areas but not in others... or, when thinking about AI, the mistaken vision that increasingly adept capabilities – both intellectual and computational – all fall along the same, somewhat narrow path.
Now, Bostrom rightly emphasizes that we should not anthropomorphize what intelligent machines may be like in the future. It's not human, so it's hard to speculate on the specifics, and perhaps it will seem more like a space alien's intelligence. But what Bostrom and his followers aren't seeing is that, since they believe technology advances along a spectrum that includes and then transcends human cognition, the spectrum itself as they've conceived it is anthropomorphic. It has humanlike qualities built in. Now, your common sense reasoning may seem to you like a "natural stage" of any sort of intellectual development, but that's a very human-centric perspective. Your common sense is intricate and very, very particular. It's far beyond our grasp – for anyone – to formally define a "spectrum of intelligence" that includes human cognition on it. Our brains are spectacularly multi-faceted and adept, in a very arcane way.
Machines progress along a different spectrum
Machine learning actually does work by defining a kind of spectrum, but only for an extremely limited sort of trajectory – only for tasks that have labeled data, such as identifying objects in images. With labeled data, you can compare and rank various attempts to solve the problem. The computer uses the data to measure how well it does. Like, one neural network might correctly identify 90% of the trucks in the images and then a variation after some improvements might get 95%.
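Here's a worked toy example of that ranking, with made-up labels and predictions – the point is just that the labeled data is what tells you which variant to keep:

```python
# Made-up labels and predictions from two hypothetical model variants.
truth    = ["truck", "car", "truck", "truck", "car", "truck"]
model_v1 = ["truck", "car", "car",   "truck", "car", "car"  ]
model_v2 = ["truck", "car", "truck", "truck", "car", "car"  ]

def accuracy(predictions, labels):
    # Fraction of labeled examples the model got right.
    return sum(p == t for p, t in zip(predictions, labels)) / len(labels)

# The labels are the yardstick: they're what tell us v2 is the keeper.
print(f"v1: {accuracy(model_v1, truth):.0%}  v2: {accuracy(model_v2, truth):.0%}")
```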
Getting better and better at a specific task like that obviously doesn't lead to general common sense reasoning capabilities. We're not on that trajectory, so the fears should be allayed. The machine isn't going to get to a human-like level where it then figures out how to propel itself into superintelligence. No, it's just gonna keep getting better at identifying objects, that's all.
Intelligence isn't a Platonic ideal that exists separately from humans, waiting to be discovered. It's not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That's a ghost story.
It might feel tempting to believe that increased complexity leads to intelligence. After all, computers are incredibly general-purpose – they can basically do any task, if only we can figure out how to program them to do that task. And we're getting them to do more and more complex things. But just because they could do anything doesn't mean they will spontaneously do everything we imagine they might.
No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain "general common sense reasoning." Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word "intelligence" might apply to computers.
Don't sell, buy, or regulate on A.I.
Machines will remain fundamentally under our control. Computer errors will kill – people will die from autonomous vehicles and medical automation – but not on a catastrophic level, unless by the intentional design of human cyber attackers. When a misstep does occur, we take the system offline and fix it.
Now, the aforementioned techno-celebrity believers are true intellectuals and are truly accomplished as entrepreneurs, engineers, and thought leaders in their respective fields. But they aren't machine learning experts. None of them are. When it comes to their AI pontificating, it would truly be better for everyone if they published their thoughts as blockbuster movie scripts rather than earnest futurism.
It's time for the term "AI" to be "terminated." Mean what you say and say what you mean. If you're talking about machine learning, call it machine learning. The buzzword "AI" is doing more harm than good. It may sometimes help with publicity, but to at least the same degree, it misleads. AI isn't a thing. It's vaporware. Don't sell it and don't buy it.
And most importantly, do not regulate on "AI"! Technology greatly needs regulation in certain arenas, for example, to address bias in algorithmic decision-making and the development of autonomous weapons – which often use machine learning – so clarity is absolutely critical in these discussions. Using the imprecise, misleading term "artificial intelligence" is gravely detrimental to the effectiveness and credibility of any initiative that regulates technology. Regulation is already hard enough without muddying the waters.