Article from 1912 warns the world about climate change
They were a little optimistic in 1912, but they understood that adding carbon to the atmosphere has side effects.
- An article from 1912 is making headlines for its mention of climate change by means of putting carbon into the atmosphere.
- It is but one of many articles and papers that mentioned human-driven climate change during the early 20th century.
- It reminds us that just because we can see a problem coming doesn't mean we fully understand how quickly it will arrive or how dangerous it will be.
Somehow, there is still a public debate on whether climate change is occurring and how much of it humanity is responsible for. This is despite the agreement of 97% of climate scientists on the matter and decades of research. It gets even stranger when you realize that the idea that humans can change the environment is older than gasoline-powered cars, and that people were discussing the potential effects of climate change before the Titanic sank.
Extra, Extra! Read all about it!
In the March 1912 edition of Popular Mechanics, an article on the balmy year of 1911 and the ability of humans to change the climate includes a single line that has shocked some modern readers. The caption for a photograph of a coal plant explains that:
The furnaces of the world are now burning about 2,000,000,000 tons of coal a year. When this is burned, uniting with oxygen, it adds about 7,000,000,000 tons of carbon dioxide to the atmosphere yearly. This tends to make the air a more effective blanket for the earth and raise its temperature. The effect may be considerable in a few centuries.
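The caption's arithmetic checks out if you treat the coal as essentially pure carbon (a simplification, since real coal is only mostly carbon):

    2,000,000,000 tons C × (44 tons CO₂ / 12 tons C) ≈ 7,300,000,000 tons CO₂

Each ton of carbon picks up about 2.7 tons of oxygen as it burns, which is why the carbon dioxide ends up weighing so much more than the coal that produced it.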
The article goes on to somewhat contradict its own caption, explaining how it is "highly improbable" that there would be enough change in the atmosphere within the next thousand years to have any noticeable effect on global temperatures, though it does argue that the Earth will get warmer before it gets cooler.
Oh, 1912, how innocent you were.
How did they know about climate change way back then?
The Popular Mechanics article was hardly ahead of its time. An article in Nature published in 1882 concluded that increased pollution "will have a marked influence on the climate of the world." This article was widely discussed, and follow-ups to it are credited with popularizing discussion about the effects of pollution on the environment.
A basic understanding of the greenhouse effect goes back to 1824 when Joseph Fourier argued that Earth's atmosphere allowed the planet to be warmer than it would be without one. He even speculated on the potential for humans to alter the climate, though he thought altering the land was more important to the process than changing the composition of the atmosphere. You can see in this quote how he also thought the process would take much longer to notice than it has:
The establishment and progress of human societies, the action of natural forces, can notably change, and in vast regions, the state of the surface, the distribution of water and the great movements of the air. Such effects are able to make to vary, in the course of many centuries, the average degree of heat; because the analytic expressions contain coefficients relating to the state of the surface and which greatly influence the temperature.
His ideas were followed up on by Svante Arrhenius in 1896. Working as a chemist, he was able to determine how much the planet's temperature would rise as carbon dioxide was added to the atmosphere. Reasoning forward from his calculations, he was the first to understand that global warming by means of changing the composition of the atmosphere is possible. He phrased his ideas in what is now known as "Arrhenius' rule."
If the quantity of carbonic acid [carbon dioxide] increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
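In modern textbook notation (the symbols here are the standard ones, not Arrhenius' own), the rule is a logarithmic law:

    ΔT ≈ α · ln(C / C₀)

where C is the carbon dioxide concentration, C₀ is a reference concentration, and α is a sensitivity constant. Each doubling of CO₂ produces roughly the same additional warming, which is exactly what "geometric progression in, arithmetic progression out" means.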
He also didn't think we had much to worry about anytime soon from this phenomenon. He even once told an audience:
We would then have some right to indulge in the pleasant belief that our descendants, albeit after many generations, might live under a milder sky and in less barren surroundings than is our lot at present.
Why were they so off on the timescales? Why did they think this was a good thing?
Clipping from the 1912 article 'Remarkable Weather of 1911: The Effect of the Combustion of Coal on the Climate — What Scientists Predict for the Future' in Popular Mechanics.
Credit: Popular Mechanics
We've put a lot more carbon into the air than these scientists probably thought we would—that alone would throw off their estimates, even if they had possessed the understanding of climate change that we have today.
As for thinking climate change could be good, they weren't alone. The idea that human intervention in the climate was good for us was widespread during the 19th century. Farmers were told that the act of plowing encouraged rainfall in the drier regions of Australia and the United States. In the light of this optimism, the idea that we could warm up the planet probably gave these early climatologists visions of more summer sun and better crop yields rather than nightmares of worsening natural disasters.
The conclusion of the 1912 Popular Mechanics article will leave you a bit sick in the stomach from all the hubris:
It is perhaps somewhat hazardous to make conjectures for centuries yet to come, but in the light of all that is known it is reasonable to conclude that not only has the brain of man contrived machines by means of which he can travel faster than the wind, navigate the ocean depths, fly above the clouds, and do the work of a hundred, but also indirectly by these very things, which change the constitution of the atmosphere, have his activities reached beyond the near at hand and the immediate present and modified the cosmic processes themselves.
It is largely the courageous, enterprising, and ingenious American whose brains are changing the world. Yet even the dull foreigner, who burrows in the earth by the faint gleam of his miner's lamp, not only supports his family and helps to feed the consuming furnaces of modern industry, but by his toil in the dirt and darkness adds to the carbon dioxide in the earth's atmosphere so that men in generations to come shall enjoy milder breezes and live under sunnier skies.
How did other predictions from that era pan out?
An electric discharge photographed in the workshop of Nikola Tesla, United States of America.
Photo from L'Illustration, No 3571, August 5, 1911 via Getty Images.
Some of the predictions people made back then for the far-off 21st century were accurate, though these futurists often claimed that humanity would advance much faster than it actually did, or insisted something would take an eternity when it was achieved only a few years later.
Nikola Tesla predicted the rise of the smartphone back in 1905 when he said:
"Within a few years a simple and inexpensive device, readily carried about, will enable one to receive on land or sea the principal news, to hear a speech, a lecture, a song or play of a musical instrument, conveyed from any other region of the globe. The invention will also meet the crying need for cheap transmission to great distances, more especially over the oceans. The small working capacity of the cables and the excessive cost of messages are now fatal impediments in the dissemination of intelligence which can only be removed by transmission without wires."
He seemed to think we'd have smartphones much sooner than we did, however. This is understandable, since he was trying to invent transatlantic wireless communication at the time; he was just extremely optimistic. On the other hand, some predictions look utterly absurd in retrospect. Great thinkers like Alfred Nobel and Guglielmo Marconi predicted that globalization, advanced weaponry, and international communication would make a general European war impossible—they thought so right up until July 1914.
Even with the help of science, predicting the future can be a tricky business. The science of climate change was beginning to take shape at the dawn of the 20th century, but humanity had yet to fully understand how rapidly the problem would sneak up on us. Given how difficult it is to predict the future, perhaps we should just listen to what scientists are advising us to do today.
Why do we ignore accurate predictions about impending doom?
Emotional intelligence is a skill sought by many employers. Here's how to raise yours.
- Daniel Goleman's 1995 book Emotional Intelligence catapulted the term into widespread use in the business world.
- One study found that EQ (emotional intelligence) is the top predictor of performance and accounts for 58% of success across all job types.
- EQ has been found to increase annual pay by around $29,000 and to be present in 90% of top performers.
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially leading to new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.

BrainEx pumped an experimental solution into the brain that essentially mimics blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers have managed to keep some brains active for up to 36 hours, and they do not yet know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and lead researcher.

As a control, other brains received either a fake solution or no solution at all. None revived brain activity, and all deteriorated as normal.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. This could point the way to new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote a commentary on the study, told National Geographic (https://www.nationalgeographic.com/science/2019/04/pig-brains-partially-revived-what-it-means-for-medicine-death-ethics/).
An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they see signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times (https://www.nytimes.com/2019/04/17/science/brain-dead-pigs.html). "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Youngner, a bioethicist at Case Western Reserve University, told Nature (https://www.nature.com/articles/d41586-019-01216-4#ref-CR2) that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain (https://bigthink.com/philip-perry/after-death-youre-aware-that-youve-died-scientists-claim)?

The dilemma is unprecedented.
Setting new boundaries

Another science fiction story that comes to mind when discussing this research is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their work for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.
Starting and running a business takes more than a good idea and the desire to not have a boss.
- Anyone can start a business and be an entrepreneur, but the reality is that most businesses will fail. Building something successful from the ground up takes hard work, passion, intelligence, and a network of people who are as smart and passionate as you are. It also requires the ability to accept and learn from your failures.
- In this video, entrepreneurs in various industries including 3D printing, fashion, hygiene, capital investments, aerospace, and biotechnology share what they've learned over the years about relationships, setting and attaining goals, growth, and what happens when things don't go according to plan.
- "People who start businesses for the exit, most of them will fail because there's just no true passion behind it," says Miki Agrawal, co-founder of THINX and TUSHY. A key point of Agrawal's advice is that if you can't see yourself in something for 10 years, you shouldn't do it.
After a decade of failed attempts, scientists successfully bounced photons off of a reflector aboard the Lunar Reconnaissance Orbiter, some 240,000 miles from Earth.
- Laser experiments can reveal precisely how far away an object is from Earth.
- For years scientists have been bouncing light off of reflectors on the lunar surface that were installed during the Apollo era, but these reflectors have become less efficient over time.
- The recent success could reveal the cause of the degradation, and also lead to new discoveries about the Moon's evolution.
A close-up photograph of the laser reflecting panel deployed by Apollo 14 astronauts on the Moon in 1971.
NASA

The technology isn't quite new. During the Apollo era, astronauts installed five reflecting panels on the lunar surface, each containing at least 100 mirrors that reflect light back in the direction it came from. By bouncing light off these panels, scientists have been able to learn, for example, that the Moon is drifting away from Earth at a rate of about 1.5 inches per year.

"Now that we've been collecting data for 50 years, we can see trends that we wouldn't have been able to see otherwise," said Erwan Mazarico, a planetary scientist from NASA's Goddard Space Flight Center in Greenbelt, Maryland (https://www.nasa.gov/feature/goddard/2020/laser-beams-reflected-between-earth-and-moon-boost-science). "Laser-ranging science is a long game."
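For a rough sense of the scale involved (the numbers below are back-of-the-envelope figures, not NASA's published measurements), laser ranging is essentially a timing exercise:

    distance = speed of light × round-trip time / 2
             ≈ 299,792 km/s × 2.56 s / 2 ≈ 384,000 km (about 240,000 miles)

A 1.5-inch change in the Earth–Moon distance shifts that round-trip time by only about a quarter of a nanosecond, which is why these experiments demand such precise timing equipment.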
NASA's Lunar Reconnaissance Orbiter (LRO)
NASA

But the long game poses a problem: Over time, the panels on the Moon have become less efficient at bouncing light back to Earth. Some scientists suspect it's because dust, kicked up by micrometeorites, has settled on the surface of the panels, causing them to overheat. If that's the case, scientists need to know for sure.

That's where the recent LRO laser experiment comes in. If scientists find discrepancies between the data sent back by the LRO reflector and the data from the reflectors on the lunar surface, it could reveal what's causing the lunar reflectors to become less efficient. They could then account for these discrepancies in their models.