The birth of childhood: A brief history of the European child
Did the 20th century bring a breakthrough in how children are treated?
It took several thousand years for our culture to realize that a child is not an object. Learning how to treat children as humans continues to this day.
"Nature wants children to be children before they are men," wrote Jean-Jacques Rousseau in the book Emile, or On Education (1762). While Rousseau did not see children as humans, he appealed to parents to look after their offspring. "If we consider childhood itself, is there anything so weak and wretched as a child, anything so utterly at the mercy of those about it, so dependent on their pity, their care, and their affection?" he asked. At a time when children were regularly entrusted to others during adolescence or left in shelters, Rousseau's demands seemed revolutionary. They paved the way for the breakthrough discovery that indeed, a child is also a human being, capable of feelings, having their own needs and, above all, suffering. But the philosopher himself did not take these ideas to heart. Whenever his lover and later wife, Teresa Levasseur, gave birth to a child, Rousseau immediately gave the baby to an orphanage, where just one in a hundred newborns had a chance to live to adulthood.
Double standards in people's approach to children were not unusual in the past. In ancient Greece, no one condemned parents for leaving a baby by the road or in the garbage. Usually, it was torn apart by animals. Less often, a passer-by would take them – not necessarily guided by mercy. After raising the orphan, the 'Good Samaritan' could sell the child at a slave market, recovering the money invested in their maintenance with interest. This kind of practice did not shock, because in the world of ancient Greece a child had the status of private property, and therefore the public and authorities were indifferent to their fate.
The exception was Sparta, but this did not mean anything good for minors. While in other poleis infanticide was left to parents, in Sparta it was managed by the council of Fyli. In Life of Lycurgus, Plutarch wrote about how the child was inspected by the Fyli elders forming the council: "If they found it stout and well made, they gave order for its rearing, and allotted to it one of the nine thousand shares of land above mentioned for its maintenance, but, if they found it puny and ill-shaped, ordered it to be taken to what was called the Apothetae, a sort of chasm under Taygetus; as thinking it neither for the good of the child itself, nor for the public interest, that it should be brought up, if it did not, from the very outset, appear made to be healthy and vigorous." The boys who passed the selection faced a rather short childhood – when they were seven, they were taken to the barracks, where they were trained to be excellent soldiers until they came of age.
Greek standards for dealing with children were modified slightly by the Romans. Until the second century BCE, citizens of the Eternal City followed the custom of placing each newborn baby on the ground right after delivery. If the father picked the baby up, the mother could care for it. If not, the newborn landed in the trash – someone could take them away or wild dogs would consume them. It was not until the end of the republic that this custom was considered barbaric and gradually began to fade. However, the tradition requiring that a young man or woman remain under the absolute authority of their father still applied. The head of the family could even kill his offspring with impunity, although he had to consult the rest of the family beforehand.
When the Greeks and Romans did decide to look after their offspring, they showed them love and attention. In wealthier homes, special emphasis was placed on education and upbringing, so that the descendant "would desire to become an exemplary citizen, who would be able to govern as well as obey orders in accordance with the laws of justice," as Plato explained in The Laws. According to the philosopher, children should be carefully looked after, and parents have the duty to care for their physical and mental development. Plato considered outdoor games combined with reading fairy tales, poetry and listening to music as the best way to achieve this goal. Interestingly, Plato did not approve of corporal punishment as an educational measure.
The great Greek historian and philosopher Plutarch was of a similar opinion. He praised the Roman senator Cato the Elder for helping his wife to bathe their son, and for not shying away from changing the baby. When the boy grew up, the senator spent a lot of time with him, studied literary works with him, and taught him history, as well as horse riding and the use of weapons. Cato also condemned the beating of children, considering it unworthy of a Roman citizen. As prosperity grew, this revolutionary idea became increasingly popular in Rome. The educator Marcus Fabius Quintilianus (Quintilian), in his Institutio Oratoria, described corporal punishment as "humiliating".
Another consequence of the liberalization of customs in the first century CE was taking care of girls' education and gradually equalizing their rights with those of boys. However, only Christians condemned the practice of abandoning newborns. The new religion, which gained followers in the Roman Empire from the third century onwards, required the faithful to care unconditionally for every being endowed with an immortal soul.
This new trend turned out so strong that it survived even the fall of the Empire and the conquest of its lands by the Germanic peoples. Unwanted children began to end up in shelters, eagerly opened by monasteries. Moral pressure and the opportunity to give a child to the monks led to infanticide becoming a marginal phenomenon. Legal provisions prohibiting parents from killing, mutilating and selling children began to emerge. In Poland, this was banned in 1347 by Casimir the Great in his Wiślica Statutes.
However, as Philippe Ariès notes in Centuries of Childhood: A Social History of Family Life: "Childhood was a period of transition which passed quickly, and which was just as quickly forgotten." As few children survived into adulthood, parents usually did not develop deeper emotional ties with their offspring. During the Middle Ages, most European languages did not even know the word 'child'.
Departure from violence
During the Middle Ages, a child became a young adult at the age of eight or nine. According to the canon law of the Catholic Church, a bride had to be at least 12 years old, and a groom, 14. These limits greatly complicated the plans of the most powerful families. Immediately after a child's birth, the father, wanting to increase the resources and prestige of the family, began looking for a daughter-in-law or son-in-law. While the families decided their fate, the children subject to the transaction had no say. When the King of Poland and Hungary, Louis the Hungarian, matched his daughter Jadwiga with Wilhelm Habsburg, she was only four years old. The husband chosen for her was four years older. To avoid conflicts with the church, the contract between the families was called an 'engagement for the future' (in Latin: sponsalia de futuro). The advantage of these arrangements was that if political priorities changed, they were easier to break than a sacramental union. This was the case with Jadwiga's engagement: for the benefit of the Polish raison d'état, at the age of 13 she married Władysław II Jagiełło instead of Habsburg.
Interest in children as independent beings was revived in Europe with the rediscovery of antiquity. Thanks to the writings of ancient philosophers, the fashion for caring about children's upbringing and education returned. Initially, corporal punishment was the main tool of the educational process. Regular beating of pupils was considered so necessary that in monastery schools a custom arose of a spring trip to the birch grove, where the students themselves collected a year's supply of sticks for their teacher.
A change in this way of thinking came with Ignatius of Loyola's Society of Jesus, founded in 1540. The Jesuits used violence only in extraordinary situations, and corporal punishment could only be imposed by a servant, never a teacher. The pan-European network of free schools for young people built by the order enjoyed an excellent reputation. "They were the best teachers of all," the English philosopher Francis Bacon admitted reluctantly. The successes of the order made empiricists aware of the importance of non-violent education. One of the greatest philosophers of the 17th century, John Locke, urged parents to try to stimulate children to learn and behave well, using praise above all other measures.
The aforementioned Rousseau went even further, criticizing all the then-prevailing patterns of treating children. Following the fashion of the day, noble and rich people did not deal with their children themselves, since that was what the plebs did. The newborn was fed by a wet-nurse, and then passed on to grandparents or poor relatives who were paid a salary. The child would return home when they were at least five years old, suddenly losing their loved ones. Later, their upbringing and education was supervised by their strict biological mother. They saw their father sporadically. Instead of love, they received daily lessons in showing respect and obedience. Rousseau condemned all of this. "His accusations and demands shook public opinion, women read them with tears in their eyes. And just like it was once fashionable, among the upper classes, to pass the baby on to the wet-nurse, after Emile it became fashionable for the mother to breastfeed her child," wrote Stanisław Kot in Historia wychowania [The History of Education]. Still, a fashion that sensitized society to the fate of children but had no grounding in law could not change reality.
Shelter and factory
"In many villages and towns, newborn babies were kept for twelve to fifteen days, until there were enough of them. Then they were transported, often in a state of extreme exhaustion, to the shelter," writes Marian Surdacki in Dzieci porzucone w społeczeństwach dawnej Europy i Polski [Children Abandoned in the Societies of Old Europe and Poland]. While the Old Continent elites discovered the humanity of children, less affluent residents began reproducing entirely different ancient patterns on a massive scale. In the 18th century, abandoning unwanted children again became the norm. They usually went to care facilities maintained by local communes. In London, shelters took in around 15,000 children each year. Few managed to survive into adulthood. Across Europe, the number of abandoned children in the 18th century is estimated at around 10 million. Moral condemnation by the Catholic and Protestant churches did not do much.
Paradoxically, the industrial revolution turned out to be more effective, although initially it seemed to have the opposite effect. In Great Britain, peasants migrating to the cities routinely rid themselves of bothersome progeny. London shelters were under siege, and around 120,000 homeless, abandoned children wandered the streets of the metropolis. Although most did not survive a year, those who did required food and clothes. The financing of shelters placed a heavy burden on municipal budgets. "To the parish authorities, encumbered with great masses of unwanted children, the new cotton mills in Lancashire, Derby, and Notts were a godsend," write Barbara and John Lawrence Hammond in The Town Labourer.
At the beginning of the 19th century, English shelters became a source of cheap labour for the emerging factories. Orphans had to earn a living to receive shelter and food. Soon, their peers from poor families met the same fate. "In the manufacturing districts it is common for parents to send their children of both sexes at seven or eight years of age, in winter as well as summer, at six o'clock in the morning, sometimes of course in the dark, and occasionally amidst frost and snow, to enter the manufactories, which are often heated to a high temperature, and contain an atmosphere far from being the most favourable to human life," wrote Robert Owen in 1813. This extraordinary manager of the New Lanark spinning mill built a workers' estate complete with a kindergarten. It offered care, but also taught the children of workers how to read and write.
However, Owen remained a notable exception. Following his appeal, in 1816 the British parliament set up a special commission, which soon established that as many as 20% of workers in the textile industry were under 13 years old. There were also spinning mills where children constituted 70% of the labour force. As a standard, they worked 12 hours a day, and their only day of rest was Sunday. Their supervisors maintained discipline with truncheons. Such daily existence, combined with the tuberculosis epidemic, did not give the young workers a chance to live long. The protests of Owen and his supporters, however, hardly changed anything for many years. "Industry as such is seeking new, less skilled but cheaper, workers. Small children are most welcome," noted the French socialist Eugène Buret two decades later.
Among the documents available in the British National Archives is the report of a government factory inspector from August 1859. It briefly describes the case of Martha Appleton, a 13-year-old worker from a Wigan spinning mill. Exhausted by the unhealthy, inhumane conditions, the girl fainted on the job; her hand was caught in an unguarded machine, and all the fingers on that hand were severed. Since her job required two fast, efficient hands, the factory owner decided the next day that such a 'defective' child would be useless, and dismissed her, the inspector noted.
Where a single man once worked, one now finds several children or women doing similar jobs for poor salaries, warned Eugène Buret. This state of affairs began to alarm an increasing number of people. The activities of the German educator Friedrich Fröbel had a significant impact on this: he visited many cities and gave lectures on returning children to their childhoods, encouraging adults to provide children with care and free education. Fröbel's ideas contrasted dramatically with press reports about the terrible conditions endured by children in factories.
The Prussian government reacted first, and as early as 1839 banned the employment of minors. In France, a similar ban came into force two years later. In Britain, however, Prime Minister Robert Peel had to fight the parliament before peers agreed to adopt the Factory Act in 1844. The new legislation banned children below 13 from working in factories for more than six hours per day. Simultaneously, employers were required to provide child workers with education in factory schools. Soon, European states discovered that their strength was determined by citizens able to work efficiently and fight effectively on the battlefields. Children mutilated at work were completely unfit for military service. At the end of the 19th century, underage workers finally disappeared from European factories.
In defence of the child
"Mamma has been in the habit of whipping and beating me almost every day. She used to whip me with a twisted whip – a rawhide. The whip always left a black and blue mark on my body," 10-year-old Mary Ellen Wilson told a New York court in April 1874. Social activist Etty Wheeler stood in defence of the girl battered by her guardians (her biological parents were dead). When her requests for intervention were repeatedly refused by the police, the courts, and even the mayor of New York, the woman turned to the American Society for the Prevention of Cruelty to Animals (ASPCA) for help. Its president Henry Bergh first agreed with Miss Wheeler that the child was not her guardians' property. Using his experience fighting for animal rights, he began a press and legal battle for little Wilson. The girl's testimony published in the press shocked the public. The court took the child from her guardians, and sentenced her sadistic stepmother to a year of hard labour. Mary Ellen Wilson came under the care of Etty Wheeler. In 1877, her story inspired animal rights activists to establish American Humane, an NGO fighting for the protection of every harmed creature, including children.
In Europe, this idea found more and more supporters. The bourgeoisie, even more than the aristocracy, hardly used corporal punishment, as it met with ever greater condemnation, note Philippe Ariès and Georges Duby in A History of Private Life: From the Fires of Revolution to the Great War. At the same time, the custom of entrusting the care of offspring to strangers fell into oblivion. Towards the end of the 19th century, 'good mothers' began to look after their own babies.
In 1900, Ellen Key's bestselling book The Century of the Child was published. The Swedish teacher urged parents to provide their offspring with love and a sense of security, and to limit themselves to patiently observing how nature takes its course. However, her idealism collided with the earlier vision of Karl Marx and Friedrich Engels, who had postulated that we ought to "replace home education by social". The indoctrination of children was to be handled by schools and youth organizations, whose aim was to prepare young people to fight the conservative generation of their parents for a new world.
Did the 20th century bring a breakthrough in how children are treated? In 1924, the League of Nations adopted a Declaration of the Rights of the Child. The opening preamble stated that "mankind owes to the child the best that it has to give." This is an important postulate, but sadly it is still not implemented in many places around the world.
Translated from the Polish by Joanna Figiel
We explore the history of blood types and how they are classified to find out what makes the Rh-null type important to science and dangerous for those who live with it.
- Fewer than 50 people worldwide have 'golden blood' — or Rh-null.
- Blood is considered Rh-null if it lacks all of the 61 possible antigens in the Rh system.
- It's also very dangerous to live with this blood type, as so few people have it.
Golden blood sounds like the latest in medical quackery. As in, get a golden blood transfusion to balance your tantric midichlorians and receive a free charcoal ice cream cleanse. Don't let the New-Agey moniker throw you. Golden blood is actually the nickname for Rh-null, the world's rarest blood type.
As Mosaic reports, the type is so rare that only about 43 people have been reported to have it worldwide, and until 1961, when it was first identified in an Aboriginal Australian woman, doctors assumed embryos with Rh-null blood would simply die in utero.
But what makes Rh-null so rare, and why is it so dangerous to live with? To answer that, we'll first have to explore why hematologists classify blood types the way they do.
A (brief) bloody history
Our ancestors understood little about blood. Even the most basic of blood knowledge — blood inside the body is good, blood outside is not ideal, too much blood outside is cause for concern — escaped humanity's grasp for an embarrassing number of centuries.
Absent this knowledge, our ancestors devised less-than-scientific theories as to what blood was, theories that varied wildly across time and culture. To pick just one, the physicians of Shakespeare's day believed blood to be one of four bodily fluids or "humors" (the others being black bile, yellow bile, and phlegm).
Handed down from ancient Greek physicians, humorism stated that these bodily fluids determined someone's personality. Blood was considered hot and moist, resulting in a sanguine temperament. The more blood people had in their systems, the more passionate, charismatic, and impulsive they would be. Teenagers were considered to have a natural abundance of blood, and men had more than women.
Humorism led to all sorts of poor medical advice. Most famously, Galen of Pergamum used it as the basis for his prescription of bloodletting. Sporting a "when in doubt, let it out" mentality, Galen declared blood the dominant humor, and bloodletting an excellent way to balance the body. Blood's relation to heat also made it a go-to for fever reduction.
While bloodletting remained common until well into the 19th century, William Harvey's discovery of the circulation of blood in 1628 would put medicine on its path to modern hematology.
Soon after Harvey's discovery, the earliest blood transfusions were attempted, but it wasn't until 1665 that the first successful transfusion was performed by British physician Richard Lower. Lower's operation was between dogs, and his success prompted physicians like Jean-Baptiste Denis to try to transfuse blood from animals to humans, a process called xenotransfusion. The death of human patients ultimately led to the practice being outlawed.
The first successful human-to-human transfusion wouldn't be performed until 1818, when British obstetrician James Blundell managed it to treat postpartum hemorrhage. But even with a proven technique in place, in the following decades many blood-transfusion patients continued to die mysteriously.
Enter Austrian physician Karl Landsteiner. In 1901 he began his work to classify blood groups. Exploring the work of Leonard Landois — the physiologist who showed that when the red blood cells of one animal are introduced to a different animal's, they clump together — Landsteiner thought a similar reaction might occur in intra-human transfusions, which would explain why transfusion success was so spotty. In 1909, he classified the A, B, AB, and O blood groups, and for his work he received the 1930 Nobel Prize for Physiology or Medicine.
What causes blood types?
It took us a while to grasp the intricacies of blood, but today, we know that this life-sustaining substance consists of:
- Red blood cells — cells that carry oxygen and remove carbon dioxide throughout the body;
- White blood cells — immune cells that protect the body against infection and foreign agents;
- Platelets — cells that help blood clot; and
- Plasma — a liquid that carries salts and enzymes.
Each component has a part to play in blood's function, but the red blood cells are responsible for our differing blood types. These cells have proteins* covering their surface called antigens, and the presence or absence of particular antigens determines blood type — type A blood has only A antigens, type B only B, type AB both, and type O neither. Red blood cells sport another antigen called the RhD protein. When it is present, a blood type is said to be positive; when it is absent, it is said to be negative. The typical combinations of A, B, and RhD antigens give us the eight common blood types (A+, A-, B+, B-, AB+, AB-, O+, and O-).
Blood antigen proteins play a variety of cellular roles, but recognizing foreign cells in the blood is the most important for this discussion.
Think of antigens as backstage passes to the bloodstream, while our immune system is the doorman. If the immune system recognizes an antigen, it lets the cell pass. If it does not recognize an antigen, it initiates the body's defense systems and destroys the invader. So, a very aggressive doorman.
While our immune systems are thorough, they are not too bright. If a person with type A blood receives a transfusion of type B blood, the immune system won't recognize the new substance as a life-saving necessity. Instead, it will consider the red blood cells invaders and attack. This is why so many people either grew ill or died during transfusions before Landsteiner's brilliant discovery.
This is also why people with O negative blood are considered "universal donors." Since their red blood cells lack A, B, and RhD antigens, immune systems don't have a way to recognize these cells as foreign and so leave them well enough alone.
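The doorman logic can be captured in a toy model. This is only an illustrative sketch of the ABO/RhD rules described above; real cross-matching considers many more antigen systems, and all names here are ours, not from the article:

```python
# Toy model of ABO/RhD compatibility: a recipient's immune system attacks
# any donor red-cell antigen that the recipient's own red cells lack.

ANTIGENS = {
    "O-": set(),          "O+": {"RhD"},
    "A-": {"A"},          "A+": {"A", "RhD"},
    "B-": {"B"},          "B+": {"B", "RhD"},
    "AB-": {"A", "B"},    "AB+": {"A", "B", "RhD"},
}

def can_donate(donor: str, recipient: str) -> bool:
    """Donor red cells pass the 'doorman' only if every antigen they
    carry is already familiar to the recipient (a subset check)."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

# O- carries no A, B, or RhD antigens, so it passes every doorman:
assert all(can_donate("O-", recipient) for recipient in ANTIGENS)
# Type B red cells would be attacked in a type A recipient:
assert not can_donate("B+", "A+")
```

The subset test is the whole trick: "foreign antigen present" is exactly "donor antigen set not contained in recipient antigen set."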
How is Rh-null the rarest blood type?
Let's return to golden blood. In truth, the eight common blood types are an oversimplification of how blood types actually work. As Smithsonian.com points out, "[e]ach of these eight types can be subdivided into many distinct varieties," resulting in millions of different blood types, each classified on a multitude of antigen combinations.
Here is where things get tricky. The RhD protein mentioned earlier is only one of 61 potential antigens in the Rh system. Blood is considered Rh-null if it lacks all 61 possible antigens in the Rh system. This not only makes it rare; it also means it can be accepted by anyone with a rare blood type within the Rh system.
This is why it is considered "golden blood." It is worth its weight in gold.
As Mosaic reports, golden blood is incredibly important to medicine, but also very dangerous to live with. If an Rh-null carrier needs a blood transfusion, they can find it difficult to locate a donor, and blood is notoriously difficult to transport internationally. Rh-null carriers are encouraged to donate blood as insurance for themselves, but with so few donors spread out over the world and limits on how often they can donate, this can also put an altruistic burden on those select few who agree to donate for others.
Some bloody good questions about blood types
A nurse takes blood samples from a pregnant woman at the North Hospital (Hopital Nord) in Marseille, southern France.
Photo by BERTRAND LANGLOIS / AFP
There remain many mysteries regarding blood types. For example, we still don't know why humans evolved the A and B antigens. Some theories point to these antigens as a byproduct of the diseases various populations encountered throughout history. But we can't say for sure.
In this absence of knowledge, various myths and questions have grown around the concept of blood types in the popular consciousness. Here are some of the most common and their answers.
Do blood types affect personality?
Japan's blood type personality theory is a contemporary resurrection of humorism. The idea states that your blood type directly affects your personality, so type A blood carriers are kind and fastidious, while type B carriers are optimistic and do their own thing. However, a 2003 study sampling 180 men and 180 women found no relationship between blood type and personality.
The theory makes for a fun question on a Cosmopolitan quiz, but that's as accurate as it gets.
Should you alter your diet based on your blood type?
Remember Galen of Pergamum? In addition to bloodletting, he also prescribed certain foods to his patients, depending on which humors needed to be balanced. Wine, for example, was considered a hot and dry drink, so it would be prescribed to treat a cold. In other words, the belief that your diet should complement your blood type is yet another holdover of humorism.
Created by Peter J. D'Adamo, the Blood Type Diet argues that one's diet should match one's blood type. Type A carriers should eat a meat-free diet of whole grains, legumes, fruits, and vegetables; type B carriers should eat green vegetables, certain meats, and low-fat dairy; and so on.
However, a study from the University of Toronto analyzed the data from 1,455 participants and found no evidence to support the theory. While people can lose weight and become healthier on the diet, it probably has more to do with eating all those leafy greens than blood type.
Are there links between blood types and certain diseases?
There is evidence to suggest that different blood types may increase the risk of certain diseases. One analysis suggested that type O blood decreases the risk of having a stroke or heart attack, while AB blood appears to increase it. On the other hand, type O carriers have a greater chance of developing peptic ulcers and skin cancer.
None of this is to say that your blood type will foredoom your medical future. Many factors, such as diet and exercise, influence your health, likely to a greater extent than blood type does.
What is the most common blood type?
In the United States, the most common blood type is O+. Roughly one in three people sports this type of blood. Of the eight well-known blood types, the least common is AB-. Only one in 167 people in the U.S. have it.
Do animals have blood types?
They most certainly do, but they are not the same as ours. This difference is why those 17th-century patients who thought, "Animal blood, now that's the ticket!" ultimately had their tickets punched. In fact, blood types are distinct between species. Unhelpfully, scientists sometimes use the same nomenclature to describe these different types. Cats, for example, have A and B antigens, but these are not the same A and B antigens found in humans.
Interestingly, xenotransfusion is making a comeback. Scientists are working to genetically engineer the blood of pigs to potentially produce human-compatible blood.
Scientists are also looking into creating synthetic blood. If they succeed, they may be able to ease the current blood shortage, while also devising a way to create blood for rare blood type carriers. While this may make golden blood less golden, it would certainly make it easier to live with.

*While antigens are typically proteins, they can be other molecules as well, such as polysaccharides.
Since 1957, the world's space agencies have been polluting the space above us with countless pieces of junk, threatening our technological infrastructure and ability to venture deeper into space.
- Space debris is any human-made object that's currently orbiting Earth.
- When space debris collides with other space debris, it can create thousands more pieces of junk, a dangerous phenomenon known as the Kessler syndrome.
- Radical solutions are being proposed to fix the problem, some of which just might work. (See the video embedded toward the end of the article.)
In 1957, the Soviet Union launched a human-made object into orbit for the first time. It marked the dawn of the Space Age. But when Sputnik 1's batteries died and the aluminum satellite began lifelessly orbiting the planet, it marked the end of another era: the billions of years during which space was pristine.
Today, the space above Earth is the world's "largest garbage dump," according to NASA. It's littered with 8,000 tons of human-made junk, called space debris, left by space agencies over the past six decades.
The U.S. now tracks more than 25,000 pieces of space junk. And that's only the debris that ground-based radar technologies can track. The U.S. Space Surveillance Network estimates there could be more than 170 million pieces of space debris currently orbiting Earth, with the majority being tiny fragments smaller than 1 mm.
Space debris: Trashing a planet
Space debris includes all human-made objects, big and small, that are orbiting Earth but no longer serve a useful function. A brief inventory of known space junk includes: a spatula, a glove, a mirror, a bag filled with astronaut tools, spent rocket stages, stray bolts, paint chips, defunct spacecraft, and about 3,000 dead satellites — all of which are orbiting Earth at speeds of roughly 18,000 m.p.h.
Most space junk is floating in low Earth orbit (LEO), the region of space at altitudes of about 100 to 1,200 miles. LEO is also where most of the world's 3,000 satellites operate, powering our telecommunications, GPS technologies, and military operations.
"Millions of pieces of orbital debris exist in low Earth orbit (LEO) — at least 26,000 the size of a softball or larger that could destroy a satellite on impact; over 500,000 the size of a marble big enough to cause damage to spacecraft or satellites; and over 100 million the size of a grain of salt that could puncture a spacesuit," wrote NASA's Office of Inspector General Office of Audits.
If LEO becomes polluted with too much space junk, it could become treacherous for spacecraft, threatening not only our modern technological infrastructure, but also humanity's ability to venture into space at all.
An outsized problem
Space debris of any size poses grave threats to spacecraft. But tiny, untrackable micro-debris presents an especially dreadful problem: A paint fragment chipped off a spacecraft might not seem dangerous, but it careens through space at nearly 10 times the speed of a bullet, packing enough energy to puncture an astronaut's suit, crack a window of the International Space Station, and potentially destroy satellites.
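The danger scales with kinetic energy, which grows with the square of speed. A back-of-the-envelope comparison in Python (the masses and speeds below are illustrative assumptions, not measured values):

```python
def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """KE = 1/2 * m * v^2"""
    return 0.5 * mass_kg * speed_m_s ** 2

# A 1-gram paint fleck at a 10 km/s closing speed, typical for LEO collisions
fleck = kinetic_energy_joules(0.001, 10_000)   # 50,000 J

# A 4-gram rifle bullet at 1,000 m/s
bullet = kinetic_energy_joules(0.004, 1_000)   # 2,000 J

ratio = fleck / bullet  # the fleck carries roughly 25x the bullet's energy
```

Because energy is quadratic in velocity, a tenfold increase in speed means a hundredfold increase in destructive energy, which is why even flecks of paint matter.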
Impacts with space debris are common. During the Space Shuttle era, NASA replaced an average of one to two shuttle windows per mission "due to hypervelocity impacts (HVIs) from space debris." To be sure, some of these impactors are natural micrometeoroids. But much of the debris is human-made, like the fragment that struck the starboard payload bay radiator during the STS-115 flight in 2006.
"The debris penetrated both walls of the honeycomb structure, and the shock wave from the penetration created a crack in the rear surface of the radiator 6.8 mm long," NASA wrote. "Scanning electron microscopy and energy dispersive X-ray detection analysis of residual material around the hole and in the interior of the radiator shows that the impactor was a small fragment of circuit board material."
The European Space Agency notes that any fragment of space debris larger than a centimeter could shatter a spacecraft into pieces.
Impact chip on the ISS. Credit: ESA
To dodge space junk, the International Space Station (ISS) has to conduct "avoidance maneuvers" a couple of times a year. In 2014, for example, flight controllers decided to raise the ISS's altitude by half a mile to avoid a collision with part of an old European rocket in its orbital path.
NASA has strict guidelines for how it decides to perform these maneuvers.
"Debris avoidance maneuvers are planned when the probability of collision from a conjunction reaches limits set in the space shuttle and space station flight rules," NASA wrote. "If the probability of collision is greater than 1 in 100,000, a maneuver will be conducted if it will not result in significant impact to mission objectives. If it is greater than 1 in 10,000, a maneuver will be conducted unless it will result in additional risk to the crew."
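NASA's two thresholds amount to a simple decision rule, which can be sketched in a few lines of Python (the function and argument names here are my own, not NASA's):

```python
def should_maneuver(p_collision: float,
                    impacts_mission: bool,
                    adds_crew_risk: bool) -> bool:
    """Apply the two NASA flight-rule thresholds quoted above."""
    if p_collision > 1e-4:           # greater than 1 in 10,000:
        return not adds_crew_risk    # maneuver unless it adds risk to the crew
    if p_collision > 1e-5:           # greater than 1 in 100,000:
        return not impacts_mission   # maneuver unless it hurts mission objectives
    return False                     # below both thresholds: stay put
```

Note the asymmetry: at the higher probability, mission objectives no longer excuse inaction; only additional crew risk does.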
These precautionary measures are becoming increasingly necessary. In 2020, the ISS had to move three times to avoid potential collisions. One of the latest close calls came with so little warning that astronauts were instructed to take shelter in the Russian segment of the space station, in order to be closer to their Soyuz MS-16 spacecraft, which serves as an escape pod in case of an emergency.
The Kessler syndrome
The hazards of space debris grow exponentially over time. That's because of a problem that NASA scientist Donald J. Kessler outlined in 1978. The so-called Kessler syndrome states that as space becomes increasingly packed with spacecraft and debris, collisions become more likely. And because each collision would create more debris, it could trigger a chain reaction of collisions — potentially to the point where near-Earth space becomes a shrapnel field through which safe travel is impossible.
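The compounding logic can be caricatured in a few lines of Python: assume collisions occur at a rate proportional to the square of the object count, and that each collision spawns new fragments. This is a toy model with invented constants, not an orbital-dynamics simulation:

```python
def debris_growth(n0: float, rate: float, fragments: int, steps: int) -> list:
    """Toy Kessler cascade: collision count ~ n^2; each collision adds debris."""
    history = [n0]
    n = n0
    for _ in range(steps):
        collisions = rate * n * n    # a more crowded orbit collides more often
        n += collisions * fragments  # every collision spawns new fragments
        history.append(n)
    return history

counts = debris_growth(n0=10_000, rate=1e-9, fragments=2_000, steps=10)
```

With these made-up numbers the cascade starts slowly, but because the collision term is quadratic in the object count, each step adds more debris than the step before it: growth compounds.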
The Kessler syndrome may already be playing out. Perhaps it began with the first known case of a spacecraft being severely damaged by artificial space debris, which occurred in 1996 when the French spy satellite Cerise was struck by a piece of an old European Ariane rocket. The collision tore off a 13-foot segment of the satellite.
The next major space debris incident occurred in 2007 when China conducted an anti-satellite missile test in which the nation destroyed one of its own weather satellites, triggering international criticism and creating more than 3,000 pieces of trackable space debris, most of which was still in orbit ten years after the explosion.
Then, in 2009, an unexpected collision between communications satellites — the active Iridium 33 and the defunct Russian Cosmos-2251 — produced at least 2,000 large fragments of space debris and as many as 200,000 smaller pieces, according to NASA. About half of all space debris currently orbiting Earth came from the Iridium-Cosmos collision and China's missile test.
There's more. Russia's BLITS satellite was spun out of its orbital path in 2013 after being struck by a piece of space debris suspected to have come from China's 2007 missile test; the European Space Agency's Copernicus Sentinel-1A satellite was struck by a tiny particle in 2016; and a window of the ISS was hit by a small fragment that same year.
As nations and private companies plan to send more satellites into orbit, collisions and impacts could soon become more common.
The promise and peril of satellite mega-constellations
Space organizations have recently begun launching satellites into low Earth orbit at an unprecedented pace. The goal is to create "mega-constellations" of satellites that provide high-quality internet access to virtually all parts of the planet.
Internet-providing satellites have existed for years, but they're typically expensive and provide slower service than land-based internet infrastructure. That's mainly because many of these satellites sit in distant geostationary orbit, so a signal takes a relatively long time to travel between the satellite and the user.
China and companies like SpaceX, OneWeb, and Amazon aim to solve this problem by launching thousands of satellites into lower orbits in order to reduce signal latency, or the time it takes for the signal to travel to and from the satellite. But some space experts worry satellite mega-constellations could create more space debris.
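The altitude difference alone explains most of the latency gap, since radio signals travel at the speed of light. A quick best-case calculation (the altitudes are nominal figures, and real links add routing and processing overhead on top of this physical floor):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(altitude_km: float) -> float:
    """Minimum up-and-down signal time for a satellite directly overhead."""
    return 2 * altitude_km * 1000 / C * 1000

geo = round_trip_ms(35_786)  # geostationary orbit: ~239 ms before any routing
leo = round_trip_ms(550)     # a typical low-Earth-orbit altitude: under 4 ms
```

Even before any network overhead, a geostationary hop costs roughly a quarter of a second per round trip, which is why mega-constellations target orbits dozens of times lower.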
"We face entirely new challenges as hundreds of satellites are launched every month now — more than we used to launch in a year," Thomas Schildknecht of the International Astronomical Union said at a European Space Agency conference in April. "The mega-constellations are producing huge risks of collisions. We need more stringent rules for traffic management in space and international mechanisms to ensure enforcement of the rules."
A 2017 study funded by the European Space Agency found that the deployment of satellite mega-constellations into low Earth orbit could increase the number of catastrophic collisions by 50 percent. Still, it remains unclear whether sending more satellites into space will necessarily cause more collisions.
SpaceX, for example, claims that Starlink satellites aren't at significant risk of collision because they're equipped with automated collision-avoidance propulsion systems. However, this system seemed to fail in 2019 when a Starlink satellite had a close call with a European science satellite named Aeolus. The company later said it had fixed the bug.
A batch of 60 Starlink test satellites stacked atop a Falcon 9 rocket. Credit: SpaceX
Currently, there are no strict international rules governing the deployment and management of satellite mega-constellations. But there are some international efforts to curb space debris risks.
The most concerted effort is the Inter-Agency Space Debris Coordination Committee (IADC), a forum that comprises 13 of the world's space agencies, including those of the U.S., Russia, China, and Japan. The committee aims "to exchange information on space debris research activities between member space agencies, to facilitate opportunities for cooperation in space debris research, to review the progress of ongoing cooperative activities, and to identify debris mitigation options."
The IADC's Space Debris Mitigation Guidelines list three broad goals:
1. Preventing on-orbit break-ups
2. Removing spacecraft from the densely populated orbit regions when they reach the end of their mission
3. Limiting the objects released during normal operations
But even though the world's space agencies recognize the gravity of the space debris problem, they're reluctant to act because of an incentives-based dilemma.
Space debris: A classic tragedy of the commons
Space debris is everyone's problem, but no one entity is obligated to solve it. It's a tragedy of the commons — an economic scenario in which individuals with access to a shared and scarce resource (space) act in their own best interest (spend the least amount of money). Left unchecked, the shared resource is vulnerable to depletion or corruption.
For example, the U.S. by itself could develop a novel method for removing space debris, which, if successful, would benefit all organizations with assets in space. But the odds of this happening are slim because of a game-theoretical dilemma.
As a 2018 study on the strategic dynamics of debris removal put it: "[In space debris removal] each stakeholder has an incentive to delay its actions and wait for others to respond. This makes the space debris removal setting an interesting strategic dilemma. As all actors share the same environment, actions by one have a potential immediate and future impact on all others. This gives rise to a social dilemma in which the benefits of individual investment are shared by all while the costs are not. This encourages free-riders, who reap the benefits without paying the costs. However, if all involved parties reason this way, the resulting inaction may prove to be far worse for all involved. This is known in the game theory literature as the tragedy of the commons."
Similar to trying to curb climate change, there's no clear answer on how to best incentivize nations to mitigate space debris. (For what it's worth, the game theoretical model in the 2018 study found that a centralized solution — e.g., one where a single actor makes decisions on mitigating space debris, perhaps on behalf of a multinational coalition — is less costly than a decentralized solution.)
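The free-rider logic above can be made concrete with a toy two-player payoff model in Python. The benefit and cost figures are arbitrary assumptions, chosen only so that cleanup is worth more to the group than it costs, yet costs any single payer more than it returns to them:

```python
def payoff(i_invest: bool, other_invests: bool,
           benefit: float = 3.0, cost: float = 4.0) -> float:
    """Each cleanup investment benefits BOTH players; only the payer bears the cost."""
    cleanups = int(i_invest) + int(other_invests)
    return benefit * cleanups - cost * int(i_invest)

# investing alone: 3 - 4 = -1   |  free-riding on the other player: 3
# mutual cleanup:  6 - 4 =  2   |  mutual inaction:                 0
```

Not investing is each player's dominant strategy, yet mutual inaction (0 each) leaves both worse off than mutual cleanup (2 each), which is exactly the tragedy of the commons.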
Although space organizations have been slow to act, many have been exploring ways to remove space junk from orbit and prevent new debris from forming.
Cleaning up space debris
Space organizations have proposed and experimented with many ways to remove debris from space. Although the techniques vary, most agree on the strategy: get rid of the big stuff first.
That's because collisions involving large objects would create lots of new debris. So, removing big debris first would simultaneously clean up low Earth orbit and slow down the phenomenon of cascading collisions described by the Kessler syndrome.
To clean up low Earth orbit, space organizations have proposed using:
- Electrodynamic tethers: In 2017, the Japanese Aerospace Exploration Agency attempted to remove space debris by outfitting a cargo ship with an electrodynamic tether — essentially a fishing net made of stainless steel and aluminum. The craft then tried to "catch" space debris with the aim of dragging it into lower orbit, where it would eventually crash to Earth. The experiment failed.
- Ultra-thin nets: NASA's Innovative Advanced Concepts program has funded research for a project that would deploy extremely thin nets designed to wrap around space debris and drag them down to Earth's atmosphere.
- "Laser brooms": Since the 1990s, space researchers have proposed using ground-based lasers to strategically heat one side of a piece of space debris, which would change its orbit so that it re-enters Earth's atmosphere sooner. Because the laser systems would be based on Earth, this strategy could prove to be relatively affordable.
- Drag sails: As a relatively passive way to accelerate the de-orbit of space junk, NASA and other space organizations have been exploring the viability of attaching sails to space junk that would help guide debris back to Earth. These sails could either be packed within new satellites, to be deployed once the satellites are no longer useful, or attached to existing space junk.
Illustration of Brane Craft Phase II, which would use thin nets to capture space debris. Credit: Siegfried Janson via NASA
But perhaps one of the most promising solutions for space debris is the ESA-funded ClearSpace-1 mission. Set to launch in 2025, ClearSpace-1 intends to be the first mission that successfully removes space debris from orbit. The goal is to launch a satellite into orbit and rendezvous with the upper stage of Europe's Vega launcher, which was left in space after a 2013 flight.
ClearSpace-1 satellite using its robotic arm to capture space debris. Credit: ClearSpace-1
Once the satellite meets up with the debris, it will try to capture the junk with a robotic arm and then perform a controlled atmospheric reentry. The task will be challenging, in part because space junk tumbles as it flies above Earth, meaning the satellite will have to match its movements in order to safely capture it.
Freethink recently spoke to the ClearSpace-1 team to get a better understanding of the mission and its challenges.
Video: Catching the Most Dangerous Thing in Space. Credit: Freethink via YouTube
But not all space debris removal strategies center on technology. A 2020 paper published in PNAS argued that imposing taxes on each satellite in orbit would be the most effective way to clean up space. Called "orbital use fees," the plan would charge space organizations an annual fee of roughly $235,000 for each satellite that's in orbit. The fee would, in theory, incentivize nations and companies to declutter space over time.
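At that headline rate, the arithmetic is simple. A minimal sketch (the PNAS proposal actually phases the fee in over time, so a flat per-satellite rate is a simplifying assumption):

```python
FEE_PER_SATELLITE_USD = 235_000  # headline annual orbital-use fee from the study

def annual_fee_usd(satellites_in_orbit: int) -> int:
    """Annual orbital-use fee owed by an operator, at a flat per-satellite rate."""
    return satellites_in_orbit * FEE_PER_SATELLITE_USD

# a hypothetical 1,000-satellite constellation would owe $235,000,000 per year
```

At constellation scale the fee quickly becomes material, which is the point: it forces operators to weigh the collision risk each additional satellite imposes on everyone else.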
The main hurdle of orbital-use fees is getting all of the world's space organizations to agree to such a plan. If they do, it could help eliminate the tragedy of the commons aspect of space debris and potentially quadruple the value of the space industry by 2040.
"The costly buildup of debris and satellites in low-Earth orbit is fundamentally a problem of incentives — satellite operators currently lack the incentives to factor into their launch decisions the collision risks their satellites impose on other operators," the researchers wrote. "Our analysis suggests that correcting these incentives, via an OUF, could have substantial economic benefits to the satellite industry, and failing to do so could have substantial and escalating economic costs."
No matter the solution, cleaning up space debris will be a complex and expensive challenge that requires a coordinated, international effort. If the global community wants to maintain modern technological infrastructure and venture deeper into space, conducting business as usual isn't an option.
"Imagine how dangerous sailing the high seas would be if all the ships ever lost in history were still drifting on top of the water," Jan Wörner, European Space Agency (ESA) director general, said in a statement. "That is the current situation in orbit, and it cannot be allowed to continue."
It uses radio waves to pinpoint items, even when they're hidden from view.
"Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.
The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.
But radio waves can.
For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."
RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist. The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.
"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."
The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.
With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.
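That handoff from coarse RF localization to fine visual guidance can be sketched as a distance-based blend of the two position estimates. This is an illustrative toy, not the actual RF-Grasp controller (the paper integrates the streams into the robot's decision making rather than using a fixed weighting; the names and threshold here are my own):

```python
def fuse_position(rf_est, vision_est, dist_to_target, handoff_dist=0.3):
    """Blend RF and vision 3-D estimates; vision dominates as the arm closes in."""
    # weight on vision rises from 0 (far away) to 1 (at the target)
    w_vision = max(0.0, min(1.0, 1.0 - dist_to_target / handoff_dist))
    return tuple(w_vision * v + (1.0 - w_vision) * r
                 for r, v in zip(rf_est, vision_est))

# far from the target (1 m): trust the wall-penetrating RF estimate entirely
far = fuse_position((1.0, 2.0, 0.5), (1.1, 2.1, 0.4), dist_to_target=1.0)

# at the target (0 m): trust the finer-grained vision estimate entirely
near = fuse_position((1.0, 2.0, 0.5), (1.1, 2.1, 0.4), dist_to_target=0.0)
```

The design mirrors the "hear the siren, then turn to look" sequence Adib describes: RF supplies the coarse bearing through occlusions, and vision takes over once line of sight is established.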
RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."
RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without the need to manipulate the item, expose its barcode, then scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.
Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).