A thought experiment from 1867 leads scientists to design a groundbreaking information engine.
- Inspired by an 1867 thought experiment, researchers designed an information engine.
- Their engine is the fastest such contraption yet built, using information as "fuel."
- Applications of the technology may lie in nanotechnology and nanobiology.
Can information become a source of energy? Scientists from Simon Fraser University (SFU) in Canada devised an ultrafast engine that they claim operates on information, potentially opening up a groundbreaking new frontier in humanity's search for new kinds of fuel. The study, published in Proceedings of the National Academy of Sciences (PNAS), describes how the researchers turned the movements of tiny particles into stored energy.
How would an information engine even work? The idea for such a contraption, which at first sounds like it would break the laws of physics, was first proposed by the Scottish scientist James Clerk Maxwell back in 1867. Colorfully named "Maxwell's demon," such a machine would theoretically achieve something akin to perpetual motion. Maxwell's thought experiment was meant to show that it might be possible to violate the second law of thermodynamics, which states that the entropy, or disorder, of an isolated system always increases.
Maxwell imagined a hypothetical creature, a demon, who would control the opening and closing of a tiny door between two gas chambers. The demon's goal would be to send fast-moving gas particles into one compartment and the slow ones to another. By doing this, one compartment would be hotter (containing faster molecules) and one cooler. The demon would essentially create a system with greater order and stored energy than what it started with. Without expending any energy, it would seemingly accomplish a decrease in entropy.
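Maxwell's sorting rule is simple enough to mimic in a few lines of Python. This is only an illustrative toy (the exponential speed distribution, threshold, and particle count are all invented), but it shows how sorting by speed alone splits one gas at a single temperature into a hotter and a colder chamber:

```python
import random
from statistics import mean

def maxwells_demon(n_particles=10_000, threshold=1.0, seed=0):
    """Toy version of the demon's sorting rule.

    Particle speeds are drawn from an exponential distribution as a
    crude stand-in for a thermal distribution; the demon lets fast
    molecules into one chamber and slow ones into the other.
    """
    rng = random.Random(seed)
    speeds = [rng.expovariate(1.0) for _ in range(n_particles)]
    fast_chamber = [s for s in speeds if s >= threshold]
    slow_chamber = [s for s in speeds if s < threshold]
    return mean(fast_chamber), mean(slow_chamber), mean(speeds)

hot_t, cold_t, start_t = maxwells_demon()
# Mean speed is a proxy for temperature: the sorted chambers end up
# hotter and colder than the original gas, with no work done on the gas.
```

Of course, the toy omits the cost of the measurements themselves, which is exactly where Szilard's resolution enters.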
A 1929 paper on Maxwell's demon by the Hungarian physicist Leo Szilard actually showed that the thought experiment would not violate the second law of thermodynamics. Szilard proved that the demon has to expend some amount of energy to figure out whether the molecules are hot or cold.
Over 150 years later, researchers built a system that operates according to the ideas in Maxwell's thought experiment, turning information into "work."
SFU physics professor and senior author John Bechhoefer, who was involved in the experiments, explained in a press statement that their group "wanted to find out how fast an information engine can go and how much energy it can extract, so we made one."
SFU physics professor David Sivak, who led the theorists on the project, said their team made a significant advance in the design of the information engine, having "pushed its capabilities over ten times farther than other similar implementations, thus making it the current best-in-class."
Designing an information engine

Their design is akin to a microscopic particle submerged in water and attached to a spring that is, in turn, connected to a stage that can be moved upward. The researchers, playing the role of Maxwell's demon, observe the particle moving up and down due to thermal motion, then raise the stage whenever the particle randomly bounces upward. If it bounces down, they wait. As PhD student Tushar Saha elaborated, "This ends up lifting the entire system using only information about the particle's position."
Caption: Schematic of the information engine. (A) Ratcheted spring-mass system under gravity. (B) Experimental realization using horizontal optical tweezers in a vertical gravitational field. Feedback operations on the right side in A and B are indicated by the small red "swoosh" arrows. Credit: TK Saha et al., PNAS, 2021.
Of course, a particle is too small to attach to a spring, so the actual set-up utilized an instrument known as an optical trap, which "uses a laser to create a force on the particle that mimics that of the spring and stage." As they repeated the process, without pulling the particle directly, the particle was raised to a "great height," storing up a large amount of gravitational energy, according to the researchers.
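The measure-and-ratchet loop described above can be mimicked in a toy simulation. A minimal sketch, assuming Gaussian thermal fluctuations about the trap center and ignoring the gravitational bias the real experiment must overcome (all parameters here are invented for illustration):

```python
import random

def information_ratchet(n_steps=100_000, noise=1.0, seed=1):
    """Toy version of the measure-and-ratchet feedback rule.

    The particle jiggles around the stage position with Gaussian thermal
    noise. Whenever a measurement finds it above the stage, the stage is
    raised to meet it; otherwise the controller waits. No direct force
    ever pulls the particle upward.
    """
    rng = random.Random(seed)
    stage = 0.0
    for _ in range(n_steps):
        particle = stage + rng.gauss(0.0, noise)  # thermal fluctuation
        if particle > stage:                      # measurement outcome
            stage = particle                      # ratchet: lock in the gain
    return stage

height = information_ratchet()
# The stage climbs steadily even though only upward fluctuations were
# ever "used": the lifting is paid for by the measurements themselves.
```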
PhD student Tushar Saha working on the information ratchet, an experimental apparatus that lifts a heavy microscopic particle using information. Credit: Simon Fraser University
The amount of power this system generates is "comparable to molecular machinery in living cells," with "speeds comparable to fast-swimming bacteria," said postdoctoral fellow Jannik Ehrich.
While applications of this still-developing technology have yet to be fully explored, the researchers see potential uses in nanotechnology and nanobiology. Improving computing speed may be another avenue to pursue, according to the researchers.
Harvard scientists propose how mysterious Fast Radio Bursts from outer space could actually be powering the spacecrafts of an advanced alien civilization.
Two Harvard astronomers published a paper with an imagination-grabbing explanation of Fast Radio Bursts (FRBs), mysterious space signals that were first observed in 2007. These bursts likely come from galaxies billions of light-years away and must carry enormous energy to be visible from such a distance.
The powerful bursts are millisecond-long, and while only 18 of them have been recorded so far, scientists estimate there could be 10,000 FRBs speeding through the cosmos every day. Previous theories proposed their sources to be newborn neutron stars or even nebulas powered by pulsar winds. But no concrete originator of the radio waves has yet been identified. This led astronomers at the Harvard-Smithsonian Center for Astrophysics to theorize that the signals could potentially be coming from a device that someone created.
"Fast radio bursts are exceedingly bright given their short duration and origin at great distances, and we haven't identified a possible natural source with any confidence," said Avi Loeb, theorist at the Harvard-Smithsonian Center for Astrophysics. "An artificial origin is worth contemplating and checking."
And who would make an “artificial” device in distant space? Yes, it’s aliens.
"We examine the possibility that FRBs originate from the activity of extragalactic civilizations,” said the scientists.
What are some of the clues that point to an unnatural creation for these signals? For one, they are way too hot and bright. According to George Dvorsky at Gizmodo, who interviewed the theorists, the beams have a brightness temperature of 10³⁷ degrees. Brightness temperature is a measure of the intensity of microwave radiation coming from a space object.
Other reasons to suspect aliens: the radio bursts repeat, but in a rather unpredictable way, and they are concentrated around a specific frequency. Neither factor is consistent with the neutron star/pulsar explanation of FRBs.
What Loeb and his co-author Manasvi Lingam suggest may be happening is quite ingenious. They think the bursts could actually be energy beams emanating from giant transmitters. Their purpose? To propel spaceships built by advanced alien civilizations to amazing speeds. Imagine space vehicles equipped with light sails that absorb the transmitted radio bursts and zip through the cosmos.
The scientists went as far as figuring out the feasibility of creating such a device and while the technology necessary is not something humans can yet muster, more sophisticated spacefaring beings could make it happen. The transmitter would have to be a solar-powered and water-cooled contraption twice the size of Earth, concluded the astronomers. We are talking 15,000 miles in length. The power this would generate could propel payloads of a million tons, which their statement compares to “20 times the largest cruise ships on Earth.”
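The physics behind those payload figures is radiation pressure: a reflected photon transfers twice its momentum, so a beam of power P pushes a perfect mirror with force F = 2P/c. A back-of-the-envelope sketch, with beam power and payload mass that are illustrative guesses rather than figures from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def sail_force(beam_power_w, reflectivity=1.0):
    """Radiation-pressure force on a sail illuminated by a beam.

    A perfectly reflecting sail receives twice the photon momentum
    flux, so F = 2 * P / c; a partially reflecting one gets less.
    """
    return (1.0 + reflectivity) * beam_power_w / C

# Illustrative numbers only (not from the paper): a hypothetical
# 10^18-watt beam pushing a million-metric-ton payload.
payload_kg = 1.0e9
force_n = sail_force(1.0e18)
accel = force_n / payload_kg  # m/s^2, roughly two-thirds of Earth gravity
```

Even at planetary-scale beam powers, the acceleration is modest, which is why the ship must ride the beam for a long time to approach relativistic speeds.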
"That's big enough to carry living passengers across interstellar or even intergalactic distances," said Lingam.
To an observer on Earth, the transmission of the radio burst would appear as a brief flash due to relativity. The spacecraft would receive the burst of energy through mirrors that gather the sunlight. The resulting acceleration of the ship could approach the speed of light.
While Loeb readily offers that their work is speculative, he does think there is merit in such thinking.
"Science isn't a matter of belief, it's a matter of evidence. Deciding what’s likely ahead of time limits the possibilities. It's worth putting ideas out there and letting the data be the judge,” explained Loeb.
The scientists also suggested that a way to study the idea further would be to focus on repeated FRBs whose origins cannot be attributed to “cataclysmic astrophysical events”.
You can read their proposal, published in Astrophysical Journal Letters, here.
As for light-sail technology, NASA is planning to test what it calls a Near Earth Asteroid Scout, a sunlight-propelled spacecraft, in 2018.
An artist's illustration of a light sail powered by a radio beam (red) generated on the surface of a planet. The leakage from such beams as they sweep across the sky would appear as Fast Radio Bursts (FRBs). Credit: M. Weiss/CfA
Physicists find evidence from just after the Big Bang that supports the controversial holographic universe theory.
An international study claims to have found the first observational evidence that our universe is a hologram.
What is the holographic universe idea? It's not exactly that we are living in some kind of Star Trekky computer simulation. Rather, the idea, first proposed in the 1990s by Leonard Susskind and Gerard 't Hooft, says that all the information in our 3-dimensional reality may actually be encoded on the 2-dimensional surface of its boundaries. It's like watching a 3D show on a 2D television.
"Imagine that everything you see, feel and hear in three dimensions (and your perception of time) in fact emanates from a flat two-dimensional field. The idea is similar to that of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card. However, this time, the entire universe is encoded," explained the study's co-author Professor Kostas Skenderis of Mathematical Sciences at the University of Southampton.
That's still pretty mind-bending.
The new study involved a team of theoretical physicists and astrophysicists from the U.K., Canada, and Italy who studied the cosmic microwave background and found irregularities that make holographic theory a legitimate rival to cosmic inflation, the theory usually invoked to explain these anomalies.
The new analysis was made possible by advances in telescopes and sensing technology that can extract information from the "white noise" of microwaves left over from the early universe, right after the Big Bang.
By studying and mapping data from the Planck space telescope, the team found that the observations were largely consistent with the predictions of holographic theory.
"Holography is a huge leap forward in the way we think about the structure and creation of the universe. Einstein's theory of general relativity explains almost everything large scale in the universe very well, but starts to unravel when examining its origins and mechanisms at quantum level. Scientists have been working for decades to combine Einstein's theory of gravity and quantum theory. Some believe the concept of a holographic universe has the potential to reconcile the two. I hope our research takes us another step towards this," added Professor Skenderis.
A sketch of the timeline of the holographic Universe where time runs from left to right. The holographic phase (far left) is where the image is blurry because space and time haven't been defined yet. After this phase comes to a close, the Universe goes into a geometric phase, which can be described by Einstein's equations. Credit: Paul McFadden
The implications of this study could lead the scientists to improved understanding of how time and space were created.
"When we go into this concept of holography, it's a new way of thinking about things. Even the scientists who worked on this for the past 20 years don't have the right tools or the right language to describe what's going on," said Skenderis. "It's a new paradigm for a physical reality."
The study's lead author, Niayesh Afshordi of the Perimeter Institute and the University of Waterloo, expressed a similarly positive sentiment about their finding:
"I would argue this is the simplest theory of the early universe. And so far, this is as simple as it gets. And it could help explain everything we see," Afshordi said.
You can read the paper by the researchers, from the University of Southampton (UK), University of Waterloo (Canada), Perimeter Institute (Canada), INFN, Lecce (Italy) and the University of Salento (Italy) here in the journal Physical Review Letters.
Cover photo: Satellites, planes and comets transit across the night sky under stars that appear to rotate above Corfe Castle on August 12, 2016 in Corfe Castle, United Kingdom. The Perseids meteor shower occurs every year when the Earth passes through the cloud of debris left by Comet Swift-Tuttle, and appear to radiate from the constellation Perseus in the north eastern sky. (Photo by Dan Kitwood/Getty Images)
Physicist Erik Verlinde's theory successfully predicts the distribution of gravity around 33,000+ galaxies without relying on unobserved "dark matter".
Dutch theoretical physicist Erik Verlinde has been shaking up the physics world with his controversial theory of "emergent gravity," which sees gravity not as a fundamental force but rather as a force that comes into existence as a result of microscopic changes in spacetime's structure. Verlinde came out with this theory in 2010, taking on the laws of Newton and calling gravity "an illusion." In 2016, his follow-up paper argued that there is also no mysterious "dark matter" in existence, which, along with dark energy, is supposed to make up about 95% of the universe but has so far not been detected.
Now a team of Dutch astronomers, led by Margot Brouwer from Leiden Observatory, has tested one aspect of Verlinde’s theory and found that it actually worked!
Brouwer’s team relied on the effect of "gravitational lensing" to test Verlinde’s prediction of gravity distribution around 33,000+ galaxies. Galaxies closer to Earth bend the light coming from galaxies farther away, creating a lens effect. This can be used to establish a galaxy's mass.
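The mass measurement works through the lens geometry. For the simplest case, a point-mass lens, the angular Einstein radius ties the observed deflection to the lens mass. This is the textbook relation rather than the team's actual weak-lensing analysis, and the mass and distances below are invented round numbers:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s

def einstein_radius_rad(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius of a point-mass lens, in radians.

    theta_E = sqrt(4GM/c^2 * (D_s - D_l) / (D_l * D_s)); measuring the
    ring (or arc) size on the sky lets you invert this for the mass M.
    """
    d_ls = d_source_m - d_lens_m
    return math.sqrt(4.0 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))

# Invented round numbers: a ~5e11 solar-mass galaxy lensing a source
# at twice its distance (distances of order a gigaparsec).
theta = einstein_radius_rad(1.0e42, 3.0e25, 6.0e25)
arcsec = math.degrees(theta) * 3600  # an arcsecond-scale deflection
```

In practice the team stacked the tiny distortions of many background galaxies rather than measuring single Einstein rings, but the principle of inferring mass from light bending is the same.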
Normally, at distances up to a hundred times the radius of a galaxy, Einstein’s theory of gravity doesn’t account for the measured strength of the force of gravity; the existence of hypothetical dark matter is invoked to make the numbers work. But Verlinde’s theory predicts how much gravity there should be without relying on dark matter, using only the mass of the visible matter.
Measuring the distribution of gravity using gravitational lensing. Credit: APS/Alan Stonebraker; galaxy images from STScI/AURA, NASA, ESA, and the Hubble Heritage Team
Brouwer used Verlinde’s theory to calculate a prediction for the gravity of 33,613 galaxies and found that it compares well with the numbers from the measurements via gravitational lensing. The scientist cautions, however, that dark matter could still be an explanation for the additional gravitational force but as a free, unobserved parameter. The trouble with "free parameters" is that they can be tweaked to adjust for differences between observations and hypotheses.
"The dark matter model actually fits slightly better with the data than Verlinde’s prediction," Brouwer explained to the New Scientist. "But then if you mathematically factor in the fact that Verlinde’s prediction doesn’t have any free parameters, whereas the dark matter prediction does, then you find Verlinde’s model is actually performing slightly better."
As this test only looks at the validity of Verlinde’s theory in a very specific situation, more work needs to be done to prove its worth more broadly.
"The question now is how the theory develops, and how it can be further tested. But the result of this first test definitely looks interesting," said Brouwer.
The results will be published in the Monthly Notices of the Royal Astronomical Society. You can read the paper online here.
COVER PHOTO: Former Microsoft software developer Charles Simonyi flies during a parabolic flight aboard a zero-gravity simulator, a Russian IL-76 MDK aircraft used for astronauts' training flights in weightlessness, in Star City outside Moscow, 26 February 2007. (Photo credit: MAXIM MARMUR/AFP/Getty Images)
Climate change is a topic that's politically charged rather than scientifically contested. Bill Nye offers tips for how those on the side of science can begin to have meaningful conversations with skeptics.
Danny Miller is at odds with many of his friends; they don’t believe in climate change, but he does. It’s a predicament Bill Nye can lend some guidance on; science skeptics and climate change deniers have been one of his longest uphill battles in the public sphere.
So what is Nye’s advice for having meaningful discussions with climate change deniers and perhaps even bringing them slowly around to see reason? Nye admits that public figures who deny climate change have been alarmingly successful at casting doubt over the credibility of science so, as a starting point, it’s important to choose your language carefully. The word 'theory' has lost its integrity in recent years – it seems like anyone these days can have a theory. "I have a theory it’s raining outside," Nye jokes, with a hint of sadness. So understanding and relaying the real definition of the word to people you don’t see eye to eye with can be a crucial tool.
Most people hear the word "theory" and assume it's an idea or statement in need of proof. A scientist hears the word "theory" and recognizes an explanation that has been rigorously tested and confirmed by evidence. A hypothesis is one thing; that's the first step toward an idea becoming a theory. Only when a hypothesis survives repeated testing does it become a theory. So climate change theory isn't a wishy-washy idea people can choose to believe in or not; it's backed by data, and it is a concrete concept.
Bill Nye's most recent book is Unstoppable: Harnessing Science to Change the World.