Conflicting ways of measuring the Hubble constant, which quantifies the rate of cosmic expansion, have profound implications for the future of cosmology.
- The Hubble constant is used to estimate the rate of expansion of the universe.
- There are two different ways to calculate its value, but they give different results.
- The difference may give physicists an opening to find new cosmic laws, but there is huge uncertainty about which path to take in finding them.
There's something wrong with the universe. Okay, it's not the universe that's the problem; it's our understanding of the universe. The problem lies with cosmology—the branch of science that studies cosmic evolution—and it's only getting worse. But that may, or may not, turn out to be a good thing.
Talk to an astronomer or a physicist about the state of the art in understanding the universe and they'll tell you we've entered the "Precision Age" of cosmology. The data on cosmic evolution have gotten so good that we know the key parameters – things like the universe's age and average density – to a few decimal places. That's a pretty impressive achievement.
One of the most important of these cosmic parameters is what's known as the Hubble constant (cosmologists write it as H0). Modern cosmology tells us the universe has been expanding since its beginning in the Big Bang. The Hubble constant specifies the rate of that expansion. It's also related to the age of the universe. Larger values of H0 mean a younger universe. Smaller values of H0 mean an older universe.
Back when Edwin Hubble first discovered that the universe was expanding, his crude data gave H0 = 500 (we'll ignore the units). This value was so large that it implied a universe younger than the Sun or the Earth. Better measurements soon gave much lower values of H0, resolving this conflict. But conflicts over measured values of H0 didn't go away. A conflict between different ways of measuring H0 is now making big news in cosmology, and no one is sure what the right next step is.
More constants, more problems
There are basically two modern ways to measure the Hubble constant. The first is based on looking at what cosmologists call the "late" universe. Astronomers make direct measurements of how fast distant objects are moving away from us (i.e., their redshift). There are two parts to these observations: first, astronomers need an accurate measurement of an object's distance; then they need an accurate measurement of its redshift. Using supernovae as "standard candles" for getting distances to faraway galaxies, this late-universe method gives a value of the Hubble constant of H0 = 74.03.
The other method relies on data from the "early" universe, i.e., shortly after the Big Bang. Microwave radiation emitted by matter roughly 380,000 years after the cosmic beginning provides astronomers with a rich source of early-universe measurements. The best data on this cosmic microwave background come from the Planck satellite, launched back in 2009. And the best analysis of the Planck data yields H0 = 67.40, which is clearly not the same value as the supernova result. Hence the two methods produce conflicting results. Not knowing which value is right, we can't pin down other properties, such as the exact age of the universe.
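To get a rough feel for what those two numbers imply, you can compute the "Hubble time," 1/H0, a crude stand-in for the age of the universe (the true age depends on the full expansion history, but the trend is the same: a larger H0 means a younger universe). A minimal sketch in Python, using the unit conversions the article skips over:

```python
# Rough illustration: the Hubble time 1/H0 as a crude proxy for cosmic age.
# The true age depends on the full expansion history, but the trend holds:
# a larger H0 implies a younger universe.

KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16    # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Convert H0 (km/s/Mpc) into the Hubble time 1/H0 in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SEC_PER_GYR

for label, h0 in [("late universe (supernovae)", 74.03),
                  ("early universe (Planck CMB)", 67.40)]:
    print(f"{label}: H0 = {h0} -> Hubble time ~ {hubble_time_gyr(h0):.1f} Gyr")

# Output: roughly 13.2 Gyr for H0 = 74.03 and 14.5 Gyr for H0 = 67.40.
```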
The conflict between the two approaches is itself not news. People have been playing this game for a while, and during all that time there was always some difference between the early- and late-universe approaches. But everyone thought it was just a matter of time until new and better data resolved the conflict. Eventually, it was believed, the final value would lie somewhere between H0 = 74.03 and H0 = 67.40. But things haven't worked out that way, and that is news.
Over the last few years, measurements using the late-universe approach have been getting better and better. This means the inherent "errors," or uncertainties, in this value of H0 are getting so small that there's no chance for a reconciliation with the early-universe method. The gold standard for a measurement is when it achieves the "5 sigma" level, which basically means confidence in the measured value reaches astronomical (no pun intended) levels. With measurements announced in 2019, the late-universe value of H0 was close to, or had crossed, the 5-sigma threshold.
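The sigma talk boils down to simple arithmetic: divide the gap between the two central values by their combined uncertainty. Here is a minimal sketch; the error bars are representative of 2019-era published values and are not quoted in the article, so treat them purely as illustration:

```python
import math

def tension_sigma(value_a, err_a, value_b, err_b):
    """Number of combined standard deviations separating two measurements."""
    return abs(value_a - value_b) / math.sqrt(err_a**2 + err_b**2)

# Illustrative uncertainties (approximate 2019-era values, not stated in the article)
h0_late, err_late = 74.03, 1.42     # supernova "late universe" measurement
h0_early, err_early = 67.40, 0.5    # Planck "early universe" measurement

print(f"tension ~ {tension_sigma(h0_late, err_late, h0_early, err_early):.1f} sigma")
# Prints roughly 4.4 sigma -- close to the 5-sigma "gold standard" threshold.
```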
So, if the late-universe measurement is solid, then what's going on? What are cosmologists missing? The most exciting possibility is that the conflict is not about errors in measurement or analysis but instead points us toward the holy grail of new physics.
To make their early-universe measurements of H0, cosmologists must rely heavily on the dominant cosmological model. This is something called the "Lambda Cold Dark Matter" model, or Lambda-CDM. It is based on the universe being made mainly of dark energy (the lambda) and a slow-moving ("cold") form of dark matter. This model (or theory) makes predictions that have been very, very well tested. In other words, it works. But the tension between the two methods of determining H0 has some cosmological theorists ready to make changes to Lambda-CDM that could have big consequences for our understanding of the universe. These changes range from fiddling with the nature of dark energy all the way up to modifying Einstein's theory of relativity.
The problem is that Lambda-CDM works so well, in so many ways, that it's not something one throws out lightly. Any change to any of its components will have consequences that can mess up the places where it already works in explaining what we see in the cosmos. What all this means is that the tension in the Hubble constant offers us a lesson in how science progresses. Cosmologists have a paradigm they love, and it mostly works. But along comes this problem and, as philosopher of science Thomas Kuhn pointed out, there are typical ways scientists respond. At first everyone thinks the problem will go away. But then it doesn't. So what should they do? They could tinker with the old theory in a way that looks jury-rigged. They could abandon the old theory entirely, at enormous cost. Or they could keep poking around and hope things work themselves out. What would you do?
New studies find the interstellar comet 2I/Borisov is the most "pristine" ever discovered.
One of the only interstellar visitors ever discovered traversing our Solar System, the rogue comet 2I/Borisov, is also one of the most "pristine" such space objects ever found. The comet, first spotted in 2019 by the amateur Ukrainian astronomer Gennady Borisov, likely never flew too close to any star, including our Sun, which left its composition very similar to how it was upon formation.
Comets, which are space bodies made of frozen gas, rock, and ice, are usually altered by the heat and radiation they encounter on their way. Comets that haven't changed much over their lifetimes are attractive to scientists because their composition is similar to that of the gas and dust present when the Solar System formed 4.5 billion years ago. Analyzing pristine comets can lead to a deeper understanding of the Solar System's beginnings and evolution.
The 2I/Borisov comet is only the second interstellar object ever found in our Solar System. The first one was 1I/'Oumuamua, detected in 2017.
The new study, based on observations from the European Southern Observatory's Very Large Telescope (ESO's VLT) in Chile, was led by Stefano Bagnulo of the Armagh Observatory and Planetarium in Northern Ireland.
"2I/Borisov could represent the first truly pristine comet ever observed," said Bagnulo.
The 2I/Borisov interstellar comet captured with the VLT. Credit: ESO/O. Hainaut
As reported in Nature Communications, his team used a technique called polarimetry, which measures the polarization of light, to study the comet. This let the team compare 2I/Borisov to other, local comets. The new comet's properties turned out to be quite different from those of comets found in the Solar System, with the exception of Hale-Bopp, a comet discovered in 1995 that is also considered very pristine.
The study's co-author Alberto Cellino from the Astrophysical Observatory of Torino, Italy, commented upon this connection, arrived at by analyzing polarization along with the comet's color:
"The fact that the two comets are remarkably similar suggests that the environment in which 2I/Borisov originated is not so different in composition from the environment in the early Solar System.".
In a fascinating nod to just how powerful Earth's top telescopes have become, another set of ESO researchers published a different study in Nature Astronomy on the comet's composition using data from the Atacama Large Millimeter/submillimeter Array (ALMA). This team, led by astronomer Bin Yang, was able to gather many clues about 2I/Borisov's makeup from its coma – the envelope of dust surrounding it. Inside the coma, they discovered compact pebbles, grains around one millimeter in size. They could also tell that the relative amounts of carbon monoxide and water in the comet changed significantly as it came closer to the Sun.
This indicated to them that the materials in the comet came from different places in its home system. The scientists found that matter in the comet's home star system was likely mixed in a discernible pattern related to how far it was from its star. This mixing was possibly driven by giant planets, which stir up material in their systems through their strong gravity. Astronomers think this kind of process also took place early in the Solar System's life.
"Imagine how lucky we were that a comet from a system light-years away simply took a trip to our doorstep by chance," remarked Yang.
In 2029, the European Space Agency plans to launch the Comet Interceptor mission, which would allow scientists to study comets that speed through our Solar System with even greater precision.
Researchers propose a new method that could definitively prove the existence of dark matter.
- Scientists identified a data signature for dark matter that can potentially be detected by experiments.
- The effect they found is a "diurnal modulation" – a daily variation – in the scattering of the particles.
- Dark matter has not yet been detected experimentally.
Dark matter, a type of matter predicted to make up around 27 percent of the known universe, has never been detected experimentally. Now a team of astrophysicists and cosmologists think they have found a clue that may lead them to finally detect the elusive material, which is so hard to find because it does not absorb, reflect, or emit light.
The existence of dark matter has so far been inferred from its gravitational effects on the motions of stars and galaxies rather than from direct observation. No existing technologies can pick it out. This has led researchers at Shanghai Jiao Tong University and the Purple Mountain Observatory of the Chinese Academy of Sciences to identify characteristic dark matter signatures that would be easier to detect.
Their new paper proposes a new type of effect involving so-called "sub-GeV dark matter" that is boosted by cosmic rays. Looking for this effect could allow direct detection of dark matter using nuclear recoil techniques.
The diurnal effect of accelerated dark matter rays. Credit: Ge et al.
The research team included Shao-Feng Ge and Qiang Yuan, who explained that their approach is to look for a prominent signature of accelerated dark matter particles coming from the galaxy's center, where dark matter and cosmic rays are both at high density. They found that these particles have a "diurnal modulation" – a scattering pattern linked to the time of day. During periods when the Galactic Center faces the side of the planet opposite the detector's location, the Earth shadows a large fraction of these particles. At other times, they arrive as a signal with "higher recoil energy."
"The conventional diurnal effect is only for slow moving (nonrelativistic) DM particles in our galaxy (so-called standard DM halo)," Ge and Yuan said to Phys.org. "The effect is negligibly small either from direct experimental constraints, or due to the detection threshold. For light DM particles, on the other hand, the DM-nucleus interaction is much less constrained, which leaves room for strong diurnal modulation."
Researchers Ning Zhou and Jianglai Liu, who were also involved in the study, said in an interview that the signature they are proposing could be "a smoking gun of cosmic ray boosted dark matter detection".
The researchers plan next to look for the signature in previously gathered data, as well as in underground dark matter experiments.
They are also encouraging scientists around the world to look for this signature in their data.
Check out the new paper "Diurnal Effect of Sub-GeV Dark Matter Boosted by Cosmic Rays" published in Physical Review Letters.
Can spacekime help us make headway on some of the most pernicious inconsistencies in physics?
- Our linear model of time may be holding back scientific progress.
- Spacekime theory can help us better understand the development of diseases, financial and environmental events, and even the human brain.
- This theory could help us better utilize big data, develop AI, and even resolve inconsistencies in physics.
We take for granted the Western concept of linear time. In ancient Greece, time was thought to be cyclical, and if the Big Bounce theory is true, the Greeks were right. In Buddhism, there is only the eternal now; both the past and the future are illusions. Meanwhile, the Amondawa people of the Amazon, a group that first made contact with the outside world in 1986, have no abstract concept of time. While we think we know time pretty well, some scientists believe our linear model hobbles scientific progress. We're missing whole dimensions of time, in this view, and our limited perception could be the last obstacle to a sweeping theory of everything.
Theoretical physicist Itzhak Bars of the University of Southern California in Los Angeles is the most famous scientist with such a hypothesis, known as two-time physics. Here, time is 2D, visualized as a curved plane interwoven into the fabric of the "normal" dimensions: up-down, left-right, and backward-forward. While the hypothesis is over a decade old, Bars isn't the only scientist with such an idea. What's different about spacekime theory is that it takes a data analytics approach rather than a physics one. And while it posits that there are at least two dimensions of time, it allows for up to five.
In the spacekime model, space is 5D. Besides the dimensions we normally encounter, the extra ones are so infinitesimally small that we never notice them. This relates to the Kaluza-Klein theory developed in the early 20th century, which proposed that there might be an extra, microscopic dimension of space. In this view, space would be curved like the surface of the Earth. And, as on Earth, anyone who traveled the entire distance would eventually loop back to their place of origin.
Kaluza-Klein theory unified electromagnetism and gravity, but wasn't accepted at the time, although it did help in the search for quantum gravity. The concept of additional dimensions was revived in the 1990s with Paul Wesson's Space-Time-Matter Consortium. Today, proponents of superstring theory say there may be as many as 10 different dimensions, including nine of space and one of time.
The Spacekime model
Spacekime theory was developed by two data scientists. Dr. Ivo Dinov is the University of Michigan's SOCR Director, as well as a professor of Health Behavior and Biological Sciences and of Computational Medicine and Bioinformatics. SOCR stands for Statistics Online Computational Resource. Dr. Dinov is an expert in "mathematical modeling, statistical analysis, computational processing, scientific visualization of large datasets (Big Data) and predictive health analytics." His research has focused on mathematical modeling, statistical inference, and biomedical computing.
His colleague, Dr. Milen Velchev Velev, is an associate professor at the Prof. Dr. A. Zlatarov University in Bulgaria. He studies relativistic mechanics in multiple time dimensions, and his interests include "applied mathematics, special and general relativity, quantum mechanics, cosmology, philosophy of science, the nature of space and time, chaos theory, mathematical economics, and micro- and macroeconomics."
Drs. Dinov and Velev began developing spacekime theory around four or five years ago, while working with big data in the healthcare field. "We started looking at data that intrinsically has a temporal dimension to it," Dr. Dinov told me during a video chat. "It's called longitudinal or time varying data, longitudinal time variance—it has many, many names. This is data that varies with time. In biomedicine, this is the de facto, standard data. All big health data is characterized by space, time, phenotypes, genotypes, clinical assessments, and so forth."
A better way to manage big data
"We started asking big questions," Dinov said. "Why are our models not really fitting too well? Why do we need so many observations? And then, we started playing around with time. We started digging and experimenting with various things. And then we realized two important facts.
"Number one, if we use what's called color-coded representations of the complex plane, we can define spacekime, or higher dimensional spacetime, in such a way that it agrees with the common observations that we make in (the longitudinal time series in) ordinary spacetime. That agreement was very important to us, because it basically says, yes, the higher dimensional theory does not contradict our common observations.
"The second realization was that, since this extra dimension of time is imperceptible, we needed to approximate, model, or estimate, one of the unobservable time characteristics, which we call the kime phase. After about a year, we discovered that there is a mathematically elegant tool called the Laplace Transform that allows us to analytically represent time series data as kime-surfaces. Turns out, the spacekime mathematical manifold is a natural, higher dimensional extension of classical Minkowski, four-dimensional spacetime."
Our understanding of the world is becoming more complex. As a result, we have big data to contend with. How do we find new ways to analyze, interpret, and visualize such data? Dinov believes spacekime theory can help in some pretty impressive ways. "The result of this multidimensional manifold generalization is that you can make scientific inferences using smaller data samples. This requires that you have a good model or prior knowledge about the phase distribution," he said. "For instance, we can use spacekime process representation to better understand the development or pathogenesis to model the distributions of certain diseases.
"Suppose we are evaluating fMRIs of Alzheimer's disease subjects. Assume we know the kime phase distribution for another cohort of patients suffering from amyotrophic lateral sclerosis, Lou Gehrig's disease. The ALS kime-phase distribution could be used for evaluating the Alzheimer's patients," and many other neurodegenerative populations. Dinov also thinks spacekime analytics could help improve political polling, increase our understanding of complex financial and environmental events, and even the innerworkings of the human brain, all without having to take the huge samples required today to make accurate models or predictions. Spacekime theory even offers opportunities to design novel AI analytical techniques. But it goes beyond that.
The problem of time
Spacekime theory, its developers argue, could help us make headway on some of the most pernicious inconsistencies in physics, such as Heisenberg's uncertainty principle and the seemingly irreconcilable rift between quantum physics and general relativity, known as "the problem of time."
Dinov wrote that the "approach relies on extending the notions of time, events, particles, and wave functions to complex-time (kime), complex-events (kevents), data, and inference-functions." Basically, working with two points of time allows you to make inferences on a radius of points associated with a certain event. With Heisenberg's uncertainty principle, according to this model, since time is a plane, a certain particle would be in one position or phase, time-wise, in terms of velocity, and another phase, in terms of position.
This idea of hidden dimensions of time is a little like Plato's allegory of the cave or how an X-ray signifies what's underneath, but doesn't convey a 3D image. From a data science perspective, it all comes down to utility. Dinov believes that if we can calculate the true phase dispersion of complex phenomena, we can better understand and control them.
Drs. Dinov and Velev's book on spacekime theory comes out this August. It's called "Data Science: Time Complexity, Inferential Uncertainty, and Spacekime Analytics".
Results from an experiment using the Large Hadron Collider challenge the accepted model of physics.
- Researchers working on the Large Hadron Collider experiments obtained unusual results.
- The data suggest the possible existence of new particles or interactions.
- The findings aren't accounted for by the Standard Model of particle physics.
Scientists working on the Large Hadron Collider have found that certain particles behave in ways that don't conform to the Standard Model of particle physics. The find may indicate the existence of entirely new particles or interactions and could result in new physics being formulated.
The Standard Model of particle physics, our best current theory, says that particles known as B mesons (which contain "beauty quarks") should decay into muons and electrons at equal rates. However, measurements from a new experiment at the Large Hadron Collider (LHC), the world's largest scientific instrument and most powerful particle accelerator, based at the CERN lab on the Franco-Swiss border, show that this is not what happens. B mesons decaying in the LHC produced more electrons and fewer muons than the theory predicted. These measurements may mean that new, yet-to-be-detected particles are contributing to the imbalance.
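In rough terms, the test comes down to counting decays of each kind and checking whether their ratio is consistent with one, as the Standard Model predicts. The toy sketch below uses made-up event counts and a simple Poisson error estimate; the real LHCb analysis involves efficiency corrections and control channels, so this is only a cartoon of the logic:

```python
import math

def ratio_with_significance(n_muon, n_electron, expected_ratio=1.0):
    """Toy lepton-universality test: ratio of decay counts and its deviation from the SM."""
    ratio = n_muon / n_electron
    # Simple Poisson error propagation for a ratio of independent counts
    err = ratio * math.sqrt(1.0 / n_muon + 1.0 / n_electron)
    significance = abs(ratio - expected_ratio) / err
    return ratio, err, significance

# Hypothetical event counts, chosen only to illustrate the arithmetic
n_muon, n_electron = 1700, 2000
r, err, sig = ratio_with_significance(n_muon, n_electron)
print(f"muon/electron ratio = {r:.3f} +/- {err:.3f}  ({sig:.1f} sigma from the SM value of 1)")
```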
The research, carried out by physicists from Imperial College London and the Universities of Bristol and Cambridge, was part of the LHCb experiment, one of the four particle detectors at the Large Hadron Collider.
One of the study's co-authors, Dr. Mitesh Patel from Imperial College, explained the significance of their achievement:
"We were actually shaking when we first looked at the results, we were that excited," shared Patel in a press release. "Our hearts did beat a bit faster. It's too early to say if this genuinely is a deviation from the Standard Model but the potential implications are such that these results are the most exciting thing I've done in 20 years in the field. It has been a long journey to get here."
The LHCb experiment at the Large Hadron Collider at CERN.
Imperial College Ph.D. student Daniel Moise, who was involved in the study, thinks the findings can lead to new discoveries:
"The result offers an intriguing hint of a new fundamental particle or force that interacts in a way that the particles currently known to science do not," said Moise. "If this is confirmed by further measurements, it will have a profound impact on our understanding of nature at the most fundamental level."
The scientists next aim to verify their results in follow-up experiments.
This is not the only discrepancy with the Standard Model that physicists have uncovered. The nature of dark matter and the unequal distribution of matter and antimatter in the Universe have also thrown wrenches into the most accepted physics ideas.
Check out the new paper "Test of lepton universality in beauty-quark decays" published as a preprint.