Starts With A Bang

Standard Model survives its biggest challenge yet

For years and across three separate experiments, “lepton universality” appeared to be violated, in defiance of the Standard Model’s predictions. LHCb at last proved otherwise.
The LHCb detector, at 5600 metric tonnes, is 21 meters long, 10 meters high, and 13 meters wide, optimized for detecting and studying particles (and their subsequent decays) that contain b-quarks inside them. As of March 2022, over 1,500 scientists, engineers, and technicians work on the LHCb collaboration.
(Credit: CERN/LHCb collaboration)
Key Takeaways
  • With the Standard Model of particle physics, we don’t simply get the particles that make up our conventional existence, but three copies of them: multiple generations of quarks and leptons.
  • According to the Standard Model, many processes that occur in one generation of leptons (electrons, muons, and taus) should occur in all the others, so long as you account for their mass differences.
  • This property, known as lepton universality, was challenged by three independent experiments. But in a tour-de-force advance, LHCb has vindicated the Standard Model once again. Here’s what it means.

In all of science, perhaps the greatest quest of all is to go beyond our current understanding of how the Universe works to find a more fundamental, truer description of reality than what we have at present. In terms of what the Universe is made of, this has happened many times, as we discovered:

  • the periodic table of the elements,
  • the fact that atoms have electrons and a nucleus,
  • that the nucleus contains protons and neutrons,
  • that protons and neutrons themselves are composite particles made of quarks and gluons,
  • and that there are additional particles beyond quarks, gluons, electrons, and photons that compose our reality.

The full description of particles and interactions known to exist comes to us in the form of the modern Standard Model, which has three generations of quarks and leptons, plus the bosons that describe the fundamental forces as well as the Higgs boson, responsible for the non-zero rest masses of all the Standard Model particles.

But very few people believe that the Standard Model is complete, or that it won’t be superseded someday by a more comprehensive, fundamental theory. One of the ways we’re attempting to do that is by testing the Standard Model’s predictions directly: by creating heavy, unstable particles, watching them decay, and comparing what we observe to the Standard Model’s predictions. For more than a decade, the idea of lepton universality seemed incompatible with what we were observing, but a superior test by the LHCb collaboration just gave the Standard Model a stunning victory. Here’s the full, triumphant story.

The quarks, antiquarks, and gluons of the Standard Model have a color charge, in addition to all the other properties like mass and electric charge. All of these particles, except gluons and photons, experience the weak interaction. Only the gluons and photons are massless; everything else, even the neutrinos, has a non-zero rest mass.
Credit: E. Siegel/Beyond the Galaxy

The Standard Model is so powerful because it basically combines three theories — the theory of the electromagnetic force, the weak force, and the strong force — into one coherent framework. All of the particles that exist can have charges under any or all of these forces, interacting directly with the bosons that mediate the interactions corresponding to that particular charge. The particles that make up the matter we know of are generally called fermions, and consist of the quarks and leptons, which come in three generations apiece as well as their own antiparticles.

One of the ways we have of testing the Standard Model is by looking at its predictions in detail, calculating what the probability would be of all of the possible outcomes for any particular setup. For example, whenever you create an unstable particle — e.g., a composite particle like a meson or baryon made up of one or more heavy quarks, like a strange, charm, or bottom quark — there isn’t just one decay path it can take, but a wide variety, all with their own explicit probability of occurring. If you can calculate the probability of all possible outcomes and then compare what you measure at a particle accelerator that produces them in great numbers, you can put the Standard Model to a myriad of tests.
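
To make that idea concrete, here is a minimal sketch in Python — using invented branching fractions, not actual Standard Model numbers — of how you would compare a predicted set of decay probabilities to the counts a detector records:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical branching fractions for an unstable particle's decay channels.
# These are illustrative numbers only, not real Standard Model predictions.
predicted = {"channel_A": 0.62, "channel_B": 0.30, "channel_C": 0.08}

# Simulate a large number of decays according to those probabilities.
n_decays = 1_000_000
observed_counts = rng.multinomial(n_decays, list(predicted.values()))

# Compare observed fractions (with simple Poisson errors) to the prediction.
for (name, p), count in zip(predicted.items(), observed_counts):
    fraction = count / n_decays
    error = np.sqrt(count) / n_decays
    print(f"{name}: observed {fraction:.4f} ± {error:.4f}, predicted {p:.4f}")
```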

This chart of particles and interactions details how the particles of the Standard Model interact according to the three fundamental forces that Quantum Field Theory describes. When gravity is added into the mix, we obtain the observable Universe that we see, with the laws, parameters, and constants that we know of governing it. However, many of the parameters that nature obeys cannot be predicted by theory; they must be measured to be known, and those are the “constants” that our Universe requires, to the best of our knowledge.
Credit: Contemporary Physics Education Project/DOE/SNF/LBNL

One type of test we can perform is called lepton universality: the notion that, except for the fact that they have different masses, the charged leptons (electron, muon, tau) and the neutrinos (electron neutrino, muon neutrino, tau neutrino), as well as their respective antiparticles, should all behave the same as one another. For example, when a very massive Z-boson decays — and take note that the Z-boson is much more massive than all of the leptons — it has the same probability of decaying into an electron-positron pair as into a muon-antimuon or a tau-antitau pair. Similarly, it has equal probabilities of decaying into neutrino-antineutrino pairs of all three flavors. Here, experiment and theory agree, and the Standard Model is safe.

But over the first part of the 21st century, we started to see some evidence that when both charged and neutral mesons that contain bottom quarks decayed into a meson that contained a strange quark as well as a charged lepton-antilepton pair, the probability of getting an electron-positron pair differed from the probability of getting a muon-antimuon pair by much more than their mass differences could account for. This hint, from experimental particle physics, led many to hope that perhaps we had stumbled upon a violation of the Standard Model’s predictions, and hence, a hint that could take us beyond known physics.

The leading-order diagrams, in the Standard Model, that can produce kaon + lepton-antilepton pairs from two types of B-meson. At both large and small q² values, and in both channels, the ratios of muon-antimuon to electron-positron pairs are expected to be identical.
(Credit: LHCb Collaboration, preprint, arXiv:2212.09153, 2022)

Starting in 2004, two experiments that were producing significant numbers of both charged and neutral mesons containing bottom quarks, BaBar and Belle, sought to put the notion of lepton universality to the test. If the probabilities, when corrected for what we call the “square of the dilepton invariant mass” (i.e., the energy taken to produce either an electron-positron or muon-antimuon pair), or q², corresponded to the Standard Model’s predictions, then the ratio between the number of electron-positron and muon-antimuon decay events should be 1:1. That was what was expected.
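
In its simplest form — and this is only a schematic sketch with invented counts, not the collaborations’ actual analysis procedure — the measured quantity is just a ratio of event counts in a given q² range, with Poisson counting errors:

```python
import numpy as np

# Hypothetical event counts in a single q² bin -- invented for illustration.
n_mumu = 1800   # B -> K mu+ mu- candidates
n_ee = 1750     # B -> K e+ e- candidates

# The lepton-universality ratio (here uncorrected for detector efficiency).
R = n_mumu / n_ee

# Simple Poisson error propagation: (dR/R)^2 = 1/N_mumu + 1/N_ee
R_err = R * np.sqrt(1.0 / n_mumu + 1.0 / n_ee)

print(f"R = {R:.3f} ± {R_err:.3f}  (Standard Model expectation: 1.0)")
```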

Belle’s results were completely consistent with a 1:1 ratio, but BaBar’s were a little bit low (just under 0.8), which got a lot of people excited about the Large Hadron Collider at CERN. You see, in addition to the two main detectors — ATLAS and CMS — there was also the LHCb detector, optimized and specialized to look for decaying particles that were created with a bottom quark inside. Three successive results were published as more and more data came in from LHCb’s tests of lepton universality, with that ratio stubbornly remaining below 1. Going into the latest results, the error bars kept shrinking with more statistics, but the average ratio hadn’t changed substantially. Many began to get excited as the significance increased; perhaps this would be the anomaly that at last “broke” the Standard Model for good!

The results of BaBar, Belle, and the first three data releases of the LHCb experiment’s investigations into tests of lepton universality. By examining the ratios of muon-antimuon versus electron-positron pairs in B-meson decays to kaons plus lepton-antilepton pairs, an anomaly emerged showing a difference between the two lepton families, where none is predicted by the Standard Model.
(Credit: LHCb Collaboration, preprint, arXiv:2212.09153, 2022)

It turns out there were actually four independent tests that could be done with the LHCb data:

  • to test the decay of charged B-mesons into charged kaons at low values of q²,
  • to test the decay of charged B-mesons into charged kaons at higher values of q²,
  • to test the decay of neutral B-mesons into excited-state kaons at low values of q²,
  • and to test the decay of neutral B-mesons into excited-state kaons at higher values of q².

If there were new physics that could come into play and affect these Standard Model predictions, you’d expect it to play a greater role for higher values of q² (or, in other words, at higher energies), but you’d expect better agreement with the Standard Model for lower values of q².

But that wasn’t what the data indicated. All of the tests that had been conducted (three of the four; all but the charged B-mesons at low q²) pointed to the same low value of that ratio that should’ve been 1:1. When you combined the results of all the tests conducted, they indicated a ratio of about 0.85, not 1.0, and the departure was significant enough that there was only about a 1-in-1000 chance of it being a statistical fluke. That left three main possibilities, all of which needed to be considered.

This event display shows one example of a rare decay of a B-meson that involved an electron and positron among its decay products, as observed by the LHCb detector.
(Credit: CERN/LHCb experiment)
  1. This really was a statistical fluke, and with more and better data, the ratio of electron-positron to muon-antimuon pairs would regress to the expected 1.0 value.
  2. There was something funny going on with how we were either collecting or analyzing the data — a systematic error — that had slipped through the cracks.
  3. Or the Standard Model really is broken, and with better statistics, we’d reach the 5-σ threshold to announce a robust discovery; the prior results were suggestive, at about a 3.2-σ significance, but not there yet.

Now, there’s really no good “test” to see if option 1 is the case; you simply need more data. Similarly, you can’t tell if option 3 is the case or not until you reach that vaunted threshold; until you get there, you’re only speculating.
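
For reference on how those numbers connect: a “1-in-1000 chance” and a “3.2-σ significance” are two ways of quoting the same kind of statement. Here’s a quick sketch of the translation between p-values and σ (using the one-tailed Gaussian convention; conventions vary):

```python
from scipy.stats import norm

# Translate between a one-sided p-value and a Gaussian "sigma" significance.
def p_to_sigma(p):
    return norm.isf(p)        # inverse survival function

def sigma_to_p(sigma):
    return norm.sf(sigma)     # survival function

print(f"{p_to_sigma(1e-3):.2f} sigma")   # ~3.1 sigma: roughly the "1-in-1000" level
print(f"{sigma_to_p(3.2):.1e}")          # ~6.9e-4: the anomaly's approximate p-value
print(f"{sigma_to_p(5.0):.1e}")          # ~2.9e-7: the vaunted discovery threshold
```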

But there are lots of possible options for how option 2 could rear its head, and the best explanation I know of is to teach you about a word that has a special meaning in experimental particle physics: cuts. Whenever you have a particle collider, you have a lot of events: lots of collisions, and lots of debris that comes out. Ideally, what you’d do is keep 100% of the interesting, relevant data that’s important for the particular experiment you’re trying to perform, while throwing away 100% of the irrelevant data. That’s what you’d analyze to arrive at your results and to inform your conclusions.

Choosing which bits of data to include and exclude, and knowing how to model your background properly, are essential for comparing your experimental results with the appropriate theoretical predictions. If the background is incorrectly modeled, or the wrong data is included/excluded (i.e., cut), your results will not be 100% indicative of the underlying science.
(Credit: LHCb Collaboration, preprint, arXiv:2212.09153, 2022)

But it’s not actually possible, in the real world, to keep everything you want and throw away everything you don’t. In a real particle physics experiment, you look for specific signals in your detector in order to identify the particles you’re looking for: tracks that curve a certain way within a magnetic field, decays that display a displaced vertex a certain distance from the collision point, specific combinations of energy and momentum that arrive in the detector together, etc. When you make a cut, you make it based on a measurable parameter: throwing away what “looks like” what you don’t want and keeping what “looks like” what you do.

Only then, once the proper cut is made, do you do your analysis.
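
As a toy illustration of what a “cut” looks like in practice — with entirely hypothetical field names and thresholds, not LHCb’s actual selection criteria — you can think of it as filtering events on measurable quantities:

```python
# Entirely hypothetical event records and thresholds, for illustration only.
events = [
    {"pT_GeV": 4.2, "vertex_displacement_mm": 1.8, "electron_id_score": 0.93},
    {"pT_GeV": 0.9, "vertex_displacement_mm": 0.1, "electron_id_score": 0.41},
    {"pT_GeV": 6.7, "vertex_displacement_mm": 2.5, "electron_id_score": 0.88},
]

def passes_cuts(event):
    """Keep candidates with enough transverse momentum, a displaced decay
    vertex, and a high particle-identification score."""
    return (event["pT_GeV"] > 2.0
            and event["vertex_displacement_mm"] > 0.5
            and event["electron_id_score"] > 0.8)

selected = [e for e in events if passes_cuts(e)]
print(f"kept {len(selected)} of {len(events)} events")
```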

Upon learning this for the first time, many experimental particle physics undergrad and graduate students have a miniature version of an existential crisis. “Wait, if I make my cuts in a particular way, couldn’t I just wind up ‘discovering’ anything I wanted to at all?” Thankfully, it turns out there are responsible practices that one must follow, including understanding both your detector’s efficiency and what other experimental signals might overlap with what you’re attempting to separate out by making your cuts.

The LHCb detector has a known and quantifiable difference in detection efficiency between electron-positron pairs and muon-antimuon pairs. Accounting for this difference is an essential step in measuring the probabilities and rates of decays of B-mesons into kaons plus one lepton-antilepton combination over another. With the proper recalibration, the collaboration has now shown that lepton universality appears to hold: the calibrated ratios of electron-positron to muon-antimuon pairs are indistinguishable from 1.0.
Credit: LHCb Collaboration, R. Aaij et al., JINST, 2019

It had been known for some time that electrons (and positrons) have different efficiencies in the LHCb detector than muons (and antimuons), and that effect was well-accounted for. But sometimes, when you have a particular type of meson traveling through your detector — a pion or a kaon, for instance — the signal that it creates is very similar to the signals that electrons generate, and so misidentification is possible. This is important, because if you’re trying to measure a very specific process that involves electrons (and positrons) as compared to muons (and antimuons), then any confounding factor can bias your results!
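
Here’s a minimal sketch of why that matters: the ratio you quote depends on dividing out each channel’s detection efficiency, so a misestimated electron efficiency shifts the answer by the same relative amount. Every number below is invented, purely for illustration:

```python
# Invented numbers, for illustration only: efficiency-correcting raw counts
# before forming the lepton-universality ratio.
n_mumu_raw, eff_mumu = 2000, 0.90      # muon pairs: reconstructed efficiently
n_ee_raw, eff_ee_true = 1500, 0.70     # electron pairs: less efficiently

# Correct ratio: divide each raw count by its true detection efficiency.
R_true = (n_mumu_raw / eff_mumu) / (n_ee_raw / eff_ee_true)
print(f"ratio with the true electron efficiency (0.70):  {R_true:.3f}")

# If you mistakenly assumed the electron efficiency was 0.74, the inferred
# ratio would shift by that same few-percent amount -- a systematic bias.
R_biased = (n_mumu_raw / eff_mumu) / (n_ee_raw / 0.74)
print(f"ratio with a misestimated efficiency (0.74):     {R_biased:.3f}")
```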

This is precisely the type of “systematic error” that can pop up and make you think you’re detecting a significant departure from the Standard Model. It’s a dangerous type of error, because as you gather greater and greater statistics, the departure you infer from the Standard Model will become more and more significant. And yet, it isn’t a real signal that indicates that something about the Standard Model is wrong; it’s simply a different type of decay that can bias you in either direction as you’re attempting to see decays with both kaons and electron-positron pairs. If you either oversubtract or undersubtract the unwanted signal, you’re going to wind up with a result that fools you into thinking you’ve broken the Standard Model.
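
A similarly simple sketch shows how over- or under-subtracting a misidentified background — say, pions or kaons faking electrons — drags the measured ratio away from its true value. Again, every number here is invented:

```python
# Invented numbers, for illustration only.
n_ee_observed = 1900    # candidate e+e- events: true signal plus fakes
true_background = 200   # actual number of misidentified (fake) events
n_mumu = 1700           # muon channel, assumed clean here for simplicity

# The true ratio is 1700 / (1900 - 200) = 1.0. Watch what happens if the
# background is under- or over-estimated:
for assumed_background in (100, 200, 300):
    n_ee_signal = n_ee_observed - assumed_background
    R = n_mumu / n_ee_signal
    print(f"assumed background = {assumed_background:3d}  ->  R = {R:.3f}")
```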

This figure from the December 20, 2022 LHCb Collaboration publication shows how, across all four classes of B-meson to K-meson plus lepton-antilepton decays, the probability of identifying an event as an electron changed similarly (and, importantly, away from the expected 1.0 ratio) in all four data sets depending on the tagging parameters. This led LHCb researchers to more correctly identify which events were kaons (or pions) versus which were leptons, a vital step in better understanding their data.
(Credit: LHCb Collaboration, preprint, arXiv:2212.09153, 2022)

The chart, above, shows how these misidentified backgrounds were discovered. These four separate classes of measurements show that the inferred probabilities of having one of these kaon-electron-positron decays from a B-meson all change together when you change the criteria to answer the key question of, “Which particle in the detector is an electron?” Because the results changed coherently, the LHCb scientists — after a herculean effort — were finally able to better identify the events that revealed the desired signal from previously-misidentified background events.

With this recalibration now possible, the data could be analyzed correctly in all four channels. Two things of note could immediately be observed. First, the ratios of the two types of leptons that could be produced, electron-positron pairs and muon-antimuon pairs, all shifted dramatically. Instead of about 0.85, the four ratios all leapt up to become very close to 1.0, with the four respective channels showing ratios of 0.994, 0.949, 0.927, and 1.027. Second, the systematic errors, aided by the better understanding of the backgrounds, shrank to just 2-3% in each channel, a remarkable improvement.
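
For a feel of how those four channels come together, here’s a back-of-the-envelope inverse-variance average of the quoted ratios. The 4% per-channel uncertainties are my own illustrative stand-ins; the real combination accounts for correlated statistical and systematic errors, which this sketch ignores:

```python
import numpy as np

# The four recalibrated channel ratios quoted above.
ratios = np.array([0.994, 0.949, 0.927, 1.027])
errors = np.array([0.04, 0.04, 0.04, 0.04])   # illustrative stand-in uncertainties

# Simple inverse-variance weighted average.
weights = 1.0 / errors**2
mean = np.sum(weights * ratios) / np.sum(weights)
mean_err = 1.0 / np.sqrt(np.sum(weights))

print(f"combined ratio: {mean:.3f} ± {mean_err:.3f}")
# Compare to the Standard Model expectation of exactly 1.0.
```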

This graph shows how, with the necessary recalibration of the LHCb data based on appropriately and correctly tagged backgrounds versus lepton-antilepton signals, the alleged signal in all four channels has regressed to a value completely consistent with the Standard Model: a ratio of 1.0, not ~0.85, as previous studies had indicated.
(Credit: LHCb Collaboration, preprint, arXiv:2212.09153, 2022)

All told, this means that lepton universality — a core prediction of the Standard Model — now appears to hold true across all the data we have, something that couldn’t be said before this reanalysis. It means that what appeared to be a ~15% effect has now evaporated, but it also means that future LHCb work should be able to test lepton universality to the 2-3% level, which would be the most stringent test of all time on this front. Finally, it further validates the value and capabilities of experimental particle physics and the particle physicists who conduct it. Never has the Standard Model been tested so well.

The importance of testing your theory in novel ways, to better precision and with larger data sets than ever before, cannot be overstated. Sure, as theorists, we are always searching for novel ways to go beyond the Standard Model that remain consistent with the data, and it’s exciting whenever you discover a possibility that’s still viable. But physics, fundamentally, is an experimental science, driven forward by new measurements and observations that take us into new, uncharted territory. As long as we keep pushing the frontiers forward, we’re guaranteed to someday discover something novel that unlocks whatever the “next level” is in refining our best approximation of reality. But if we allow ourselves to be mentally defeated before exhausting every avenue available to us, we’ll never learn how truly rich the ultimate secrets of nature actually are.

The author thanks Patrick Koppenburg for repeated correspondence, and acknowledges a wonderfully informative thread by a pseudonymous LHCb collaboration member.
