The Neuroscience Power Crisis: What's the fallout?
Last week a paper ($) was published in Nature Reviews Neuroscience that is rocking the world of neuroscience. The team of researchers, including neuroscientists, psychologists, geneticists and statisticians, analysed meta-analyses of neuroscience research to determine the statistical power of the studies they contained.
The group discovered that neuroscience as a field is tremendously underpowered: most experiments are too small to reliably detect the subtle effects they are looking for, and the effects that are found are far more likely to be false positives than previously thought. Many theories previously considered robust may be far weaker than imagined. By its very nature, this problem is very difficult to assess at the level of any individual study, but when the field is examined as a whole, an assessment of statistical power across a broad spread of the literature becomes possible, and it has brought worrying implications.
Something that the research only briefly touches on is that neuroscience may not be alone. Underpowered research could be endemic throughout the sciences. This may be a consequence of institutionalised failings that spread perverse incentives, such as the pressure on scientists to churn out paper after paper rather than genuinely producing quality work. This has big implications for our assumption that science is self-correcting; today, in certain areas, this may not necessarily be the case. I sat down with Katherine Button and Marcus Munafò, two of the lead researchers on the project, to discuss the impact of the research. The conversation is below:
I'd like to begin by asking whether any individual low-powered studies you've stumbled upon are particularly striking to you. I'm especially curious about low-powered studies that stand out as having made an impact on the field, or perhaps ones that were most heavily spun upon release or resulted in dubious interpretations.
K: We looked at meta-analyses and didn't look directly at the individual studies which contributed to those meta-analyses. The quality of some of the meta-analyses stood out because of unclear reporting of results; in some cases we had to work quite hard to extract the data, but because we were working at the meta level we weren't really struck by the individual studies.
M: It's probably worth taking a step back from this paper and thinking about the motivation for doing it in the first place, and the sort of things that gave rise to it. My research group is quite broad in its interests, so we do some genetic work, some human psychopharmacology work, and I've worked with people on animal studies. Dating back several years, one of the consistent themes coming out of my research was that some effects that are apparently robust, if you read the published literature, are actually much harder to replicate than you might think. That's true across a number of different fields; for example, if you look at candidate gene studies, it is now quite widely agreed that most of these are just too small to detect an effect that would be plausible, given what we know about genetic effects now. A whole literature has built up around specific associations that captured the scientific imagination, but when you look at the data, either through a meta-analysis or by trying to replicate the finding yourself, you find it's a lot more nebulous than some readings of the literature would have you believe. This was coming out as a really consistent theme. I started by doing meta-analysis as a way of identifying genetic variants robustly associated with outcomes so I could then genotype those variants myself, back in the day when genotyping was expensive. It turned out that actually none of them was particularly robust; that was the clear finding.
I cut my teeth on meta-analytic techniques in that way and started applying the technique a bit more widely to human behavioural studies and so on, and one of the things that was really striking was that the average power in such diverse fields was really low - about 20%. That was the motivation behind looking at this more systematically and doing it in a way that would allow us to frame the problem, hopefully constructively, to an audience that might not have come across these problems in detail before. I could point at individual papers, but I'd be reluctant to, as that would say more about what I happen to have read rather than particularly problematic papers. It's a broad problem, I don't think it's about a particular field or a particular method.
K: During my PhD I looked at emotional processing in anxiety and whether processing is biased towards certain types of emotional expression. In a naive reading of the literature, certain things came out, like a strong bias for fearful faces or disgusted faces, for example, but when I tried to replicate these findings, my results didn't seem to fit. When I looked at the literature more critically, I realised that the reported effects were all over the place. I work in a medical department where there is an emphasis on the need for more reliable methods and statistical approaches, and Marcus was one of my PhD supervisors and had investigated the problems of low power in other literatures. Applying the knowledge gained from statistical methods training to critique the emotion processing literature led me to think that a lot of this literature is probably false positives. I wouldn't be surprised if that was the same for other fields.
M: We tried to draw in people from a range of fields - John Ioannidis is an epidemiologist, Jonathan Flint is a psychiatric geneticist, Emma Robinson does animal model work and behavioural pharmacology, Brian Nosek is a psychologist, Kate works in a medical department, I work in a psychology department, and one of the points we try to make is that individual fields have learned some specific lessons. Clinical trials have learned about the value of pre-registration of study protocols and power analysis, genetics has learned about the importance of large scale consortial efforts, meta-analysis, stringent statistical criteria and replication. Many of those lessons could be brought together and applied more or less universally.
Can you explain the importance of meta-analyses for assessing the problem of underpowered research?
K: To work out the power that a study has to detect a true effect requires an estimation of the size of that true underlying effect. We can never really know what the true underlying effect is, so the best estimate we have is the effect size indicated by a meta-analysis, because that will be based on several studies' attempts to measure that effect. We used the meta-analyses as a proxy for the true underlying effect and then went back and looked at the power the individual studies would have had assuming that meta-effect was actually true. That's why you have to do this meta-analytic approach, because just calculating the power an individual study has to detect the effect observed in that study is circular and meaningless in this context.
M: We really are trying to be constructive - we don't want this to be seen as a hatchet job. I think we've all made these kinds of mistakes in the past, certainly I have, and I’m sure I’ll continue to make mistakes without meaning to, but one of the advantages of this kind of project is that it’s made me think about how I can improve my own practices, such as by pre-registering study protocols.
K: And it's not just mistakes, it's also a practicality issue - resources are often limited. Yet even if you know your study is underpowered, it's still useful to say "with this sample size, we can detect an effect of this size". If you are upfront about the limitations of a small sample, then at least you know what size of effects you can and can't detect, and you can interpret the results accordingly.
M: And make it clear when your study is confirmatory and when your study is exploratory – that distinction, I think, is blurred at the moment; my big concern is with the incentive structures that scientists have to work within. We are incentivised to crank the handle and run smaller studies that we can get published, rather than take longer to run fewer studies that might be more authoritative but aren't going to make for as weighty a CV in the long run because, however much emphasis there is on quality, there is still an extent to which promotions and grant success are driven just by how heavy your CV is.
I'm also interested in how, in your opinion, neuroscience compares to psychology and other sciences more broadly in terms of the statistical power of published research. Do you think neuroscience is an anomaly, or is the problem equally prevalent in other disciplines?
M: My sense is that wherever we've looked we've come up with the same answer. We haven't looked everywhere but there is no field that has particularly stood out as better or worse, with the possible exception of phase three clinical trials that are funded by research councils without vested interests - those tend to be quite authoritative. But again, our motivation was not that neuroscience is particularly problematic - we were trying to raise these issues with a new audience and present some of the potential solutions that have been learned in fields such as genetics and clinical trials. It was more about reaching an audience than saying this field is better or worse than other fields because my sense is this is a universal problem.
Are there any particularly urgent areas you would like to highlight where under-powered research is an issue?
K: The emotional processing and anxiety literature – only because I am familiar with it. But I agree with Marcus’ point that these problems go across research areas and you are only familiar with them within the fields in which you work. I started off thinking that there were genuine effects to be found. There are so many studies with such conflicting evidence that you write a paper and try and say the evidence is conflicting and not very reliable, but then reviewers might say “how about so-and-so’s study?” and you just don’t have the space in papers to give a critique of all the methodological failings of all these studies.
M: I think there is a real distinction to be made between honest error, where people are trying to do a good job but are incentivised to promote and market their findings, and it's all unconscious and not malicious, and people who actually set out to game the system and don't care whether or not they are right – that's a really important distinction.
K: Something we do in my department is work with statisticians who are very careful about not overstating the claims of what we've found. I've done a few things looking at predictors of response to treatment, which is effectively subgroup analysis of existing trial data, and we try to be really upfront about the fact that these analyses are exploratory and that there are lots of limitations to subgroup analyses. I try to put at the forefront: 'type one and type two errors are possible and these findings need to be replicated before you believe any of them'. But as soon as you find a significant p-value, there are still a lot of reviewers who say 'oh, but this is really important for this, that or the other', and no one wants to publish a nicely considered paper. There is a real emphasis from people saying 'but why can't you speculate on why this is really important and the implications this could have', and you think that it could be important, but it could also be complete chance, so at every stage you are battling against the hyping up of your research.
M: I’ve had reviewers do this for us. In one case we were fairly transparent about presenting all our data and some of them were messy and some of them less so, and one of the reviewers said ‘just drop this stuff, it makes for a cleaner story and cleaner data if you don’t report all your data’ and we said ‘well actually we’d rather report everything and be transparent!’
K: As soon as you drop the nineteen things that didn’t come out, your one chance finding looks really amazing!
M: This is what I mean about honest error, the reviewer had no vested interest, the reviewer wasn’t trying to hype our results for us because – why would he or she? It’s just the system.
K: I think storytelling is a real problem, because a good story helps people to understand what you're saying – it's like when you write a blog, you have to have a theme so people can follow you, but there's a balance to be struck between making your work accessible to readers and not missing the point completely by going off on a tangent.
M: But that’s at the design stage; one of the things we are incentivised to do - wrongly in my opinion – is to include loads of measures so you’ve got a chance of finding something and then dropping all the other measures so it’s easier to tell the story. Actually what would be better is from the outset to design a study with relatively few outcomes where they all have their place and then you can write them up with all of them in there even if the results aren’t clear cut.
K: But that would require a lack of publication bias to really incentivise that; putting all of your eggs into one basket is very heavily disincentivised. What we've tried to do recently when we are doing pilot studies is to write in the protocols 'we are going to be looking at all these different outcomes, but this is our primary analysis and all these others are secondary exploratory analyses'. There are ways to report honestly and include lots of variables.
How big do you feel the gap is between bad science and institutionalised problems?
M: It’s not just about statistics; it takes a lot of guts as a PhD student to run the risk of having no publications at the end of your PhD.
K: It’s terrifying. Whether you get a post-doc depends on what your CV looks like.
M: I think of it as a continuum where there are very few people who are fraudulent, but then there are very few people who are perfect scientists, most of us are in the middle, where you become very invested in your ideas, there is confirmation bias, so one of the obvious things is you do an experiment as planned, you get exactly the results you expect and you think – great – and start writing it up, but if that process happens and you don’t get the results you were expecting you go back and check your data. So there can easily be a systematic difference in the amount of error checking that happens from one case to another, but in both cases there is the same likelihood that there will be errors in the data. It takes a lot of courage at the stage where you’ve run the analysis and got the results you were expecting to then go back and test them to destruction. Many scientists do this, but some don’t, not because they’re malicious but because that’s a natural psychological phenomenon – confirmation bias – you see what you are expecting to see.
Are there any specific bad practices that you think need to be highlighted?
M: Again, one of my main issues is with current incentive structures, which are hard for people to change from the bottom up – if you change your behaviour you are suddenly disadvantaged, relative to everyone else, in the short term. Then you have the problem that a lot of it is actually unconscious, well meant, non-malicious human instinct. Then you have the problem that when you do identify concerns there is no framework from which you say something without coming across as really hostile and confrontational – and that’s not necessarily constructive.
Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, & Munafò MR (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365-376. PMID: 23571845
Image credit: Shutterstock/Feraru Nicolae
Scientists used CT scanning and 3D-printing technology to re-create the voice of Nesyamun, an ancient Egyptian priest.
- Scientists printed a 3D replica of the vocal tract of Nesyamun, an Egyptian priest whose mummified corpse has been on display in the UK for two centuries.
- With the help of an electronic device, the reproduced voice is able to "speak" a vowel noise.
- The team behind the "Voices of the Past" project suggest reproducing ancient voices could make museum experiences more dynamic.
Scientists have reproduced the voice of an ancient Egyptian priest by creating a 3D-printed replica of his mummified vocal tract.
An international and interdisciplinary team, led by David Howard, a professor of electronic engineering at Royal Holloway, used computed tomography (CT) scanning technology to measure the dimensions of the vocal tract of Nesyamun, a mummy that's spent about two centuries on display at Leeds City Museum in the United Kingdom.
The team then used those measurements to 3D-print an artificial vocal tract, through which they produced sounds using a peculiar electronic device called the Vocal Tract Organ.
"The Vocal Tract Organ, a first in its own right, provided the inspiration for doing this," Howard told CNET.
Nesyamun, whose priestly duties included chanting and singing the daily liturgy, can once again "speak" — at least, in the form of a vowel noise that sounds something like a cross between the English pronunciation of the vowels in "bed" and "bad."
Of course, the new "voice" of Nesyamun is an approximation, and given the lack of actual recordings of his voice and the degeneration of his body over millennia, it's impossible to know just how accurate it is. But the researchers suggested that their "Voices of the Past" project offers a chance for people to "engage with the past in completely new and innovative ways."
Credit: Howard et al.
"While this approach has wide implications for heritage management/museum display, its relevance conforms exactly to the ancient Egyptians' fundamental belief that 'to speak the name of the dead is to make them live again'," they wrote in a paper published in Nature Scientific Reports. "Given Nesyamun's stated desire to have his voice heard in the afterlife in order to live forever, the fulfilment of his beliefs through the synthesis of his vocal function allows us to make direct contact with ancient Egypt by listening to a sound from a vocal tract that has not been heard for over 3000 years, preserved through mummification and now restored through this new technique."
Connecting modern people with history
It's not the first time scientists have "re-created" an ancient human's voice. In 2016, for example, Italian researchers used software to reconstruct the voice of Ötzi, an iceman who was discovered in 1991 and is thought to have died more than 5,000 years ago. But the "Voices of the Past" project is different, the researchers note, because Nesyamun's mummified corpse is especially well preserved.
"It was particularly suited, given its age and preservation [of its soft tissues], which is unusual," Howard told Live Science.
As to whether Nesyamun's reconstructed voice will ever be able to speak complete sentences, Howard told The Associated Press that it's "something that is being worked on, so it will be possible one day."
John Schofield, an archaeologist at the University of York, said that reproducing voices from history can make museum experiences "more multidimensional."
"There is nothing more personal than someone's voice," he told The Associated Press. "So we think that hearing a voice from so long ago will be an unforgettable experience, making heritage places like Karnak, Nesyamun's temple, come alive."
Ancient corridors below the French capital have served as its ossuary, playground, brewery, and perhaps soon, air conditioning.
- People have been digging up limestone and gypsum from below Paris since Roman times.
- They left behind a vast network of corridors and galleries, since reused for many purposes — most famously, the Catacombs.
- Soon, the ancient labyrinth may find a new lease of life, providing a sustainable form of air conditioning.
Ancient mining areas below Paris for limestone (red) and gypsum (green). Credit: Émile Gérards (1859–1920) / Public domain
"If you're brave enough to try, you might be able to catch a train from UnLondon to Parisn't, or No York, or Helsunki, or Lost Angeles, or Sans Francisco, or Hong Gone, or Romeless."
China Miéville's fantasy novel Un Lun Dun is set in an eerie mirror version of London. In it, he hints that other cities have similar doubles. On the list that he offhandedly rattles off, Paris stands out. Because the City of Light really does have a twisted sister. Below Paris Overground is Paris Underground, the City of Darkness.
Most people will have heard of the Catacombs of Paris: subterranean charnel houses for the bones of around six million dead Parisians. They are one of the French capital's most famous tourist attractions – and undoubtedly its grisliest.
But they constitute only a small fragment of what the locals themselves call les carrières de Paris ("the quarries of Paris"), a collection of tunnels and galleries up to 300 km (185 miles) long, most of which is off-limits to the public, yet eagerly explored by so-called cataphiles.
The Grand Réseau Sud ("Great Southern Network") takes up around 200 km beneath the 5th, 6th, 14th, and 15th arrondissements (administrative districts), all south of the river Seine. Smaller networks run beneath the 12th, 13th, and 16th arrondissements. How did they get there?
Paris stone and plaster of Paris
It all starts with geology. Sediments left behind by ancient seas created large deposits of limestone in the south of the city, mostly south of the Seine; and gypsum in the north, particularly in the hills of Montmartre and Ménilmontant. Highly sought after as building materials, both have been mined since Roman times.
The limestone is also known as Lutetian limestone (Lutetia is the Latin name for ancient Paris) or simply "Paris stone." It has been used for many famous Paris landmarks, including the Louvre and the grand buildings erected during Georges-Eugène Haussmann's large-scale remodelling of the city in the mid-19th century. The stone's warm, yellowish color provides visual unity and a bright elegance to the city.
The fine-powdered gypsum of northern Paris, used for making quick-setting plaster, was so famed for its quality that "plaster of Paris" is still used as a term of distinction. However, as gypsum is very soluble in water, the underground cavities left by its extraction were extremely vulnerable to collapse.
Like living on top of a rotting tooth: subsidence starts far below the surface, but it can destroy your house. Credit: Delavanne Avocats
In previous centuries, a road would occasionally open up to swallow a chariot, or even a whole house would disappear down a sinkhole. In 1778, a catastrophic subsidence in Ménilmontant killed seven. That's why the Montmartre gypsum quarries were dynamited rather than just left as they were. The remaining gypsum caves were to be filled up with concrete.
The official body governing Paris down below is the Inspection Générale des Carrières (IGC), founded in the late 1770s by King Louis XVI. The IGC was tasked with mapping and, where needed, propping up the current and ancient (and sometimes forgotten) mining corridors and galleries hiding beneath Paris.
A delightful hiding place
Also around that time, the dead of Paris were getting in the way of the living. At the end of the 18th century, their final destination consisted of about 200 small cemeteries, scattered throughout the city — all bursting at the seams, so to speak. There was no room to bury the newly dead, and the previously departed were fouling up both the water and air around their respective churchyards.
Something radical had to happen. And it did. From 1785 until 1814, the smaller cemeteries were emptied of their bones, which were transported with full funerary pomp to their final resting place in the ancient limestone quarries at Tombe-Issoire. Three large and modern cemeteries were opened to receive the remains of subsequent generations of Parisians: Montparnasse, Père-Lachaise, and Passy.
The six million dead Parisians in the Catacombs, from all corners of the capital and across many centuries, together form the world's largest necropolis — their now anonymized skulls and bones methodically stacked, occasionally into whimsical patterns. The Catacombs are fashioned into a memorial to the brevity of life. The message above the entrance reads: Arrête! C'est ici l'empire de la Mort. ("Halt! This is the empire of Death.")
That has not stopped the Catacombs, accessible via a side door to a classicist building on the Avenue du Colonel Henri Rol-Tanguy, from making just about every Top 20 list of things to see in Paris.
An underground economy
However, while the Catacombs certainly are the most famous part of the centuries-old network beneath Paris, and in non-pandemic times draw thousands of tourists each day, they constitute just 1.7 km (1 mile) of the 300-km (185-mile) tunneling total.
Subterranean Paris wasn't just used for mining and storing dead people. In the 17th century, Carthusian monks converted the ancient quarries under their monastery into distilleries for the green or yellow liqueur that still carries their name, chartreuse.
Because the mines generally keep a constant cool temperature of around 15° C (60° F), they were also ideal for brewing beer, as happened on a large scale from the end of the 17th century until well into the 20th century. Several caves were dug especially for establishing breweries, and not just because of the ambient temperature: going underground allowed brewers to remain close to their customers without having to pay a premium for real estate up top.
Overview of the Paris Catacombs. Credit: Inspection Générale des Carrières, 1857 / Public domain.
At the end of the 19th century, the underground breweries of the 14th arrondissement alone produced more than a million hectoliters (22 million gallons) per year. One of the most famous of Paris' underground breweries, Dumesnil, stayed in operation until the late 1960s.
In that decade, the network of corridors and galleries south of the Seine, long since abandoned by miners, became the unofficial playground for the young people of Paris. They explored the fantastical world beneath their feet, in some cases via entry points located in their very schools. Fascinated, these cataphiles ("catacomb lovers") pored over old books, explored the subterranean labyrinth, and drew up schematics that were passed around among fellow initiates as reverently as treasure maps.
As Robert Macfarlane writes in Underland, Paris-beneath-their-feet became "a place where people might slip into different identities, assume new ways of being and relating, become fluid and wild in ways that are constrained on the surface."
Some larger caves turned into notorious party zones: a 7-meter-tall gallery below the Val-de-Grâce hospital is widely known as "Salle Z." Over the last few decades, various other locations in subterranean Paris have hosted jazz and rock concerts and rave parties — like no other city, Paris really has an "underground music scene."
Hokusai's Great Wave as the backdrop to the "beach" under Paris. Credit: Reddit
Cataphiles vs. cataphobes
With popularity came increased reports of nuisance and crime — the tunnels provided easy access to telephone cables, which were stolen for the resale value of their copper.
The general public's "discovery" of the underground network led the city of Paris to officially ban access by non-authorized persons. That decree dates back to 1955, but the "underground police" have an understanding with seasoned cataphiles. Their main targets are so-called tourists, who through their lack of knowledge expose themselves to the risk of injury or worse, and degrade their surroundings, often leaving loads of litter in their wake.
The understanding does not extend to the IGC. Unlike in the 19th century, when weak cavities were shored up by purpose-built pillars, the policy now is to inject concrete to fill up endangered spaces — thus progressively blocking off parts of the network. That procedure has also been used to seal off the Catacombs to prevent "infiltration" of the site by cataphiles.
Many subterranean streets have their own names, signs and all. This is the Rue des Bourguignons (Street of the Burgundians) below the Champs des Capucins (Capuchin Field), neither of which exists on the surface. Credit: Jean-François Gornet via Wikimedia
The cataphiles, however, are fighting back. In a game of cat and mouse with the authorities, they are reopening blocked passages and creating chatières ("cat flaps") through which they can squeeze into chambers no longer accessible via other underground corridors.
Catacomb climate control
Alone against the unstoppable tide of concrete, the amateurs of Underground Paris would be helpless. But the fight against climate change may turn the subterranean labyrinths from a liability into an asset — and the City of Paris into an ally.
The UN's 2015 Climate Plan — concluded in Paris, by the way — requires the world to reduce greenhouse gas emissions by 75 percent by 2050. And Paris itself wants to be Europe's greenest city by 2030. More sustainable climate control of our living spaces would be a great help toward both targets. A lot of energy is spent heating houses in winter and cooling them in summer.
This is where the constant temperature of the Parisian tunnels comes in. It's not just good for brewing beer; it's a source of geothermal energy, says Fieldwork, an architectural firm based in Paris. It can be used to temper temperatures, helping to cool houses in summer and warming them in winter.
One catch for the cataphiles: it also works when the underground cavities are filled up with concrete. So perhaps one day, Paris Underground, fully filled up with concrete, will completely fall off the map, reducing the city's formerly real doppelgänger into an air conditioning unit.
Cool in summer, warm in winter: Paris Underground could become Paris A/C. Credit: Fieldwork
Strange Maps #1083
Got a strange map? Let me know at email@example.com.
Meconium contains a wealth of information.
- A new study finds that the contents of an infant's first stool, known as meconium, can predict with a high degree of accuracy whether the child will develop allergies.
- A metabolically diverse meconium, which indicates the initial food source for the gut microbiota, is associated with fewer allergies.
- The research hints at possible early interventions to prevent or treat allergies just after birth.
The prevalence of allergies arising in childhood has increased over the last 50 years, with 30 percent of the human population now having some kind of atopic disease such as eczema, food allergies, or asthma. The cause of this increase is still subject to debate, though it has been associated with a number of factors, including changes to the gut microbiomes of infants.
A new study by Canadian researchers published in Cell Reports Medicine may shed further light on how these allergies develop in children by examining the contents of their first diaper.
The things you do for science
The research team examined the first stool of 100 infants from the CHILD Cohort Study. An infant's first stool is a thick, green, horrid-looking substance called meconium, consisting of various things the infant ingested during the second half of gestation. It offers not only a snapshot of what the infant was exposed to during that time but also a preview of the first food sources for the gut bacteria that colonize the baby's digestive tract.
The content of the meconium was examined and found to contain such varied components as amino acids, lipids, carbohydrates, and myriad other substances.
Comparative summed abundance of metabolites in each metabolic pathway, scaled to the median abundance of each metabolite. Blue shows children without atopy; yellow shows those with an atopic condition. Credit: Petersen et al.
The authors fed this information, along with the identities of the bacteria present and the baby's overall health, into an algorithm to predict which infants would go on to develop allergies within one year. The algorithm got it right 76 percent of the time.
A way to prevent childhood allergies?
Infants whose meconium provided a less diverse metabolic niche for the initial microbes settling in the gut were at the highest risk of developing allergies a year later. Many of these metabolites were associated with the presence or absence of different bacterial groups in the child's digestive system, which play an increasingly appreciated role in our overall health and development. The findings were summarized by senior co-author Dr. Brett Finlay:
"Our analysis revealed that newborns who developed allergic sensitization by one year of age had significantly less 'rich' meconium at birth, compared to those who didn't develop allergic sensitization."
The findings could be used to help understand how allergies form and even how to prevent them. Co-author Dr. Stuart Turvey commented on this possibility:
"We know that children with allergies are at the highest risk of also developing asthma. Now we have an opportunity to identify at-risk infants who could benefit from early interventions before they even begin to show signs and symptoms of allergies or asthma later in life."
A model for early childhood allergies
Credit: Petersen et al.
As shown above, the authors constructed a model of how they believe metabolites and bacterial diversity help prevent allergies. Increased diversity of metabolic products in the meconium encourages the development of "healthy" families of bacteria, like Peptostreptococcaceae, which in turn promote the development of a healthy and diverse gut microbiome. Ultimately, such diversity decreases the likelihood that a child will develop allergies.