While we can see many solar storms coming, some are "stealthy." A new study shows how to detect them.
- "Stealth" solar storms are difficult to detect before they are near Earth.
- The use of various imaging techniques from multiple angles allowed researchers to detect these stealth storms earlier than ever.
- Not seeing one coming could have disastrous effects on our electronic infrastructure.
Solar storms are a collection of disturbances on the sun that influence space weather. They include things like solar flares and coronal mass ejections (CMEs), large releases of plasma into the solar wind. They can affect Earth in a number of ways, such as by increasing the number of particles that hit the Earth's magnetic field, causing auroras, or — in severe cases — by disrupting technology and radio transmissions.
Most of the time, scientists can see storms as they occur on the sun, so information about a likely impact on Earth can be gathered a few days before a storm reaches us. However, in as many as 20 percent of CMEs, there is little to no noticeable activity on the sun to give us an early warning. These "stealth" CMEs can have a huge impact on space weather but have proven difficult to spot until they have nearly arrived.
Luckily, a new study published in Frontiers in Astronomy and Space Sciences reports on new ways to detect so-called stealth solar storms long before they hit Earth.
The benefits of looking at the Sun
Unlike regular CMEs, stealth CMEs do not tend to give typical warning signs like clear dimming or brightening of the surface of the sun. Instead, they seem to form higher than usual in the region of the sun's atmosphere called the corona. Unfortunately, watching for changes in the corona does not always give scientists the information they need to predict where a mass of plasma is moving.
In this study, the researchers took advantage of knowing the approximate origins of four stealth CMEs, which had been determined from data collected on Earth and by the STEREO spacecraft, which viewed the sun from a different angle. The four CMEs differed in angle and intensity and occurred at different points in the solar cycle.
By using different imaging processes, subtle shifts in the upper corona were identified in each of the four cases examined. Most of the events also originated near areas with particularly strong magnetic fields.
The authors suggest that the small brightening and dimming effects they observed could be used to detect these CMEs in the future using similar methods. While they admit that the study does not provide a way to detect these CMEs before they form, they conclude that "identifying the source region of a stealth CME represents a first step toward providing more reliable predictions."
A bad day for Earth
Solar storms are not merely of academic interest. Large storms have occurred before, and the damage they can cause is potentially devastating. A strong solar storm in 1989 caused blackouts in Quebec and disrupted broadcasts of Radio Free Europe. That storm has nothing on the "Carrington Event" of 1859, however.
That solar storm was incredibly powerful, producing auroras visible in places like Queensland, Australia, and the Caribbean. The auroras over New England were so bright that residents could read newspapers by their light. Telegraph systems were fried by the huge amount of electromagnetic energy dumped into the Earth's magnetosphere, occasionally starting fires as they spontaneously sparked. Some telegraph operators reported being able to operate their machines with the power supplies disconnected.
A storm estimated to be just as powerful as the Carrington Event occurred in 2012, but the plasma it ejected narrowly missed Earth. According to a study by the National Academy of Sciences, the total cost of such an event to the United States today could be more than two trillion dollars. It would also cause damage that could take years to fully repair. It goes without saying that having large portions of our electric systems and technology fried with little time to prepare might also make things unpleasant for a lot of people.
Smaller storms hit Earth about once every three years, often damaging systems that use electricity. Larger events are rarer, but not as rare as we would hope. A study from a few years ago calculated that the odds of a Carrington-level event occurring are about 12 percent per decade.
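Those per-decade odds compound quickly. A quick sketch of the arithmetic, treating the quoted 12 percent per decade as independent decade-by-decade odds (a simplifying assumption for illustration, not a claim from the study):

```python
# Cumulative chance of at least one Carrington-level storm, assuming
# independent 12%-per-decade odds (an illustrative simplification).
p_per_decade = 0.12

for decades in (1, 3, 5, 10):
    p_at_least_one = 1 - (1 - p_per_decade) ** decades
    print(f"{decades * 10:3d} years: {p_at_least_one:.0%}")
# 10 years: 12%, 30 years: 32%, 50 years: 47%, 100 years: 72%
```

Under that assumption, the chance of at least one Carrington-level storm over a 50-year span approaches a coin flip.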
May the odds be in our favor
With odds and consequences like that, the ability to see a "stealth" solar storm coming might prove to be one of the most important tools humanity ever discovered.
Given enough warning, precautions can be taken to help minimize the damage to electronics from a large solar storm. For example, satellites can be moved out of harm's way, power grids can be primed to avoid being overloaded, and transformers can be taken offline to keep them from being destroyed.
If we fail to see the next Carrington Event coming, it might be a while before you can read the article we'll write about it.
A new government report describes 144 sightings of unidentified aerial phenomena.
On June 25, 2021, the Office of the Director of National Intelligence released a much-anticipated report on UFOs to Congress.
The military has rebranded unidentified flying objects as unidentified aerial phenomena – UAPs – in part to avoid the stigma that has been attached to claims of aliens visiting the Earth since the Roswell incident in 1947. The report presents no convincing evidence that alien spacecraft have been spotted, but some of the data defy easy interpretation.
I'm a professor of astronomy who has written extensively on the search for life in the universe. I also teach a free online class on astrobiology. I do not believe that the new government report or any other sightings of UFOs in the past are proof of aliens visiting Earth. But the report is important because it opens the door for a serious look at UFOs. Specifically, it encourages the U.S. government to collect better data on UFOs, and I think the release of the report increases the chances that scientists will try to interpret that data. Historically, UFOs have felt off limits to mainstream science, but perhaps no more.
Three videos from the U.S. military sparked a recent surge in interest in UFOs.
What's in the UFO report?
The No. 1 thing the report focuses on is the lack of high-quality data. Here are the highlights from the slender nine-page report, covering a total of 144 UAP sightings from U.S. government sources between 2004 and 2021:
- “Limited data and inconsistent reporting are key challenges to evaluating UAP."
- Some observations “could be the result of sensor errors, spoofing, or observer misperception."
- “UAP clearly pose a safety of flight issue and may pose a challenge to U.S. national security."
- Of the 144 sightings, the task force was “able to identify one reported UAP with high confidence. In that case, we identified the object as a large, deflating balloon. The others remain unexplained."
- “Some UAP may be technologies deployed by China, Russia, another nation, or non-governmental entity."
UFOs are taboo among scientists
UFO means unidentified flying object. Nothing more, nothing less. You'd think scientists would enjoy the challenge of solving this puzzle. Instead, UFOs have been taboo for academic scientists to investigate, and so unexplained reports have not received the scrutiny they deserve.
One reason is that most scientists think there is less to most reports than meets the eye, and the few who have dug deeply have mostly debunked the phenomenon. Over half of sightings can be attributed to meteors, fireballs and the planet Venus.
Another reason for the scientific hesitance is that UFOs have been co-opted by popular culture. They are part of a landscape of conspiracy theories that includes accounts of abduction by aliens and crop circles. Scientists worry about their professional reputations, and the association of UFOs with these supernatural stories causes most researchers to avoid the topic.
But some scientists have looked. In 1968, Edward U. Condon at the University of Colorado published the first major academic study of UFO sightings. The Condon Report put a damper on further research when it found that “nothing has come from the study of UFOs in the past 21 years that has added to scientific knowledge."
However, a review in 1998 by a panel led by Peter Sturrock, a professor of applied physics at Stanford University, concluded that some sightings are accompanied by physical evidence that deserves scientific study. Sturrock also surveyed professional astronomers and found that nearly half thought UFOs were worthy of scientific study, with higher interest among younger and more well-informed astronomers.
If astronomers are intrigued by UFOs – and believe some cases deserve study with academic rigor – what's holding them back? A history of mistrust between ufologists and scientists hasn't helped. And while UFO research has employed some of the tools of the scientific method, it has not had the core of skeptical, evidence-based reasoning that demarcates science from pseudoscience.
A search of 90,000 recent and current grants awarded by the National Science Foundation finds none addressing UFOs or related phenomena. I've served on review panels for 35 years, and can imagine the reaction if such a proposal came up for peer review: raised eyebrows and a quick vote not to fund.
A decadeslong search for aliens
While the scientific community has almost entirely avoided engaging with UFOs, a much more mainstream search for intelligent aliens and their technology has been going on for decades.
The search is motivated by the fact that astronomers have, to date, discovered over 4,400 planets orbiting other stars. Called exoplanets, some are close to the Earth's mass and at just the right distance from their stars to potentially have water on their surfaces – meaning they might be habitable.
Astronomers estimate that there are 300 million habitable worlds in the Milky Way galaxy alone, and each one is a potential opportunity for life to develop and for intelligence and technology to emerge. Indeed, most astronomers think it very unlikely that humans are the only or the first advanced civilization.
This confidence has fueled an active search for extraterrestrial intelligence, known as SETI. It has been unsuccessful so far. As a result, researchers have recast the question “Are we alone?" to “Where are the aliens?" The absence of evidence for intelligent aliens is called the Fermi paradox. First articulated by the physicist Enrico Fermi, it's a paradox because advanced civilizations should be spread throughout the galaxy, yet we see no sign of their existence.
The SETI activity has not been immune from scientists' criticism. It was starved of federal funding for decades and recently has gotten most of its support from private sources. However, in 2020, NASA resumed funding for SETI, and the new NASA administrator wants researchers to pursue the topic of UFOs.
In this context, the intelligence report is welcome. The report draws few concrete conclusions about UFOs and avoids any reference to aliens or extraterrestrial spacecraft. However, it notes the importance of destigmatizing UFOs so that more pilots report what they see. It also sets a goal of moving from anecdotal observations to standardized and scientific data collection. Time will tell if this is enough to draw scientists into the effort, but the transparency to publish the report at all reverses a long history of secrecy surrounding U.S. government reports on UFOs.
I don't see any convincing evidence of alien spacecraft, but as a curious scientist, I hope the subset of UFO sightings that are truly unexplained gets closer study. Scientists are unlikely to weigh in if their skepticism generates attacks from “true believers" or they get ostracized by their colleagues. Meanwhile, the truth is still out there.
This article has been updated to clarify that the report was produced by the Office of the Director of National Intelligence.
Using image analysis tools developed for astronomy, researchers are predicting cancer therapy responses.
This article was originally published on our sister site, Freethink.
Can a system for charting the stars be used to treat cancer? (And no, I don't mean astrology.) Researchers at Johns Hopkins think so. Using a sky-mapping algorithm developed by astronomers, the scientists have found a way to predict whether cancer will respond to immunotherapy.
"This platform has the potential to transform how oncologists will deliver cancer immunotherapy," Drew Pardoll, M.D., Ph.D., director of the Bloomberg-Kimmel Institute for Cancer Immunotherapy, said in a Johns Hopkins' release.
Predicting the future is life or death: Immunotherapy harnesses the body's own immune system to attack cancerous tumor cells. But tumors have a multitude of nasty tricks to evade our immune system.
Immunotherapy needs to get around these tumor defenses, allowing our own powerful weapons to fight back.
An immunotherapy treatment for melanoma can block a protein called PD-1, helping the immune system spot and destroy cancer cells. But only some melanoma patients will respond well to anti-PD-1 drugs, and time is of the essence with an aggressive cancer like melanoma.
"The ability to predict response or resistance is critical to choosing the best treatments for each patient's cancer," the researchers state.
Lighting the way: To build their prediction model, the researchers analyzed melanoma biopsies — about 127,400 mosaic images comprising a million cells — and used immunofluorescence to highlight proteins in the tissue.
Immunofluorescence works via antibodies that glom on to certain proteins and glow, revealing their targets.
Using their tags, the researchers were able to illuminate the tumor's microenvironment by examining the immune cells in and around the melanoma. From there, they located six biomarkers that, taken together, were "highly predictive" of a cancer's response to anti-PD-1 therapy.
"The data outputs were linked to patient outcomes, informing in a clinically relevant way how cancer evades the immune system," the team wrote in their Science paper.
The fault in melanoma's stars: The key to this cancer-prediction algorithm was imaging techniques originally developed for astronomy.
The image analysis tools were created for the Sloan Digital Sky Survey, a map of the universe spearheaded by Alexander Szalay, professor of physics, astronomy, and computer science.
"The sky survey 'stitched' together millions of telescopic images of billions of celestial objects, each expressing distinct signatures — just like the different fluorescent tags on the antibodies used to stain the tumor biopsies," Johns Hopkins explains.
The algorithm, called AstroPath, is already being applied to lung cancer, and the team hopes it will lead to therapeutic guidance for other cancers as well.
If you truly want to understand modern astrophysics, knowing how to read this graph is essential.
- The invention of spectroscopy and photography converted astronomy into astrophysics.
- With these new tools, astrophysicists gathered untold amounts of data on stars.
- When these stars were plotted on a graph, amazing patterns emerged.
Like people, stars are born, live, and then die. But how do scientists know that stars are born and die? Where did that knowledge come from? After all, for most of human history, many people thought that stars were eternal and unchanging. What was it that set astronomers on the path to seeing stars as something bound by time and change? The answer comes in the form of a simple and beautiful diagram first made 100 or so years ago.
Astronomy becomes astrophysics
By the end of the 19th century, new tools were being added to telescopes that turned astronomy into astrophysics. The most important of these was the spectrograph, which let astronomers see how much energy a star emitted at different wavelengths (or colors). It's also what allowed astrophysicists to conclude definitively that the sun is a star.
Photography also revolutionized the field by providing a permanent record of observations so that they could be compared and correlated with other photographed observations. Using the spectrograph and photographic plates, astrophysicists began to amass a huge storehouse of data on stars.
At observatories in Europe and the U.S., the spectra of hundreds of thousands of stars were taken. Later these spectra were sorted into different classification "bins" based on patterns found in the way that stars emitted their energy at different wavelengths. (It's worth noting that this sorting work was both challenging and exhausting and, in many cases, was done by bright young women who were not allowed to be formal astronomy students.) After the work was done, the classification bins for the spectra eventually were recognized to be associated with the star's surface temperature.
Photographic data also allowed the stars to be sorted in another way, in this case, based on their brightness, which was a measure of the total energy they radiated into space.
What all this means is that by the first years of the 20th century, astronomers had something new and tremendously valuable: a big, hard-won treasure trove of stellar data giving each star's temperature and brightness. Now the question was what to do with it.
The Hertzsprung-Russell diagram
The simple answer to this kind of question in science was the same then as it is now: make a plot and see what happens.
Each of about 100,000 stars was placed on a two-dimensional graph, with temperature on the horizontal axis and brightness on the vertical axis. That's basically what Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell each did, independently of each other, to create what is now called the Hertzsprung-Russell (HR) diagram.
So, what does "interesting" in this kind of plot mean? Well, I can tell you what would not be interesting. If stars just appeared randomly on the plot — as if someone had taken a shotgun to it — that would not be interesting. It would mean that there was no correlation between brightness and temperature.
Thankfully, a shotgun pattern is definitely not what astronomers saw in the HR diagram. Instead, most of the stars collected on a thick diagonal line stretching from one corner of the plot to the other. Astronomers called this line the Main Sequence. There were also other places, outside the Main Sequence, where the stars collected. What astronomers were seeing in their data was the unmistakable indication of a hidden order.
The patterns in the HR diagram told astrophysicists that something was going on inside stars. The Main Sequence, for example, told them that a strong link must exist between the energy stars pumped into space and how hot their surfaces got. That link implied that there was hidden physics tying stellar energy output and stellar surface temperature together in a powerful chain of cause and effect. If they could understand that chain, they could answer the 2,500-year-old holy grail of astronomy questions: What makes stars shine?
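Part of that chain was already known: a glowing sphere's total power output follows the Stefan-Boltzmann law, L = 4πR²σT⁴. A minimal sketch, using standard solar values rather than figures from this article, shows just how strongly luminosity depends on surface temperature:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8    # solar radius, m
T_SUN = 5772.0     # solar effective surface temperature, K

# Stefan-Boltzmann law: a star's luminosity depends on its surface
# area and the fourth power of its surface temperature.
def luminosity(radius_m, temp_k):
    return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4

L_sun = luminosity(R_SUN, T_SUN)
print(f"L_sun ≈ {L_sun:.2e} W")   # close to the measured 3.8e26 W

# Same radius, twice the temperature -> 2^4 = 16 times the luminosity.
ratio = luminosity(R_SUN, 2 * T_SUN) / L_sun
```

That fourth-power dependence is part of why the brightness axis of an HR diagram spans so many orders of magnitude.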
It would take another 50 years after the first HR diagrams appeared before astrophysicists could really see how the Main Sequence and other patterns were a direct consequence of stellar physics in the form of stellar aging over time. For that, they would need the invention of nuclear physics and a theory of thermonuclear fusion. We'll take up that story in another post.
For today, it's enough to marvel at how the simple act of throwing a bunch of stars onto a plot unveiled a hidden pattern that could not have been seen otherwise. That pattern was a clue, a hint of which direction to face, eventually spurring scientists forward to unlock the mystery of the stars.
A new artificial intelligence method removes the effect of gravity on cosmic images, showing the real shapes of distant galaxies.
A new AI-based tool developed by Japanese astronomers promises to remove unwanted noise from data to generate a cleaner view of the true shapes of galaxies. The scientists successfully tested this approach on real data from Japan's Subaru Telescope and found that the distribution of mass produced by their technique corresponded to established models.
The scientists from the National Astronomical Observatory of Japan (NAOJ) in Tokyo believe their method could be very useful in the analysis of big data from large astronomy surveys. These surveys help us study the structure of the universe by focusing on gravitational lensing patterns.
The trouble with gravitational lensing
Gravitational lensing refers to the phenomenon whereby massive space objects like a cluster of galaxies can distort or bend the light that comes from objects in their background. In other words, images of distant space bodies can be made to look strange by the gravitational pull of objects in the foreground.
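For a point-like lens, the strength of the bending is set by the Einstein radius, θ_E = √(4GM/c² · D_ls / (D_l · D_s)). Here is a rough sketch of the scale involved; the mass and distances below are illustrative assumptions, not values from the study:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
GPC = 3.086e25     # one gigaparsec in metres

# Assumed numbers for scale: a 10^12 solar-mass lens galaxy halfway
# to a source twice as far away.
M = 1e12 * M_SUN
d_lens, d_source = 1.0 * GPC, 2.0 * GPC
d_ls = d_source - d_lens   # crude shortcut; proper cosmology ignored

# Einstein radius for a point-mass lens.
theta_e = math.sqrt(4 * G * M / c**2 * d_ls / (d_lens * d_source))
arcsec = math.degrees(theta_e) * 3600
print(f"Einstein radius ≈ {arcsec:.1f} arcseconds")
```

A galaxy-scale lens bends background light by roughly an arcsecond or two, which is why these distortions are subtle enough to hide inside survey images.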
One example of this is the "Eye of Horus" galaxy system, discovered by NAOJ astronomers in 2016. The striking images of the system, named in honor of the sacred eye of an ancient Egyptian sky god, are the byproduct of two distant galaxies being lensed by a closer galaxy.
The issue with gravitational lensing for astronomers is that it can make it hard to distinguish galaxies whose images are distorted by gravity from galaxies that are intrinsically distorted. This so-called "shape noise" undermines confidence in research into the universe's large-scale structure.
Eye of Horus galaxy system. The yellow object at the center represents a galaxy about 7 billion light-years away that bends the light from two galaxies in the background that are even farther away. Credit: NAOJ
A new approach
The new study, published in the Monthly Notices of the Royal Astronomical Society, shows how the research team was able to counteract shape noise by utilizing ATERUI II, the world's most powerful supercomputer dedicated to astronomy. Starting from real data from the Subaru Telescope, the scientists had the computer generate 25,000 mock galaxy catalogs. They added realistic noise to these data sets, then trained their artificial intelligence network, through deep learning, to pick the correct data out of the noise.
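The noisy/clean training pairs can be pictured with a toy example. The sketch below builds a smooth mock "convergence map," adds Gaussian shape noise, and denoises it with a simple mean filter as a stand-in; the real pipeline trains a GAN on thousands of such pairs, and every number here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "convergence map": a smooth 64x64 blob standing in for a
# lensing mass map (purely illustrative, not Subaru data).
n = 64
yy, xx = np.mgrid[0:n, 0:n]
clean = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 8.0 ** 2))

# Shape noise: per-pixel Gaussian noise, as added to the mock catalogs.
noisy = clean + rng.normal(0.0, 0.5, (n, n))

# Stand-in denoiser: a simple mean filter (the paper trains a GAN).
def mean_filter(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

denoised = mean_filter(noisy)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy > mse_denoised)  # → True: denoising recovers the signal
```

Even this crude filter recovers the underlying map far better than the raw noisy image, and a trained network can do so while preserving the fine structure a mean filter blurs away.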
"This research shows the benefits of combining different types of research: observations, simulations, and AI data analysis," shared team's leader Masato Shirasaki. He added, "In this era of big data, we need to step across traditional boundaries between specialties and use all available tools to understand the data. If we can do this, it will open new fields in astronomy and other sciences."
How the AI works
Employing a generative adversarial network (GAN), the Japanese astronomers' AI learned to find details that previously could not be seen, explained the observatory's press release. The GAN developed by the scientists uses two networks: one generates noise-free images of a lens map, while the other compares them to real noise-free lens maps and tries to flag the generated images as fakes. By running this system through a large number of noisy and noise-free map pairs, both networks are trained. The first one makes lens maps that are closer to the real ones, while the other network gets better at identifying fakes.
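The adversarial loop can be sketched at toy scale. Below, a two-parameter "generator" learns to mimic a target distribution while a logistic "discriminator" tries to tell real samples from generated ones; this is a minimal illustration of the GAN training dynamic, not the NAOJ architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: the distribution the generator must learn to imitate.
def real_batch(n):
    return rng.normal(2.0, 0.5, n)

w, b = 1.0, 0.0   # generator G(z) = w*z + b
a, c = 0.1, 0.0   # discriminator D(x) = sigmoid(a*x + c)
lr = 0.02

for step in range(5000):
    x = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    g = w * z + b                       # generated ("fake") samples

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * g + c)
    a += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(a * g + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

samples = w * rng.normal(0.0, 1.0, 10000) + b
print(round(samples.mean(), 1))  # drifts toward the real mean of 2.0
```

Each update nudges the generator's output toward the region the discriminator labels as real, which is how a lens-map generator can learn to produce maps its adversary cannot distinguish from genuine noise-free ones.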
Diagram of the AI (generative adversarial network) used in the study. Credit: NAOJ
To further test their method, the scientists turned their AI's attention to real data from 21 square degrees of the sky, showing that the distribution of foreground mass is in accordance with what is predicted by the standard cosmological model.