
Ask Ethan: How do scientists color the Universe?

When we see pictures from Hubble or JWST, they show the Universe in a series of brilliant colors. But what do those colors really tell us?
At left is the iconic view of the Pillars of Creation as seen by Hubble. Beginning in 2022, JWST (at right) has viewed the pillars as well, revealing details such as newly forming stars, faint protostars, and cool gas that are invisible to even Hubble's impressive capabilities.
Credits: NASA, ESA, CSA, STScI; Joseph DePasquale (STScI), Anton M. Koekemoer (STScI), Alyssa Pagan (STScI)
Key Takeaways
  • When we look at astronomical images of the Universe, whether from Hubble, JWST, or any other observatory, they typically show a broad array of colorful features.
  • But these color-coded images don’t necessarily show us the same things human eyes would see; instead, they’re optimized to encode important information in an easy-to-process visual format.
  • Here’s how scientists “colorize” the Universe in a variety of ways, and what those colorizations tell us about what’s truly present and detectable inside these objects.

For just a moment, I want you to close your eyes and think about the most famous, most spectacular images of the Universe that you’ve ever seen. Did you picture planets or moons within our Solar System? Perhaps you thought of nebulous regions of gas, where new stars are forming inside. Maybe a snapshot of a recently deceased star, such as a planetary nebula or a supernova remnant, is what best captured your imagination. Alternatively, maybe you thought about glittering collections of stars or even entire galaxies, or — my personal favorite — a deep-field view of the Universe, complete with galaxies of all different sizes, shapes, colors, and brightnesses.

These full-color images aren’t necessarily what your limited human eyes would see, but are instead color-coded in such a way that they reveal a maximal amount of information about these objects based on the observations that were acquired. Why do scientists and visual artists make the choices that they do? That’s what Elizabeth Belshaw wants to know, writing in to ask:

“When we see stars or galaxies from Hubble and Webb, they are in color. Are they really black, void of color, or are colors assigned? Do the colors say, for example, blue for oxygen or red for hydrogen, etc.? What do the colors mean? Who assigns the colors?”

The goal, remember, is to maximize the amount of information encoded in a way that humans, both amateur and professional, can easily digest. With that in mind, let’s start with the basics: how we actually see in the first place.

The three types of cone cells found in human eyes, S, M, and L, shown with the wavelength range that they respond to: short, medium, and long wavelengths. Some humans lack one type of cone, rendering them color blind, while a few people have four types of cones and can see more colors than the rest of us: tetrachromats. The greatest sensitivity of human eyes to the intensity of light occurs between 500 and 600 nanometers, with the response dropping off rapidly at the most extreme red and violet wavelengths.
Credit: BenRG/Wikimedia Commons

As human beings, our eyes are among the most specialized sensory organs that a living being can possess. At a basic level, our eyes consist of:

  • a pupil, which lets light into the eye,
  • a lens, which focuses the incoming light,
  • a retina, which acts like a screen for that light to land on,
  • rod cells, which are sensitive to all types of visible light,
  • cone cells, which (in most humans) come in three varieties, and are color-sensitive,
  • and our optic nerve, which takes those signals up to our brains,

where our minds then synthesize all of that incoming light into what we commonly perceive as an image. We can wear contacts or corrective lenses (or have laser eye surgery) to augment our own natural lenses, allowing us to better focus the light, but ultimately it comes down to the rod and cone cells to detect that light once it strikes our retina.

Rod cells are sensitive to overall brightness, and work even in extremely faint light, but are relatively insensitive to color. They are why, if you wake during a moonless night and don’t turn any artificial lights on, you can indeed still see, but your vision will be purely monochromatic: devoid of color sensitivity. During brighter conditions, your rod cells saturate and your cone cells take over, enabling us to distinguish between reds, greens, and blues, as the different types of cone cells are sensitive to light of different wavelengths. When those signals (from both rod and cone cells) go to our brains, we interpret those signals as color and brightness, which enables us to reconstruct an image.

This 1888 image of the Andromeda Galaxy, by Isaac Roberts, is the first astronomical photograph ever taken of another galaxy. It was taken without any photometric filters, and hence all the light of different wavelengths is summed together. No star that’s part of the Andromeda galaxy has moved by a perceptible amount since 1888, a remarkable demonstration of how far away other galaxies truly are. Although Andromeda is a naked-eye object under even modestly dark skies, it was not recorded until the year 964, and was not shown to be extragalactic until 1923.
Credit: Isaac Roberts

The earliest astronomical images, like the above image of the Andromeda galaxy (from the 1880s), weren’t sensitive to color at all; they simply gathered all of the light that came from an object and focused it onto a photographic plate, where it made a (monochromatic) image. However, humans swiftly figured out a fascinating trick that enabled them to create color images: to filter out certain components of light.

The idea of a filter is relatively simple: it allows light in a certain wavelength window (i.e., of a certain set of colors) to pass through and be recorded, while all other wavelengths of light are excluded. By setting up filters, for example, for:

  • red light, where the long wavelengths that human vision is sensitive to pass through,
  • green light, where the intermediate wavelengths that human vision is sensitive to pass through,
  • and blue light, where the short wavelengths that human vision is sensitive to pass through,

you can then create three separate images of the same object. When you then project the red-filtered image with red light, the green-filtered image with green light, and the blue-filtered image with blue light, the fact that color is additive allows your brain to interpret a full-color image.
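
As a concrete illustration of that additive trick, here’s a minimal sketch in Python with NumPy (the frame names are hypothetical stand-ins for real filtered exposures): three grayscale images, one per filter, are normalized and stacked into the red, green, and blue channels of a single color image. Real astronomical pipelines add alignment, calibration, and careful brightness stretching on top of this, but the core bookkeeping is just this simple.

```python
import numpy as np

def rgb_composite(red_frame, green_frame, blue_frame):
    """Combine three filtered grayscale exposures into one RGB image.

    Each input is a 2D array of brightness values recorded through a
    single filter; all three must share the same shape (i.e., already
    be aligned to one another).
    """
    def normalize(frame):
        # Rescale to the 0-1 range so no channel dominates the final
        # composite purely because of its raw detector counts.
        frame = frame.astype(float)
        lo, hi = frame.min(), frame.max()
        return (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)

    # Color is additive: the long-wavelength exposure drives the red
    # channel, the intermediate one green, and the short one blue.
    return np.dstack([normalize(red_frame),
                      normalize(green_frame),
                      normalize(blue_frame)])

# Example with random stand-in data in place of real telescope frames:
rng = np.random.default_rng(0)
r, g, b = (rng.random((64, 64)) for _ in range(3))
image = rgb_composite(r, g, b)  # shape (64, 64, 3), ready to display
```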

This photograph, from 1911, demonstrates the technique of additive color mixing as applied to photography. Three color filters, blue, green, and red, were applied to the subject, producing the three photographs at right. When the data from the three are added together in the proper proportions, a color image is produced. Our brains, with three different types of cones sending signals from our eyes, do this automatically.
Credit: Sergei Mikhailovich Prokudin-Gorskii

This technique can be applied to paintings, photographs, video projectors, or the LED lights on a modern screen: the science is the same in all cases. However, this is a very restrictive setup, because it’s focused extraordinarily narrowly on constructing images that represent what human eyes see in the world, for humans to subsequently digest. We can certainly do this in astronomy, as we’ve developed what are known as photometric filters: filters that do something very similar with the images we’re capable of acquiring of the Universe. Instead of the astrophotography techniques of the 1880s, where we just gathered all of the light from a source across all wavelengths, we can apply filters that restrict the incoming light to a specific wavelength range.

We can do this multiple times across different wavelength ranges, gathering data about the light emitted by various astronomical objects in each filter that we choose. The human eye may only have three types of cone (in addition to just one type of rod) within it, but we are sensitive to millions of colors and tremendous variations in brightness based on the relative response of the various types of cells within our eyes. We often choose to represent astronomical data — particularly optical data acquired by telescopes that are optimized for visible light observations — in ways that very closely approximate “true color,” or what human eyes would see if they, rather than telescopes, were acquiring this data.

This full-scale view of the Andromeda Galaxy, M31, showcases its star-forming regions lining its spiral arms, its dust lanes, and its central, gas-poor region. Unlike the Milky Way, Andromeda lacks a prominent central bar. This image is a fairly close approximation of what human eyes would see if they could make out these details in Andromeda.
Credit: Adam Evans/flickr

This isn’t the only way to go about coloring the Universe, however, and that’s an extremely good thing. If you’re an astronomer, you have to recognize that sometimes, you want to focus on very specific features, not the standard red-green-blue colors of objects overall. For example, you might want to home in on the presence of very hot hydrogen gas, or doubly ionized oxygen gas, or some other element or molecule at a certain temperature or in a certain ionization state. Other times, you want to draw out features that might not fall into the visible part of the spectrum at all, but might be a gamma-ray, X-ray, ultraviolet, infrared, or radio signature instead.

In certain cases, you might want a combination of those effects: you might want to use X-ray observations to highlight the presence and abundance of different elements that can be found across a region of the sky, such as finding out how various types of atoms are distributed across an exploded star’s remnant. What’s important to recognize is that we aren’t wedded to this overly restrictive idea of “true color” or “what human eyes can see” when it comes to the objects in the Universe. Instead, we can tweak our color-and-brightness-based representations of any set of astronomical data so that it best displays the types of features we’re seeking to highlight.
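
As a sketch of how an element map like this is built (the energy bands below are illustrative stand-ins, not Chandra’s actual calibrated line ranges, and the event list is hypothetical), you keep only the detected X-ray photons whose energies fall within the narrow band where a given element emits, then histogram their sky positions into an image:

```python
import numpy as np

# Illustrative (not official) energy bands, in keV, bracketing the
# characteristic emission lines of elements in a supernova remnant:
BANDS = {"silicon": (1.7, 2.0), "sulfur": (2.3, 2.6),
         "calcium": (3.6, 4.0), "iron": (6.2, 6.9)}

def element_map(events, band, shape=(256, 256)):
    """Image only the photons whose energies fall within one band.

    `events` is an (N, 3) array of detected photons, with columns of
    x position, y position, and energy in keV; the result maps where
    on the sky one element's characteristic X-rays originate.
    """
    x, y, energy = events[:, 0], events[:, 1], events[:, 2]
    in_band = (energy >= band[0]) & (energy <= band[1])
    image, _, _ = np.histogram2d(y[in_band], x[in_band], bins=shape)
    return image

# One map per element, each of which can then be assigned its own
# color and composited, exactly as with red/green/blue filters:
# maps = {name: element_map(events, band) for name, band in BANDS.items()}
```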

This image from NASA’s Chandra X-ray Observatory shows the location of different elements in the Cassiopeia A supernova remnant including silicon (red), sulfur (yellow), calcium (green), and iron (purple), as well as the overlay of all such elements (top). A supernova remnant expels heavy elements created in the explosion back into the Universe, and each element produces X-rays within a narrow energy range, allowing maps of their locations to be constructed.
Credit: NASA/CXC/SAO

In other words, there’s no one “universal” color scheme that we use. This is true even for individual telescopes, like Hubble or JWST. While human eyes may only possess four overall types of light receptor (one rod type and three cone types), enabling us to construct a full-color, brightness-relative image of whatever it is we’re looking at, that doesn’t mean we’re restricted to “seeing only what our eyes can see.”

Instead, what astronomers do is leverage these human capabilities by assigning colors to certain detectable features within an image. For example, with the filters that are chosen to view an object, JWST often doesn’t even acquire any visible light at all; in its usual use case, it gathers only near-infrared and/or mid-infrared light. Across its NIRCam (near-infrared camera) and MIRI (mid-infrared instrument), there are more than 20 different filters to choose from when deciding what type of light we should be collecting.

Frequently, we collect light through far more than just three filters, even though three is the number of unique cone types we have in our eyes. With JWST, Hubble, or any other observatory, we have a choice for how we assign colors to the data (including data from different filters), and making different choices can result in extraordinarily different views of even the very same object.

An excerpt from pages 176-177 of Infinite Cosmos, showcasing the Southern Ring Nebula. These two images, despite their differing appearances, are both JWST images of the same object. The main difference is that different NIRCam and MIRI filters, examining different wavelengths of light, are chosen in assembling these images, highlighting very different phenomena at play inside the nebula.
Credit: National Geographic Books

JWST is particularly impressive in this regard. Across all observing modes, there are up to 29 different types of image that can be produced with JWST, depending on which instruments and which filters are used. However, there are some general rules that scientists (and amateurs) who perform the task of image processing generally follow, in order to make certain interpretations of the data easier and more accessible.

Just as human eyes assign:

  • bluer colors to shorter wavelengths of light,
  • green/yellow colors to intermediate wavelengths of light,
  • and redder colors to longer wavelengths of light,

so do most data visualizations. Blue light corresponds, in general, to higher-energy phenomena, while red light corresponds to lower-energy phenomena. This is typically how colors for observatories like JWST and Hubble are assigned: with shorter-wavelength light assigned blue colors, medium-wavelength light assigned greens and/or yellows, and longer-wavelength light assigned red colors.

It’s by taking these assigned colors, using each one to bring out (or highlight) the features of the filter it’s assigned to, and then displaying them all together, at once, that we wind up with the final images we’re so used to seeing.
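
Here’s a small sketch of that chromatic-ordering convention in Python (the filter names and wavelengths are placeholders for illustration, not an official JWST configuration): the shortest wavelength in the set maps to pure blue, the longest to pure red, and everything in between is interpolated through green.

```python
import numpy as np

# Hypothetical filter set: names and representative wavelengths in
# microns, standing in for whichever filters an observation used.
FILTERS = {"F090W": 0.90, "F200W": 2.00, "F356W": 3.56, "F770W": 7.70}

def chromatic_colors(wavelengths):
    """Assign an (R, G, B) display color to each wavelength so the
    shortest maps to pure blue, the longest to pure red, and the
    intermediate wavelengths pass through green along the way."""
    w = np.array(sorted(wavelengths))
    t = (w - w.min()) / (w.max() - w.min())  # 0 = shortest, 1 = longest
    red = np.clip(2 * t - 1, 0, 1)    # ramps up over the long half
    green = 1 - np.abs(2 * t - 1)     # peaks in the middle
    blue = np.clip(1 - 2 * t, 0, 1)   # ramps down over the short half
    return {wl: (float(r), float(g), float(b))
            for wl, r, g, b in zip(w, red, green, blue)}

colors = chromatic_colors(FILTERS.values())
for name, wl in sorted(FILTERS.items(), key=lambda kv: kv[1]):
    r, g, b = colors[wl]
    print(f"{name} ({wl:4.2f} um) -> R={r:.2f} G={g:.2f} B={b:.2f}")
```

With more than three filters, each filter’s frame is simply multiplied by its assigned color and the results are summed: the same additive mixing as before, just with more than three channels.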

This 10-frame animation shows each individual filter used to view the same region of the Large Magellanic Cloud (LMC), with assigned colors used to bring out various features available to MIRI’s unique view. The final frame showcases the full composite in RGB color; despite drawing on more filters than human eyes have cone types, the data provides scientific information well beyond what our eyes alone can comprehend.
Credit: Team MIRI; processing by E. Siegel

There isn’t a “right way” or a “wrong way” to do this colorization, of course; it’s all a matter of how you want the viewer to visually interpret what you’re attempting to show them. The pros who work on JWST have their own methods that they use, but even among different researchers using JWST data, there’s a substantial amount of freedom to make choices about what aspects to show or highlight. Sometimes, it’s more useful to show separate images that were acquired over different wavelength ranges, as the features they reveal often correspond to very different astrophysical objects and processes.

The Andromeda galaxy, the closest large galaxy to Earth, displays a tremendous variety of details depending on which wavelength or set of wavelengths of light it’s viewed in. Even the optical view, at top left, is a composite of numerous different filters. Shown together, they reveal an incredible set of phenomena present in this spiral galaxy. Multiwavelength astronomy can provide unexpected views of almost any astronomical object or phenomenon, revealing details in one wavelength that are wholly invisible in another.
Credit: infrared: ESA/Herschel/PACS/SPIRE/J. Fritz, U. Gent; X-ray: ESA/XMM-Newton/EPIC/W. Pietsch, MPE; optical: R. Gendler

Other times, it’s more profound to show features spanning a broad array of wavelengths all together, as they can paint a more holistic picture of the region of space that you’re examining in detail. Features such as black holes, radio jets, heated dust, magnetically-carved filaments, and much, much more are all at play toward our galactic center, and a single, multiwavelength view can showcase all of those features at once.

This composite image of the galactic center shows features ranging from ultra-high energy X-rays down to very low energy radio waves, as well as many wavelengths in between. By focusing on a variety of wavelengths at the highest resolutions and greatest sensitivities possible, we can map out, understand, and discover our Universe as never before.
Credit: NASA/JPL-Caltech/ESA/CXC/STScI

Sometimes, there are features that appear only at very long wavelengths, such as the radio-wavelength jets from an active galaxy that spill out into the circumgalactic medium, and you’ll want to show them alongside the host galaxy that spawned them, which might be best viewed at visible light wavelengths. This, too, is a way that light of different wavelengths can be combined for illustrative purposes, even though it’s in a way that our human eyes would never be able to perceive on their own.

The powerful radio galaxy Hercules A, shown above, is a stunning example of how central activity from the galaxy’s active black hole influences not only the host galaxy, but a large region of space extending far outside the galaxy itself, as is evident from the extent of the highlighted radio lobes.
Credit: NASA, ESA, S. Baum and C. O’Dea (RIT), R. Perley and W. Cotton (NRAO/AUI/NSF), and the Hubble Heritage Team (STScI/AURA)

And finally, sometimes it’s incredibly informative to show animations that transition between different views, such as the animation below of the Phantom Galaxy, Messier 74, which transitions from visible light (Hubble) views to near-infrared (JWST NIRCam) views to mid-infrared (JWST MIRI) views, where each individual view has been uniquely colorized to show — within its relevant wavelength range — shorter-wavelength features in bluer colors and longer-wavelength features in redder colors.

This three-panel animation shows three different views of the center of the Phantom Galaxy, M74 (NGC 628). The familiar color image is the Hubble (optical) view, the second panel showcases near-infrared views from both Hubble and Webb, while the mid-infrared panel, containing data from JWST alone, shows the warm dust that will eventually form new stars.
Credit: ESA/Webb, NASA & CSA, J. Lee and the PHANGS-JWST Team; ESA/Hubble & NASA, R. Chandar; Acknowledgement: J. Schmidt; Animation: E. Siegel

It’s also worth noting that many regions that are opaque in one range of wavelengths, meaning that light of those wavelengths is blocked and cannot pass through that region of space, are transparent to light at other wavelengths. The Pillars of Creation, famously, are a region rich in neutral matter, which includes light-blocking dust. That dust is extremely efficient at blocking short wavelengths of light, including visible light.

In 1995, Hubble observed those pillars largely in visible light, and saw those three ghostly pillars with only a few hints of stars inside and nearby.

Then, in 2014, Hubble re-observed those pillars with a new set of cameras, which were capable of seeing a little bit farther into the infrared portion of the spectrum (and with a wider field-of-view), enabling more detailed features and many additional stars and protostars to be revealed.

Finally, in 2022, JWST observed those pillars as well, seeing much farther into the infrared and without any visible light data included in its views at all. The gas and dust of the pillars themselves are therefore revealed in much greater detail, along with an enormous number of much more easily visible stars.

Over the timespan of 27 years, our view of the Pillars of Creation has expanded not only in size and resolution, but also in terms of wavelength coverage. The longer wavelengths of light, as revealed in unprecedented resolution by JWST, allow us to see features that could never be exposed by an optical telescope, even one in space, on its own. We can also tell, although the effect is subtle, that the Pillars are slowly evaporating, and that after ~100,000 years, they will be completely gone.
Credits: NASA, ESA, CSA, STScI; the Hubble Heritage Team; J. Hester and P. Scowen; animation by E. Siegel

The overall point is that there’s no one correct, universal “right way” to colorize the Universe. It depends on:

  • what features are actually present in the astronomical object you’re targeting,
  • what features are revealed and detectable by the observations you’re considering,
  • what features you want to highlight for the person who’ll be viewing the image,
  • and how closely you’re wedded to the idea of shorter wavelengths appearing bluer and longer wavelengths appearing redder,

as different color schemes can lead to vastly different views of even the exact same data.

This almost-perfectly-aligned image composite shows the first JWST deep field’s view of the core of cluster SMACS 0723 and contrasts it with the older Hubble view. The JWST image of galaxy cluster SMACS 0723 is the first full-color, multiwavelength science image taken by the JWST. It was, for a time, the deepest image ever taken of the ultra-distant Universe, with 87 ultra-distant galaxy candidates identified within it. They await spectroscopic follow-up and confirmation to determine how distant they truly are.
Credit: NASA, ESA, CSA, and STScI; NASA/ESA/Hubble (STScI); composite by E. Siegel

Sometimes, you want element-specific signatures, such as when you map out the presence of neutral hydrogen in a galaxy (and, typically, color it pink) or of doubly ionized oxygen in a superheated nebula (and, typically, color it green). Other times, you’ll want color-coded signatures that reflect the various energies of the features you’re imaging, with violet and blue colors corresponding to the highest-energy, shortest-wavelength features and with orange and red colors corresponding to the lowest-energy, longest-wavelength ones. And at still other times, you’ll want to fuse together observations across many different wavelengths, choosing a color scheme to maximize the contrast between different features.

In cartography, there’s an important saying: the map is not the territory. The saying is designed to remind us that our representation of the world does not necessarily match up, one-to-one and feature-to-feature, with how the world actually is. When it comes to how we color the Universe, we have to remember that each “picture” we see is not a true depiction of how that object is, but is only a representation of that object based on the data that’s been acquired and the colors we’ve assigned to its various features. The important thing is to be clear and consistent about the color scheme we’re using, and to remember the goal: to communicate information about the object in question in a way our limited senses can digest, not to remain limited to the boundaries of what our senses can typically perceive!

Send in your Ask Ethan questions to startswithabang at gmail dot com!
