A new study calls the technique "location spoofing."
Research indicates that "deepfake geography," or realistic but fake images of real places, could become a growing problem.
In one image, a fire in Central Park appears as a smoke plume and a line of flames. In another, colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.
Both images exemplify what the new study calls "location spoofing." The photos—created by different people, for different purposes—are fake but look like genuine images of real places.
Using satellite photos of three cities and drawing on methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, to warn of the dangers of falsified geospatial data, and to call for a system of geographic fact-checking.
"This isn't just Photoshopping things. It's making data look uncannily realistic," says Bo Zhao, assistant professor of geography at the University of Washington and lead author of the study in the journal Cartography and Geographic Information Science. "The techniques are already there. We're just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it."
Putting lies on the map
As Zhao and his coauthors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That's due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers themselves. The term "paper towns" refers to fake cities, mountains, rivers, or other features discreetly placed on a map to prevent copyright infringement.
For example, on the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of "Beatosu" and "Goblu," a play on "Beat OSU" and "Go Blue," because the then-head of the department wanted to give a shout-out to his alma mater while protecting the copyright of the map.
But with the prevalence of geographic information systems, Google Earth, and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries greater risks. In 2019, the director of the National Geospatial-Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the US Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat.
Tacoma, Seattle, Beijing
To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used to manipulate other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images of an urban area, then generates a deepfake image by transferring those learned characteristics onto a different base map, much as popular image filters can map the features of a human face onto a cat.
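The article doesn't name the exact architecture, but what it describes matches image-to-image translation with a generative adversarial network (GAN): a generator learns to render a base-map tile in the visual style of real satellite imagery, while a discriminator learns to tell the generator's output from the real thing. Below is a minimal, hypothetical PyTorch sketch of that pattern; the network sizes, random stand-in data, and training setup are illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch of map-to-satellite style transfer with a GAN.
# Network sizes, data, and training details are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

# Generator: 3-channel base-map tile -> 3-channel satellite-style tile.
generator = nn.Sequential(
    conv_block(3, 32), conv_block(32, 64), conv_block(64, 32),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
)

# Discriminator: tile -> logit for "this is a real satellite image".
discriminator = nn.Sequential(
    conv_block(3, 32), conv_block(32, 64),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Random stand-ins; real training data would be map/satellite tile pairs.
base_maps = torch.rand(8, 3, 64, 64)   # e.g. rendered map tiles of Tacoma
real_sat = torch.rand(8, 3, 64, 64)    # e.g. satellite tiles of Seattle

for step in range(100):
    # 1) Teach the discriminator to separate real satellite tiles from fakes.
    fake_sat = generator(base_maps).detach()
    d_loss = (bce(discriminator(real_sat), torch.ones(8, 1)) +
              bce(discriminator(fake_sat), torch.zeros(8, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Teach the generator to fool the discriminator.
    g_loss = bce(discriminator(generator(base_maps)), torch.ones(8, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```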
Next, the researchers combined maps and satellite images from three cities—Tacoma, Seattle, and Beijing—to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their "base map" city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma.
In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deepfake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing. Low-rise buildings and greenery mark the "Seattle-ized" version of Tacoma on the bottom left, while Beijing's taller buildings, which the AI matched to the building structures in the Tacoma image, cast shadows, hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.
These are maps and satellite images, real and fake, of one Tacoma neighborhood. The top left shows an image from mapping software, and the top right is an actual satellite image of the neighborhood. The bottom two panels are simulated satellite images of the neighborhood. (Credit: Zhao et al., 2021, Cartography and Geographic Information Science)
The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a "fake," researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.
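To make those cues concrete, here is a small, hypothetical Python sketch of two of the signals mentioned, color histograms and the frequency domain, using numpy and Pillow. The file names and the threshold are placeholders for illustration; this is not the detector the study built.

```python
# Hypothetical sketch of two cues named in the passage: color histograms
# and the frequency domain. File names and the threshold are placeholders;
# this is not the study's actual detection method.
import numpy as np
from PIL import Image

def color_histogram(path, bins=32):
    """Normalized per-channel color histogram of an RGB image."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def frequency_profile(path):
    """Radially averaged magnitude spectrum of the grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    y, x = np.indices(spectrum.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)
    totals = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    return totals / np.maximum(counts, 1)

# Compare a trusted tile against a suspect tile of the same area.
hist_gap = 0.5 * np.abs(color_histogram("real.png") -
                        color_histogram("suspect.png")).sum()  # 0..1

ref, sus = frequency_profile("real.png"), frequency_profile("suspect.png")
n = min(len(ref), len(sus))
freq_gap = np.abs(np.log1p(ref[:n]) - np.log1p(sus[:n])).mean()

print(f"histogram distance: {hist_gap:.3f}, frequency gap: {freq_gap:.3f}")
if hist_gap > 0.2:  # placeholder threshold, not a calibrated cutoff
    print("Color distribution differs noticeably; worth a closer look.")
```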
Could 'location spoofing' prove useful?
Some simulated satellite imagery can serve a purpose, Zhao says, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be no images of a location for a certain period in the past, or none to represent a forecast future, so creating new images based on existing ones, and clearly identifying them as simulations, could fill in the gaps and help provide perspective.
The study's goal was not to show that it's possible to falsify geospatial data, Zhao says. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop data literacy tools, similar to today's fact-checking services, for public benefit.
"As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data," Zhao says. "We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary," he says.
Coauthors of the study are from the University of Washington, Oregon State University, and Binghamton University.
A new children's program may help displaced Syrian children find stability and belonging in their new communities.
- Sesame Workshop and the International Rescue Committee have partnered to launch an Arabic version of Sesame Street named "Ahlan Simsim."
- The show will provide early learning education to refugee children who are robbed of their education when displaced from their communities.
- The show will include Arabic-speaking characters who are developed to "speak" to refugee children, providing psychologically beneficial media representation.
A new Sesame Street show in Arabic is being launched to help Syrian refugee children, but the benefits may reach beyond helping displaced kids learn their ABCs.
Earlier this month, Sesame Workshop (the educational nonprofit organization behind Sesame Street) and the International Rescue Committee (IRC) announced that they partnered up to launch a new program called "Ahlan Simsim," or "Welcome Sesame" in Arabic. The show is aimed at providing early learning education to refugee children who are robbed of their education when displaced from their communities.
According to the IRC, nearly half of the 12 million people who have been displaced due to the ongoing civil war in Syria are children. Yet, according to IRC president and CEO David Miliband in an interview on 60 Minutes, less than 2 percent of all funding for humanitarian aid goes to education.
Ahlan Simsim will be locally produced and aimed at children ages 3–8, according to Sesame Workshop.
The show will include Arabic-speaking characters developed to "speak" to refugee children. For example, one of the main characters on the Arabic Sesame Street is Jad, a young muppet who is new to the neighborhood and is implied to be a refugee. In a 60 Minutes clip from one of the episodes, he says, "My toy is not with me. I left it behind in my old home when I came here." Additional characters include a muppet girl named Basma who befriends Jad, a baby goat named Ma'zooza that follows them around, and, of course, appearances by beloved classic Sesame Street characters such as Elmo, Cookie Monster, and Grover.
The tragedy of displacement, which lasts on average 20 years, is particularly traumatizing for young children, who can't fully grasp the situation they are caught in the middle of. It leaves deep cracks in the foundation of refugee children's lives; many of these children have witnessed the violent deaths of loved ones. The first season of Ahlan Simsim will aim to help children develop social-emotional coping tools such as belly breathing and counting to five, according to executive producer Scott Cameron.
But the impact of Ahlan Simsim will likely go beyond teaching children what Cameron calls "emotional ABCs." Besides the trauma of an early childhood scarred by war violence, displaced children may experience social trauma in the areas where they seek asylum. Daily stressors such as acculturation, economic insecurity, community violence, and stigma against refugees can be detrimental to child development and adjustment in a new community. Because children learn to understand themselves through a social lens, it's important for them to see images, characters, and role models in the media that represent their experiences and identity group.
Lack of representation can compound the isolation and identity disruption that displaced children experience. Not only have they been physically torn from their homes and communities, but they find themselves in situations where they perceive themselves as "different." By representing disadvantaged, displaced children, Ahlan Simsim may help refugee children feel that they belong in their communities, and it may inspire feelings of confidence and security. Additionally, by portraying refugees in a positive light as part of the community, Ahlan Simsim helps counter harmful stereotypes about refugee populations.
Benefits of Early Multicultural Exposure
Early childhood media representation isn't just for the benefit of minority groups. By exposing viewers to less familiar cultures, programs like Ahlan Simsim help children acquire stronger social skills, seek out diverse experiences later in life, and become more receptive to people who speak languages different from those they hear at home.
Children who are exposed to culturally diverse media may learn to become more comfortable with differences in race, religion, language, and lifestyle in real life. Research has even shown that exposure to diversity early in life can impact how successful children will become in adulthood. For instance, learning to work with others across categories of race or socioeconomic status sets children up for developing interpersonal skills that will give them an advantage later in life.
CNN reports that the first season of Ahlan Simsim is set to air locally across the Middle East in February 2020 and will be digitally available.
Can you tell this video is fake?
- A new deepfake video shows Mark Zuckerberg saying words he never spoke.
- The video was likely created in an attempt to challenge Facebook's policies on fake content.
- Facebook was recently criticized for not removing a video of House Speaker Nancy Pelosi that was doctored to make it seem like she was drunk.
A new deepfake video shows Facebook founder Mark Zuckerberg saying words he never spoke.
The video – posted to Instagram and created by artists Bill Posters and Daniel Howe with advertising company Canny – was based on a real video of Zuckerberg from 2017. To create the deepfake, Canny trained a proprietary algorithm on a 21-second clip from the 2017 video, as well as on a video of a voice actor reading a script. Visually, the result is convincing, even if the voice doesn't quite sound like Zuckerberg's.
"Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures," Zuckerberg's likeness says in the video, whose caption includes "#deepfake". "I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future."
(Spectre was an award-winning interactive art installation shown at the 2019 Sheffield Doc Fest in the United Kingdom.)
The video effectively tests Facebook's policy on removing misinformation from its platform. Facebook recently faced backlash for refusing to remove a video of House Speaker Nancy Pelosi that was slowed down to make it seem like she was drunk. Facebook said it down-ranked the video to make it appear less frequently on newsfeeds and flagged it as fake.
Instagram, which is owned by Facebook, said it'd treat the Zuckerberg deepfake like the Pelosi video. "If third-party fact checkers mark it as false, we will filter it from Instagram's recommendation surfaces like Explore and hashtag pages," Stephanie Otway, a spokeswoman for the company, told the New York Times.
The team behind the Zuckerberg deepfake also created one of Kim Kardashian.
Deepfake technology has existed for years, but recently it's become sophisticated enough to fool some unsuspecting viewers. In May, Samsung researchers published a video describing a new AI that can take a single image of a person's face and animate it convincingly. If you're concerned about people weaponizing this technology, you're not alone: The Defense Department is already developing tools that aim to automatically detect deepfakes. But these tools might never be totally effective.
"Theoretically, if you gave a [generative adversarial network, which builds deepfake technology] all the techniques we know to detect it, it could pass all of those techniques," David Gunning, the DARPA program manager in charge of the Defense Department project, told MIT Technology Review. "We don't know if there's a limit. It's unclear."
Even if we could detect deepfakes, some viewers might not be eager to differentiate between real and fake – especially in politics. For example, President Donald Trump recently tweeted an altered video of House Speaker Nancy Pelosi that was slowed down to make it seem as if she were drunk. The video remains on the president's Twitter account, despite reports confirming the video was altered, and it currently has more than 6 million views. It's unclear how many people know – or are willing to acknowledge – that it's fake content.