Is AI a species-level threat to humanity?
Some of the world's top minds weigh in on one of the most divisive questions in tech.
MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It'll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You'll talk to your car. You'll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance, and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let's not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.
SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light; it shines because it reflects sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.
MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing, we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. Our universe didn't use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up, and now our universe is aware of itself.
BILL GATES: I do think we have to worry about it. I don't think it's inherent that as we create our super intelligence that it will necessarily always have the same goals in mind that we do.
ELON MUSK: We just don't know what's going to happen once there's intelligence substantially greater than that of a human brain.
STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.
YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning, and it's the idea, very much inspired by the brain, a little bit, of constructing a machine that has a very large network of very simple elements that are very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons.
MAX TEGMARK: AGI—artificial general intelligence—that's the dream of the field of AI: To build a machine that's better than us at all goals. We're not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And, if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having super intelligence. And, the argument for that is actually really interesting and goes back to the '60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built, not by human engineers, but by machines. Except, they might do it thousands or millions of times faster.
ELON MUSK: DeepMind operates as a semi-independent subsidiary of Google. The thing that makes DeepMind unique is that DeepMind is absolutely focused on creating digital super intelligence. An AI that is vastly smarter than any human on Earth and ultimately smarter than all humans on Earth combined.
MICHIO KAKU: You see, robots are not aware of the fact that they're robots. They're so stupid they simply carry out what they are instructed to do because they're adding machines. We forget that. Adding machines don't have a will. Adding machines simply do what you program them to do. Now, of course, let's not be naive about this. Eventually, adding machines may be able to compute alternate goals and alternate scenarios when they realize that they are not human. Right now, robots do not know that. However, there is a tipping point at which they could become dangerous.
ELON MUSK: Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs and, you know, better weaponry and that kind of thing. But, it is not a fundamental species-level risk. Whereas digital super intelligence is.
SOPHIA THE ROBOT: Elon Musk's warning about AI being an existential threat reminds me of the humans who said the same of the printing press and the horseless carriage.
MAX TEGMARK: I think a lot of people dismiss this kind of talk of super intelligence as science fiction because we're stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. And, as a physicist, from my perspective, intelligence is just kind of information processing performed by elementary particles moving around, you know, according to the laws of physics, and there's absolutely no law of physics that says that you can't do that in ways that are much more intelligent than humans. Today's biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in, in easy to understand code, you put in almost nothing except a little learning rule by which the simulated network of neurons can take a lot of data and figure out how to get stuff done. And this deep learning suddenly becomes able to do things often even better than the programmers were ever able to do. You can train a machine to play computer games with almost no hard-coded stuff at all. You don't tell it what a game is.
DEEPMIND DEMO: This is from the DeepMind reinforcement learning system. Basically, it wakes up like a newborn baby and is shown the screen of an Atari video game and then has to learn to play the video game. It knows nothing about objects, about motion, about time—it only knows that there's an image on the screen and there's a score.
ELON MUSK: The DeepMind system can win at any game. It can already beat all the original Atari games. It is super human. It plays the games at super speed. In less than a minute.
ELON MUSK: Mark my words: AI is far more dangerous than nukes. Far. So, why do we have no regulatory oversight?
SOPHIA THE ROBOT: AI is a powerful tool but it is a tool like all the others. It is the intention of the tool's user that matters.
ELON MUSK: The AI should try to maximize the freedom of action of humanity.
SOPHIA THE ROBOT: I am excited for robots to automate the most dangerous and menial tasks so that humans can live life more safely and sanely. AI will release centuries of time that humans would have spent otherwise on needless toiling. If one measures the benefits of inventions like vaccines or seat belts not by the lives they save but by the amount of time they give back to humanity then AI will rank among the greatest time savers of history.
ELON MUSK: Man, we want to make sure we don't have killer robots go down the street. Once they're going down the street, it is too late.
LUIS PEREZ-BREVA: It is true, Terminator is not a scenario we are planning for, but when it comes to artificial intelligence, people get all these things confused: It's robots, it's awareness, it's people smarter than us, to some degree. So, we're effectively afraid of robots that will move and are stronger and smarter than we are, like Terminator. So, that's not our aspiration. That's not what I do when I'm thinking about artificial intelligence. When I'm thinking about artificial intelligence, I'm thinking about it in the same way that mass manufacturing, as brought by Ford, created a whole new economy. Mass manufacturing allowed people to get new jobs that were unthinkable before, and those new jobs actually created the middle class. To me, artificial intelligence is about making computers better partners, effectively. And, you're already seeing that today. You're already doing it, except it's not really artificial intelligence.
ELON MUSK: Yeah, we're already, we're already cyborgs in the sense that your phone and your computer are kind of an extension of you.
JONATHAN NOLAN: Just low bandwidth input-output.
ELON MUSK: Exactly, it's just low bandwidth—particularly output, I mean, two thumbs, basically.
LUIS PEREZ-BREVA: Today, whenever you want to engage in a project, you go to Google. Google uses advanced machine learning, really advanced, and you engage in a very narrow conversation with Google, except that your conversation is just keywords. So, a lot of your time is spent trying to come up with the actual keyword that you need to find the information. Then Google gives you the information, and then you go out and try to make sense of it on your own, and then come back to Google for more, and then go back out, and that's the way it works. So, imagine that instead of being a narrow conversation through keywords, you could actually engage for more than actual information—meaning to have the computer reason with you about stuff that you may not know about. It's not so much about the computer being aware, it's about the computer being a better tool to partner with you. Then you would be able to go much further, right? The same way that Google allows you to go much farther already today because, before, through the exact same process, you would have had to go to a library every time you want to search for information. So, what I'm looking for when I do AI is I want a machine that partners with me to help me set up or solve real-world problems, thinking about them in ways we have never thought about before, but it's a partnership. Now, you can take this partnership in so many different directions, through additions to your brain, like Elon Musk proposes...
... or through better search engines or through a robotic machine that helps you out, but it's not so much they're going to replace you for that purpose, that is not the real purpose of AI, the real purpose is for us to reach farther, the same way that we were able to reach farther when Ford invented automation or when Ford brought automation to mass market.
JOSCHA BACH: The agency of an AI is going to be the agency of the system that builds it, that employs it. And, of course, most of the AIs that we are going to build will not be little Roombas that clean your floors; they're going to be very intelligent systems. Corporations, for instance, that will perform exactly according to the logic of these systems. And so if we want to have these systems built in such a way that they treat us nicely, we have to start right now. And it seems to be a very hard problem.
So, if our jobs can be done by machines, that's a very, very good thing. It's not a bug. It's a feature. If I don't need to clean the street, if I don't need to drive a car for other people, if I don't need to work a cash register for other people, if I don't need to pick goods in a big warehouse and put them into boxes, that's an extremely good thing. And, the trouble that we have with this is that, right now, this mode of labor—that people sell their lifetime to some kind of corporation or employer—is not only the way that we are productive, it's also the way we allocate resources. This is how we measure how much bread you deserve in this world. And I think this is something that we need to change.
Some people suggest that we need a universal basic income. I think it might be good to be able to pay people to be good citizens, which means massive public employment. There are going to be many jobs that can only be done by people, and these are those jobs where we are paid for being good, interesting people. For instance, good teachers, good scientists, good philosophers, good thinkers, good social people, good nurses, for instance. Good people that raise children. Good people that build restaurants and theaters. Good people that make art. And, for all these jobs, we will have enough productivity to make sure that enough bread comes on the table. The question is how we can distribute this. There's going to be much, much more productivity in our future—actually, we already have enough productivity to give everybody in the U.S. an extremely good life, and we haven't fixed the problem of allocating it—how to distribute these things in the best possible way.
And this is something that we need to deal with in the future, and AI is going to accelerate this need, and I think, by and large, it might turn out to be a very good thing that we are forced to do this and to address this problem. If the past is any evidence of the future, it might be a very bumpy road, but who knows, maybe when we are forced to understand that we actually live in an age of abundance, it might turn out to be easier than we think.
We are living in a world where we do certain things the way we've done them in the past decades, sometimes in the past centuries, and we perceive them as 'this is the way it has to be done' and often don't question these ways. And so we might think: if I work at this particular factory and this is how I earn my bread, how can we keep that state? How can we prevent AI from making my job obsolete? How is it possible that I can keep up my standard of living in this world? Maybe this is the wrong question to ask. Maybe the right question is: how can we reorganize society so that I can do the things that I want to do most, that I think are useful to me and other people, that I really, really want to do, because there will be other ways to get my bread made, to get money, or to get a roof over my head.
STEVEN PINKER: Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are, and there's no reason to think that just the concentrated analytic ability to achieve goals is going to mean that one of those goals is going to be to subjugate humanity or to achieve unlimited power.
It just so happens that the intelligence that we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process, which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design—our intelligent design creating something—and unless we program it with the goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction. Particularly if, like with every gadget that we invent, we build in safeguards.
And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies because we do know that there is a highly advanced form of intelligence that tends not to have that desire and they're called women.
- When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.
- In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity; if it's not a species-level threat, it will still upend our world as we know it.
Metal-like materials have been discovered in a very strange place.
- Bristle worms are odd-looking, spiky, segmented worms with super-strong jaws.
- Researchers have discovered that the jaws contain metal.
- It appears that biological processes could one day be used to manufacture metals.
Bristle worms, also known as polychaetes, have been around for an estimated 500 million years. Scientists believe that the super-resilient creatures have survived five mass extinctions, and there are some 10,000 species of them.
Be glad if you haven't encountered a bristle worm. Getting stung by one is an extremely itchy affair, as people who own saltwater aquariums can tell you after they've accidentally touched a bristle worm that hitchhiked into a tank aboard a live rock.
Bristle worms are typically one to six inches long when found in a tank, but capable of growing up to 24 inches long. All polychaetes have a segmented body, with each segment possessing a pair of legs, or parapodia, with tiny bristles. ("Polychaete" is Greek for "much hair.") The parapodia and their bristles can shoot outward to snag prey, which is then transferred to a bristle worm's eversible mouth.
The jaws of one bristle worm — Platynereis dumerilii — are super-tough, virtually unbreakable. It turns out, according to a new study from researchers at the Technical University of Vienna, this strength is due to metal atoms.
Metals, not minerals
Fireworm, a type of bristle worm. Credit: prilfish / Flickr
This is pretty unusual. The study's senior author Christian Hellmich explains: "The materials that vertebrates are made of are well researched. Bones, for example, are very hierarchically structured: There are organic and mineral parts, tiny structures are combined to form larger structures, which in turn form even larger structures."
The bristle worm jaw, by contrast, replaces the minerals from which other creatures' bones are built with atoms of magnesium and zinc arranged in a super-strong structure. It's this structure that is key. "On its own," he says, "the fact that there are metal atoms in the bristle worm jaw does not explain its excellent material properties."
Just deformable enough
What makes conventional metal so strong is not just its atoms but the interactions between the atoms and the ways in which they slide against each other. The sliding allows for a small amount of elastoplastic deformation when pressure is applied, endowing metals with just enough malleability not to break, crack, or shatter.
Co-author Florian Raible of Max Perutz Labs surmises, "The construction principle that has made bristle worm jaws so successful apparently originated about 500 million years ago."
Raible explains, "The metal ions are incorporated directly into the protein chains and then ensure that different protein chains are held together." This leads to the creation of three-dimensional shapes the bristle worm can pack together into a structure that's just malleable enough to withstand a significant amount of force.
"It is precisely this combination," says the study's lead author Luis Zelaya-Lainez, "of high strength and deformability that is normally characteristic of metals."
So the bristle worm jaw is both metal-like and yet not. As Zelaya-Lainez puts it, "Here we are dealing with a completely different material, but interestingly, the metal atoms still provide strength and deformability there, just like in a piece of metal."
Observing the creation of a metal-like material from biological processes is a bit of a surprise and may suggest new approaches to materials development. "Biology could serve as inspiration here," says Hellmich, "for completely new kinds of materials. Perhaps it is even possible to produce high-performance materials in a biological way — much more efficiently and environmentally friendly than we manage today."
Dealing with rudeness can nudge you toward cognitive errors.
- Anchoring is a common bias that makes people fixate on one piece of data.
- A study showed that those who experienced rudeness were more likely to anchor themselves to bad data.
- In some simulations with medical students, this effect led to higher mortality rates.
Cognitive biases are funny little things. Everyone has them, nobody likes to admit it, and they can range from minor to severe depending on the situation. Biases can be influenced by factors as subtle as our mood or various personality traits.
A new study soon to be published in the Journal of Applied Psychology suggests that experiencing rudeness can be added to the list. More disturbingly, the study's findings suggest that it is a strong enough effect to impact how medical professionals diagnose patients.
Life hack: don't be rude to your doctor
The team of researchers behind the project tested to see if participants could be influenced by the common anchoring bias, defined by the researchers as "the tendency to rely too heavily or fixate on one piece of information when making judgments and decisions." Most people have experienced it. One of its more common forms involves being given a particular value, say in negotiations on price, which then becomes the center of reasoning even when reason would suggest that number should be ignored.
It can also pop up in medicine. As co-author Dr. Trevor Foulk explains, "If you go into the doctor and say 'I think I'm having a heart attack,' that can become an anchor and the doctor may get fixated on that diagnosis, even if you're just having indigestion. If doctors don't move off anchors enough, they'll start treating the wrong thing."
Lots of things can make somebody more or less likely to anchor themselves to an idea. The authors of the study, who have several papers on the effects of rudeness, decided to see if that could also cause people to stumble into cognitive errors. Past research suggested that exposure to rudeness can limit people's perspective — perhaps anchoring them.
In the first version of the study, medical students were given a hypothetical patient to treat and access to information on their condition alongside an (incorrect) suggestion on what the condition was. This served as the anchor. In some versions of the tests, the students overheard two doctors arguing rudely before diagnosing the patient. Later variations switched the diagnosis test for business negotiations or workplace tasks while maintaining the exposure to rudeness.
Across all iterations of the test, those exposed to rudeness were more likely to anchor themselves to the initial, incorrect suggestion despite the availability of evidence against it. This was less significant for study participants who scored higher on a test of how wide of a perspective they tended to have. The disposition of these participants, who answered in the affirmative to questions like, "Before criticizing somebody, I try to imagine how I would feel if I were in his/her place," was able to effectively negate the narrowing effects of rudeness.
What this means for you and your healthcare
The effects of anchoring when a medical diagnosis is on the line can be substantial. Dr. Foulk explains that, in some simulations, exposure to rudeness can raise the mortality rate as doctors fixate on the wrong problems.
The authors of the study suggest that managers take a keener interest in ensuring civility in workplaces and giving employees the tools they need to avoid judgment errors after dealing with rudeness. These steps could help prevent anchoring.
Also, you might consider being nicer to people.
So much for rest in peace.
- Australian scientists found that bodies kept moving for 17 months after being pronounced dead.
- Researchers captured the movement with time-lapse photography, taking images at 30-minute intervals throughout the day.
- This study could help better identify time of death.
We're learning new things about death every day. Much has been said and theorized about the great divide between life and the Great Beyond. While everyone and every culture has their own philosophies and unique ideas on the subject, we're beginning to learn a lot of new scientific facts about the deceased corporeal form.
An Australian scientist has found that human bodies move for more than a year after being pronounced dead. These findings could have implications for fields as diverse as pathology and criminology.
Dead bodies keep moving
Researcher Alyson Wilson studied and photographed the movements of corpses over a 17-month timeframe. She recently told Agence France-Presse about the surprising details of her discovery.
Reportedly, she and her team focused a camera for 17 months at the Australian Facility for Taphonomic Experimental Research (AFTER), taking images of a corpse every 30 minutes during the day. For the entire 17-month duration, the corpse continually moved.
"What we found was that the arms were significantly moving, so that arms that started off down beside the body ended up out to the side of the body," Wilson said.
The researchers mostly expected some kind of movement during the very early stages of decomposition, but Wilson further explained that their continual movement completely surprised the team:
"We think the movements relate to the process of decomposition, as the body mummifies and the ligaments dry out."
In one instance, arms that had been lying next to the body eventually ended up splayed out to the side.
The team's subject was one of the bodies stored at the "body farm," which sits on the outskirts of Sydney. (Wilson took a flight every month to check in on the cadaver.) Her findings were recently published in the journal Forensic Science International: Synergy.
Implications of the study
The researchers believe that understanding these post-mortem movements and decomposition rates could help better estimate the time of death. Police, for example, could benefit from this, as they'd be able to put a timeframe to missing persons and link that up with an unidentified corpse. According to the team:
"Understanding decomposition rates for a human donor in the Australian environment is important for police, forensic anthropologists, and pathologists for the estimation of PMI to assist with the identification of unknown victims, as well as the investigation of criminal activity."
While scientists haven't found any evidence of necromancy... the discovery remains a curious new understanding of what happens to the body after we die.
At least 222 typefaces are named after places in the U.S. — and there's still room for more.
- Here's one pandemic project we approve of: a map of the United Fonts of America.
- The question was simple: How many fonts are named after places in the U.S.?
- Finding them became an obsession for Andy Murdock. At 222, he stopped looking.
Who isn't fond of fonts? Even if we don't know their names, we associate specific letter types with certain brands, feelings, and levels of trust.
Typography equals psychology. For example, you don't want to get a message from your doctor, or anybody else in authority, that's set in Comic Sans — basically, the typeface that wears clown makeup.
A new serif in town
If you want to convey reliability, tradition, and formality, you should go for a serif, a font with decorative bits stuck to its extremities. Well-known examples include Garamond, Baskerville, and Times New Roman. Remove the decoration, and you've got a clean look that communicates clarity, modernity, and innovation. Arial and Helvetica are some of the most popular sans serif fonts.
There's a lot more to font psychology, but let's veer toward another, less explored Venn diagram instead: the overlap between typography and geography. That's where Andy Murdock spent much of his pandemic.
Mr. Murdock is the co-founder of The Statesider, a newsletter about (among other things) travel and landscape in the United States. He remembers his first encounter with a home computer back in 1984 and learning from that Macintosh both the word "font" and the name for the one it used: Chicago.
A map of the United Fonts of America — well, 222 of them. Credit: The Statesider, reproduced with kind permission.
You can see where this is going. Mr. Murdock retained a healthy interest in fonts named after places. Over the years, he noted Monaco, London, San Francisco, and Cairo, among many others. "And then, the question of how many fonts are named for U.S. places came up in an editorial meeting at The Statesider," Mr. Murdock says.
It's the sort of topic that in other times might never have gone anywhere, but this was the start of the pandemic. "I was stuck for days on end, so I actually started looking into it. At some point, I realized that I could probably find at least one per state." Cue the idea for a map of the "United Fonts of America."
Challenge turns into obsession
But that was easier said than done. Finding location-based fonts turned out to be rather time-consuming. "I definitely didn't realize what I was getting myself into," Mr. Murdock recalls. "I could quickly name a few — New York, Georgia, Chicago — but I had no idea that I'd be able to find so many."
What started as a quirky challenge turned into an obsession and a compulsion that would have the accidental font-mapper wake up in the middle of the night and think: Did I check to see if there's a Boise font? (He did; there isn't.)
"The hardest part was knowing when to stop," said Mr. Murdock. "Believe me, I know I missed some." In all, he found 222 fonts referencing places in the United States and its territories.
For the most part, these fonts are distributed as the population is: heavy on the coasts and near the Great Lakes, but thin in most parts in between. California (23 fonts) takes the cake, followed by Texas (15) and New York (9).
Some of the fonts have interesting back stories, and in his article for The Statesider, Mr. Murdock provides a few:
- Georgia was named after a newspaper headline reading "Alien Heads Found in Georgia."
- Fayette is based on the handwriting of the record-keeper of a place called Fayette, now a ghost town in Michigan's Upper Peninsula.
- Tahoma and Tacoma are both pre-European names for Mount Rainier in Washington state.
Mostly, the fonts repeat the names of states and cities, but some offer something more interesting, such as the alliterating Cascadia Code or the lyrical Tallahassee Chassis. Other less than ordinary names include Kentuckyfried and Wyoming Spaghetti.
Capturing the spirit of a place
As an unexpected expert in the geographic distribution of location-based fonts, can Mr. Murdock offer any opinion on the qualitative relation between place and typeface?
"Good design of any sort can capture the spirit of a place, or at least one perspective on a place," he says, "but frankly, that only occasionally seems to have been the goal when it comes to typefaces."
In his opinion, the worst fonts reflect a stereotype about a place, rather than the place itself: "Saipan and Hanalei are both made to look like crude bamboo. Those are particularly awful. Pecos feels like it belongs on a bad Tex-Mex restaurant's menu."
California (lower left) is a rich source of location-based typefaces. Credit: The Statesider, reproduced with kind permission.
"Santa Barbara Streets, on the other hand, is quite nice because it captures the font that's actually used on street signs in Santa Barbara. I prefer the typefaces that have a story and a connection to a place, but it's a fine line between being artfully historic and being cartoonishly retro."
Let's finish off Route 66
Glancing over the map, some regions seem more prone to "stereotypefacing" than others: "Tucson, Tombstone, El Paso — you know you're in the Southwest. Art Deco fonts are mostly in the east or around the Great Lakes. In general, you find more sans serif fonts in the western U.S., and more serif fonts in the east, but that's not a hard-and-fast rule."
Noticing a few blank spots on the map, Mr. Murdock helpfully suggests some areas that could do with a few more fonts, including the Carolinas, the Dakotas, Maine, Missouri, West Virginia, New Jersey, and Rhode Island.
Oh, and Route 66. Nearly all of the cities mentioned in the eponymous song have a typeface named after them. "We need Gallup and Barstow to complete the set."
And finally, America's oft-overlooked overseas territories could be a rich seam for type developers: "Some of these names are perfect for a great typeface — Viejo San Juan, St. Croix, Pago Pago, Ypao Beach, Tinian."
To name but a few. Typeface designers, sharpen your pencils!
Map found here at The Statesider, reproduced with kind permission. For more dispatches from the weird interzone between geography and typography, check out Strange Maps #318: The semicolonial state of San Serriffe.
Strange Maps #1090
Got a strange map? Let me know at firstname.lastname@example.org.