Why A.I. is a big fat lie
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
A.I. is a big fat lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The much better, precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.
On the other hand, AI does provide some great material for nerdy jokes. So put on your skepticism hat, it's time for an AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree – yeehaw!
3 main points
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will rise up of their own volition and eradicate humanity holds no merit.
Neural networks for the win
In the movie "Terminator 2: Judgment Day," the titular robot says, "My CPU is a neural net processor, a learning computer." The neural network of which that famous robot speaks is actually a real kind of machine learning method. A neural network is a way to depict a complex mathematical formula, organized into layers. This formula can be trained to do things like recognize images for self-driving cars. For example, watch several seconds of a neural network performing object recognition.
What you see it doing there is truly amazing. The network's identifying all those objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
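To make "a complex mathematical formula, organized into layers" concrete, here's a toy sketch in Python. It's a made-up miniature network with random, untrained weights – nothing like the industrial-scale networks behind self-driving cars – but it shows what "layers" means: each layer is just a grid of numbers feeding into the next.

```python
import numpy as np

def relu(x):
    # A simple nonlinearity applied between layers.
    return np.maximum(0, x)

# Random, untrained weights for a tiny three-layer network
# (purely illustrative -- real networks learn these from data).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 units
W2 = rng.normal(size=(8, 8))   # layer 2: 8 units -> 8 units
W3 = rng.normal(size=(8, 1))   # layer 3: 8 units -> 1 output score

def forward(x):
    # The whole network is one nested formula: layer after layer.
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h2 @ W3

x = np.array([0.5, -1.2, 3.0, 0.1])  # a made-up 4-number input
print(forward(x).shape)              # (1,): a single output score
```

Deep learning is quite literally this same picture with many more layers stacked up.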
The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible. With deep learning, the network is quite literally deeper – more of those layers. However, even way way back in 1997, the first time I taught the machine learning course, neural networks were already steering self-driving cars, in limited contexts, and we even had our students apply them for face recognition as a homework assignment.
The architecture of a simple neural network with four layers
But the more recent improvements are uncanny, boosting neural networks' power for many industrial applications. So, we've even launched a new conference, Deep Learning World, which covers the commercial deployment of deep learning. It runs alongside our long-standing machine learning conference series, Predictive Analytics World.
Supervised machine learning requires labeled data
So, with machines just getting better and better at humanlike tasks, doesn't that mean they're getting smarter and smarter, moving towards human intelligence?
No. It can get really, really good at certain tasks, but only when there's the right data from which to learn. For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled. It needed those examples to learn to recognize those kinds of objects. This is called supervised machine learning: when there is pre-labeled training data. The learning process is guided or "supervised" by the labeled examples. It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time. That's the learning process. And the only way it knows the neural network is improving or "learning" is by testing it on those labeled examples. Without labeled data, it couldn't recognize its own improvements so it wouldn't know to stick with each improvement along the way. Supervised machine learning is the most common form of machine learning.
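The training loop described above can be sketched in a few lines. What follows is a deliberately tiny toy – logistic regression on synthetic, pre-labeled data rather than a deep network – but the supervised shape of the process is the same: nudge the model one incremental improvement at a time, and use the labeled examples themselves to measure whether it's getting better.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))              # 200 examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # pre-assigned labels ("supervision")

w = np.zeros(2)  # model parameters, initially know-nothing

def predict(X, w):
    # Predicted probability that each example's label is 1.
    return 1 / (1 + np.exp(-(X @ w)))

def accuracy(X, y, w):
    # The only way we know the model is "learning": score it on the labels.
    return np.mean((predict(X, w) > 0.5) == y)

for step in range(100):
    # One incremental tweak, driven by the error on the labeled examples.
    grad = X.T @ (predict(X, w) - y) / len(y)
    w -= 0.5 * grad

print(accuracy(X, y, w))  # high on this easy, made-up data
```

Without the labels `y`, neither the tweak nor the accuracy check is even defined – which is exactly why supervised learning needs labeled data.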
Here's another example. In 2011, IBM's Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy. I'm a big fan. This was by far the most amazing thing I've seen a computer do – more impressive than anything I'd seen during six years of graduate school in natural language understanding research. Here's a 30-second clip of Watson answering three questions.
To be clear, the computer didn't actually hear the spoken questions but rather was fed each question as typed text. But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best "intelligent-like" thing I've ever seen from a computer.
But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with its own correct answer.
At the core, the trick was to turn every question into a yes/no prediction: "Will such-n-such turn out to be the answer to this question?" Yes or no. If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident "yes." For example, "Is 'Abraham Lincoln' the answer to 'Who was the first president?'" No. "Is 'George Washington'?" Yes! Now the machine has its answer and spits it out.
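Here's a toy sketch of that try-every-option loop. The `score` function below is a hypothetical stand-in – Watson's real confidence models were vastly more elaborate, trained on those labeled questions – but the structure is the point: ask "is this candidate the answer?" for each option and keep the most confident yes.

```python
# Tiny made-up "evidence" corpus the toy scorer consults.
EVIDENCE = [
    "george washington was the first president of the united states",
    "abraham lincoln was the sixteenth president",
]

def score(question, candidate):
    # Hypothetical stand-in for a learned yes/no confidence model:
    # how much evidence links this candidate to the question's words?
    q_words = set(question.lower().split())
    hits = 0
    for line in EVIDENCE:
        if candidate.lower() in line:
            hits += len(q_words & set(line.split()))
    return hits

def answer(question, candidates):
    # The real thing tried thousands of options; two will do here.
    return max(candidates, key=lambda c: score(question, c))

q = "who was the first president"
print(answer(q, ["abraham lincoln", "george washington"]))  # george washington
```

Both candidates match some evidence, but "george washington" earns the more confident "yes," so that's the answer the machine spits out.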
Computers that can talk like humans
And there's another area of language use that also has plentiful labeled data: machine translation. Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning. Go try it out – translate a letter to your friend or relative who has a different first language than you. I use it a lot myself.
On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity. There's no known roadmap to fluency for our silicon sisters and brothers. When we humans understand one another, underneath all the words and somewhat logical grammatical rules is "general common sense and reasoning." You can't work with language without that very particular human skill – a broad, unwieldy, amorphous thing we humans amazingly have.
So our hopes and dreams of talking computers are dashed because, unfortunately, there's no labeled data for "talking like a person." You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer. But the general notion of "talking like a human" is not a well-defined problem. Computers can only solve problems that are precisely defined.
So we can't leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001's evil HAL computer, or the friendly, helpful ship computer in Star Trek. You can converse with those machines in English very much like you would with a human. It's easy. Ya just have to be a character in a science fiction movie.
Intelligence is subjective, so A.I. has no real definition
Now, if you think you don't already know enough about AI, you're wrong. There is nothing to know, because it isn't actually a thing. There's literally no meaningful definition whatsoever. AI poses as a field, but it's actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to "smart computer." I must warn you, do not look up "self-referential" in the dictionary. You'll get stuck in an infinite loop.
Many definitions are even more circular than "smart computer," if that's possible. They just flat out use the word "intelligence" itself within the definition of AI, like "intelligence demonstrated by a machine."
If you've assumed there are more subtle shades of meaning at hand, surprise – there aren't. There's no way around how utterly subjective the word "intelligence" is. For computers and engineering, "intelligence" is an arbitrary concept, irrelevant to any precise goal. All attempts to define AI fail to resolve this vagueness.
Now, in practice the word is often just – confusingly – used as a synonym for machine learning. But as for AI as its own concept, most proposed definitions are variations of the following three:
1) AI is getting a computer to think like a human. Mimic human cognition. Now, we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.
2) AI is getting a computer to act like a human. Mimic human behavior. Cause if it walks like a duck and talks like a duck... But it doesn't and it can't and we're way too sophisticated and complex to fully understand ourselves, let alone translate that understanding into computer code. Besides, fooling people into thinking a computer in a chatroom is actually a human – that's the famous Turing Test for machine intelligence – is an arbitrary accomplishment and it's a moving target as we humans continually become wiser to the trickery used to fool us.
3) AI is getting computers to solve hard problems. Get really good at tasks that seem to require "intelligence" or "human-level" capability, such as driving a car, recognizing human faces, or mastering chess. But now that computers can do them, these tasks don't seem so intelligent after all. Everything a computer does is just mechanical and well understood and in that way mundane. Once the computer can do it, it's no longer so impressive and it loses its charm. A computer scientist named Larry Tesler suggested we define intelligence as "whatever machines haven't done yet." Humorous! A moving-target definition that defines itself out of existence.
By the way, the points in this article also apply to the term "cognitive computing," which is another poorly-defined term coined to allege a relationship between technology and human cognition.
The logical fallacy of believing in A.I.'s inevitability
The thing is, "artificial intelligence" itself is a lie. Just evoking that buzzword automatically insinuates that technological advancement is making its way toward the ability to reason like people. To gain humanlike "common sense." That's a powerful brand. But it's an empty promise. Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.
Now, some may respond to me, "Isn't inspired, visionary ambition a good thing? Imagination propels us and unknown horizons beckon us!" Arthur C. Clarke, the author of 2001, made a great point: "Any sufficiently advanced technology is indistinguishable from magic." I agree. However, that does not mean any and all "magic" we can imagine – or include in science fiction – could eventually be achieved by technology. Just 'cause it's in a movie doesn't mean it's gonna happen. AI evangelists often invoke Arthur's point – but they've got the logic reversed. My iPhone seems very "Star Trek" to me, but that's not an argument that everything on Star Trek is gonna come true. The fact that creative fiction writers can make shows like Westworld is not at all evidence that stuff like that could happen.
Now, maybe I'm being a buzzkill, but actually I'm not. Let me put it this way. The uniqueness of humans and the real advancements of machine learning are each already more than amazing and exciting enough to keep us entertained. We don't need fairy tales – especially ones that mislead.
Sophia: A.I.'s most notoriously fraudulent publicity stunt
The star of this fairy tale, the leading role of "The Princess" is played by Sophia, a product of Hanson Robotics and AI's most notorious fraudulent publicity stunt. This robot has applied her artificial grace and charm to hoodwink the media. Jimmy Fallon and other interviewers have hosted her – it, I mean have hosted it. But when it "converses," it's all scripts and canned dialogue – misrepresented as spontaneous conversation – and in some contexts, rudimentary chatbot-level responsiveness.
Believe it or not, three fashion magazines have featured Sophia on their cover, and, even goofier and sillier, the country Saudi Arabia officially granted it citizenship. For real. The first robot citizen. I'm actually a little upset about this, 'cause my microwave and pet rock have also applied for citizenship but still no word.
Sophia is a modern-day Mechanical Turk – which was an 18th century hoax that fooled the likes of Napoleon Bonaparte and Benjamin Franklin into believing they'd just lost a game of chess to a machine. A mannequin would move the chess pieces and the victims wouldn't notice there was actually a small human chess expert hidden inside a cabinet below the chess board.
In a modern day parallel, Amazon has an online service you use to hire workers to perform many small tasks that require human judgement, like choosing the nicest looking of several photographs. It's named Amazon Mechanical Turk, and its slogan, "Artificial Artificial Intelligence." Which reminds me of this great vegetarian restaurant with "mock mock duck" on the menu – I swear, it tastes exactly like mock duck. Hey, if it talks like a duck, and it tastes like a duck...
Yes indeed, the very best fake AI is humans. In 1965, when NASA was defending the idea of sending humans to space, they put it this way: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor." I dunno. I think there's some skill in it. ;-)
The myth of dangerous superintelligence
Anyway, as for Sophia, mass hysteria, right? Well, it gets worse: Claims that AI presents an existential threat to the human race. From the most seemingly credible sources, the most elite of tech celebrities, comes a doomsday vision of homicidal robots and killer computers. None other than Bill Gates, Elon Musk, and even the late, great Stephen Hawking have jumped on the "superintelligence singularity" bandwagon. They believe machines will achieve a degree of general competence that empowers the machines to improve their own general competence – so much so that this will then quickly escalate past human intelligence, and do so at the lightning speed of computers, a speed the computers themselves will continue to improve by virtue of their superintelligence, and before you know it you have a system or entity so powerful that the slightest misalignment of objectives could wipe out the human race. Like if we naively commanded it to manufacture as many rubber chickens as possible, it might invent an entire new high-speed industry that can make 40 trillion rubber chickens but that happens to result in the extinction of Homo sapiens as a side effect. Well, at least it would be easier to get tickets for Hamilton.
There are two problems with this theory. First, it's so compellingly dramatic that it's gonna ruin movies. If the best bad guy is always a robot instead of a human, what about Nurse Ratched and Norman Bates? I need my Hannibal Lecter! "The best bad guy," by the way, is an oxymoron. And so is "artificial intelligence." Just sayin'.
But it is true: Robopocalypse is definitely coming. Soon. I'm totally serious, I swear. Based on a novel by the same name, Michael Bay – of the "Transformers" movies – is currently directing it as we speak. Fasten your gosh darn seatbelts people, 'cause, if "Robopocalypse" isn't in 3D, you were born in the wrong parallel universe.
Oh yeah, and the second problem with the AI doomsday theory is that it's ludicrous. AI is so smart it's gonna kill everyone by accident? Really really stupid superintelligence? That sounds like a contradiction.
To be more precise, the real problem is that the theory presumes that technological advancements move us along a path toward humanlike "thinking" capabilities. But they don't. It's not headed in that direction. I'll come back to that point again in a minute – first, a bit more on how widely this apocalyptic theory has radiated.
A widespread belief in superintelligence
The Kool-Aid these high-tech royalty drink, the go-to book that sets the foundation, is the New York Times bestseller "Superintelligence," by Nick Bostrom, who's a professor of applied ethics at Oxford University. The book mongers the fear and fans the flames, if not igniting the fire in the first place for many people. It explores how we might "make an intelligence explosion survivable." The Guardian newspaper ran a headline, "Artificial intelligence: 'We're like children playing with a bomb'," and Newsweek: "Artificial Intelligence Is Coming, and It Could Wipe Us Out," both headlines obediently quoting Bostrom himself.
Bill Gates "highly recommends" the book, Elon Musk said AI is "vastly more risky than North Korea" – as Fortune Magazine repeated in a headline – and, quoting Stephen Hawking, the BBC ran a headline, "'AI could spell end of the human race'."
In a TED Talk that's been viewed 5 million times (across platforms), the bestselling author and podcast intellectual Sam Harris states with supreme confidence, "At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves."
Both he and Bostrom show the audience an intelligence spectrum during their TED Talks – here's the one by Bostrom:
What happens when our computers get smarter than we are? | Nick Bostrom
You can see as we move along the path from left to right we pass a mouse, a chimp, a village idiot, and then the very smart theoretical physicist Ed Witten. He's relatively close to the idiot, because even an idiot human is much smarter than a chimp, relatively speaking. You can see the arrow just above the spectrum showing that "AI" progresses in that same direction, along to the right. At the very rightmost position is Bostrom himself, which is either just an accident of photography, or proof that he himself is an AI robot.
Oops, that was the wrong clip – uh, that was Dr. Frankenstein, but, ya know, same scenario.
A falsely conceived "spectrum of intelligence"
Anyway, that falsely-conceived intelligence spectrum is the problem. I've read the book and many of the interviews and watched the talks and pretty much all the believers intrinsically build on an erroneous presumption that "smartness" or "intelligence" falls more or less along a single, one-dimensional spectrum. They presume that the more adept machines become at more and more challenging tasks, the higher they will rank on this scale, eventually surpassing humans.
But machine learning has us marching along a different path. We're moving fast, and we'll likely go very far, but we're going in a different direction, only tangentially related to human capabilities.
The trick is to take a moment to think about this difference. Our own personal experience of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from ourselves beneath a veil of a conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us towards in any meaningful way.
Thinking abstractly often feels uncomplicated. We draw visuals in our mind, like a not-to-scale map of a city we're navigating, or a "space" of products that two large companies are competing to sell, with each company dominating in some areas but not in others... or, when thinking about AI, the mistaken vision that increasingly adept capabilities – both intellectual and computational – all fall along the same, somewhat narrow path.
Now, Bostrom rightly emphasizes that we should not anthropomorphize what intelligent machines may be like in the future. It's not human, so it's hard to speculate on the specifics, and perhaps it will seem more like a space alien's intelligence. But what Bostrom and his followers aren't seeing is that, since they believe technology advances along a spectrum that includes and then transcends human cognition, the spectrum itself as they've conceived it is anthropomorphic. It has humanlike qualities built in. Now, your common sense reasoning may seem to you like a "natural stage" of any sort of intellectual development, but that's a very human-centric perspective. Your common sense is intricate and very, very particular. It's far beyond our grasp – for anyone – to formally define a "spectrum of intelligence" that includes human cognition on it. Our brains are spectacularly multi-faceted and adept, in a very arcane way.
Machines progress along a different spectrum
Machine learning actually does work by defining a kind of spectrum, but only for an extremely limited sort of trajectory – only for tasks that have labeled data, such as identifying objects in images. With labeled data, you can compare and rank various attempts to solve the problem. The computer uses the data to measure how well it does. Like, one neural network might correctly identify 90% of the trucks in the images and then a variation after some improvements might get 95%.
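A tiny sketch of that comparison – the labels and predictions below are made up, but this is all the ranking amounts to: counting agreements with the labeled data.

```python
def accuracy(predictions, labels):
    # Fraction of predictions that agree with the pre-assigned labels.
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

labels  = ["truck", "car", "truck", "car", "truck"]  # human-labeled images
model_a = ["truck", "car", "car",   "car", "truck"]  # one network's guesses
model_b = ["truck", "car", "truck", "car", "truck"]  # an improved variant

print(accuracy(model_a, labels))  # 0.8
print(accuracy(model_b, labels))  # 1.0
```

Notice the spectrum this defines is only "better or worse at labeling trucks" – nothing about it measures, or even gestures at, general intelligence.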
Getting better and better at a specific task like that obviously doesn't lead to general common sense reasoning capabilities. We're not on that trajectory, so the fears should be allayed. The machine isn't going to get to a human-like level where it then figures out how to propel itself into superintelligence. No, it's just gonna keep getting better at identifying objects, that's all.
Intelligence isn't a Platonic ideal that exists separately from humans, waiting to be discovered. It's not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That's a ghost story.
It might feel tempting to believe that increased complexity leads to intelligence. After all, computers are incredibly general-purpose – they can basically do any task, if only we can figure out how to program them to do that task. And we're getting them to do more and more complex things. But just because they could do anything doesn't mean they will spontaneously do everything we imagine they might.
No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain "general common sense reasoning." Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word "intelligence" might apply to computers.
Don't sell, buy, or regulate on A.I.
Machines will remain fundamentally under our control. Computer errors will kill – people will die from autonomous vehicles and medical automation – but not on a catastrophic level, unless by the intentional design of human cyber attackers. When a misstep does occur, we take the system offline and fix it.
Now, the aforementioned techno-celebrity believers are true intellectuals and are truly accomplished as entrepreneurs, engineers, and thought leaders in their respective fields. But they aren't machine learning experts. None of them are. When it comes to their AI pontificating, it would truly be better for everyone if they published their thoughts as blockbuster movie scripts rather than earnest futurism.
It's time for the term "AI" to be "terminated." Mean what you say and say what you mean. If you're talking about machine learning, call it machine learning. The buzzword "AI" is doing more harm than good. It may sometimes help with publicity, but to at least the same degree, it misleads. AI isn't a thing. It's vaporware. Don't sell it and don't buy it.
And most importantly, do not regulate on "AI"! Technology greatly needs regulation in certain arenas, for example, to address bias in algorithmic decision-making and the development of autonomous weapons – which often use machine learning – so clarity is absolutely critical in these discussions. Using the imprecise, misleading term "artificial intelligence" is gravely detrimental to the effectiveness and credibility of any initiative that regulates technology. Regulation is already hard enough without muddying the waters.
Want more of Dr. Data?
Higher education faces challenges that are unlike any other industry. What path will ASU, and universities like ASU, take in a post-COVID world?
- Everywhere you turn, the idea that coronavirus has brought on a "new normal" is present and true. But for higher education, COVID-19 exposes a long list of pernicious old problems more than it presents new problems.
- It was widely known, yet ignored, that digital instruction must be embraced. When combined with traditional, in-person teaching, it can enhance student learning outcomes at scale.
- COVID-19 has forced institutions to understand that far too many higher education outcomes are determined by a student's family income, and in the context of COVID-19 this means that lower-income students, first-generation students and students of color will be disproportionately afflicted.
What conditions of the new normal were already appreciated widely?<p>First, we understand that higher education is unique among industries. Some industries are governed by markets. Others are run by governments. Most operate under the influence of both markets and governments. And then there's higher education. Higher education as an "industry" involves public, private, and for-profit universities operating at small, medium, large, and now massive scales. Some higher education industry actors are intense specialists; others are adept generalists. Some are fantastically wealthy; others are tragically poor. Some are embedded in large cities; others are carefully situated near farms and frontiers.</p> <p>These differences demonstrate just some of the complexities that shape higher education. Still, we understand that change in the industry is underway, and we must be active in directing it. Yet because of higher education's unique (and sometimes vexing) operational and structural conditions, many of the lessons from change management and the science of industrial transformation are only applicable in limited or highly modified ways. For evidence of this, one can look at various perspectives, including those that we have offered, on such topics as <a href="https://www.insidehighered.com/digital-learning/blogs/rethinking-higher-education/lessons-disruption" target="_blank">disruption</a>, <a href="https://www.nytimes.com/2020/02/20/education/learning/education-technology.html" target="_blank">technology management</a>, and so-called "<a href="https://www.insidehighered.com/sites/default/server_files/media/Excerpt_IHESpecialReport_Growing-Role-of-Mergers-in-Higher-Ed.pdf" target="_blank">mergers and acquisitions</a>" in higher education. In each of these spaces, the "market forces" and "market rules" for higher education are different than they are in business, or even in government. 
This has always been the case and it is made more obvious by COVID-19.</p> <p>Second, with so much excitement about innovation in higher education, we sometimes lose sight of the fact that students are—and should remain—the core cause for innovation. Higher education's capacity to absorb new ideas is strong. But the ideas that endure are those designed to benefit students, and therefore society. This is important to remember because not all innovations are designed with students in mind. The recent history of innovation in higher education includes several cautionary tales of what can happen when institutional interests—or worse, <a href="https://www.insidehighered.com/news/2016/02/09/apollos-new-owners-seek-fresh-start-beleaguered-company" target="_blank">shareholder</a> interests—are placed above student well-being.</p>
Photo: Getty Images<p>Third, it is abundantly apparent that universities must leverage technology to increase educational quality and access. The rapid shift to delivering an education that complies with social distancing guidelines speaks volumes about the adaptability of higher education institutions, but this transition has also posed unique difficulties for colleges and universities that had been slow to adopt digital education. The last decade has shown that online education, implemented effectively, can meet or even surpass the quality of in-person <a href="https://link-springer-com.ezproxy1.lib.asu.edu/article/10.1007/s10639-019-10027-z" target="_blank">instruction</a>.</p><p>Digital instruction, broadly defined, leverages online capabilities and integrates adaptive learning methodologies, predictive analytics, and innovations in instructional design to enable increased student engagement, personalized learning experiences, and improved learning outcomes. The ability of these technologies to transcend geographic barriers and to shrink the marginal cost of educating additional students makes them essential for delivering education at scale.</p><p>As a bonus, and it is no small thing given that they are the core cause for innovation, students embrace and enjoy digital instruction. It is their preference to learn in a format that leverages technology. This should not be a surprise; it is now how we live in all facets of life.</p><p>Still, we have only barely begun to conceive of the impact digital education will have. For example, emerging virtual and augmented reality technologies that facilitate interactive, hands-on learning will transform the way that learners acquire and apply new knowledge. Technology-enabled learning cannot replace the traditional college experience or ensure the survival of any specific college, but it can enhance student learning outcomes at scale. This has always been the case, and it is made more obvious by COVID-19.</p>
Which conditions of the new normal were already emerging suspicions?<p>Our collective thinking about the role of institutional or university-to-university collaboration and networking has benefitted from a new clarity in light of COVID-19. We now recognize more than ever that colleges and universities must work together to ensure that the American higher education system is resilient and sufficiently robust to meet the needs of students and their families.</p> <p>In recent weeks, various commentators have suggested that higher education will face a wave of institutional <a href="https://www.businessinsider.com/scott-galloway-predicts-colleges-will-close-due-to-pandemic-2020-5" target="_blank">closures</a> and consolidations and that large institutions with significant online instruction capacity will become dominant.</p> <p>While ASU is the largest public university in the United States by enrollment and among the most well-equipped in online education, we strongly oppose "let them fail" mindsets. The strength of American higher education relies on its institutional diversity, and on the ability of colleges and universities to meet the needs of their local communities and educate local students. The needs of learners are highly individualized, demanding a wide range of options to accommodate the aspirations and learning styles of every kind of student. Education will become less relevant and meaningful to students, and less responsive to local needs, if institutions of higher learning are allowed to fail. </p> <p>Preventing this outcome demands that colleges and universities work together to establish greater capacity for remote, distributed education. This will help institutions with fewer resources adapt to our new normal and continue to fulfill their mission of serving students, their families, and their communities. Many had suspected that collaboration and networking were preferable to letting vulnerable colleges fail.
COVID-19's new normal seems to be confirming this.</p>
President Barack Obama delivers the commencement address during the Arizona State University graduation ceremony at Sun Devil Stadium May 13, 2009 in Tempe, Arizona. Over 65,000 people attended the graduation.
Photo by Joshua Lott/Getty Images<p>A second condition of the new normal that many had suspected to be true in recent years is the limited role that any one university or type of university can play as an exemplar to universities more broadly. For decades, the evolution of higher education has been shaped by the widespread imitation of a small number of elite universities. Most public research universities did well by replicating Berkeley or Michigan. Most small private colleges did well by replicating Williams or Swarthmore. And all universities paid close attention to Harvard, Princeton, MIT, Stanford, and Yale. It is not an exaggeration to say that the logic of replication has guided the evolution of higher education for centuries, both in the US and abroad.</p><p>Only recently have we been able to move beyond replication to new strategies of change, and COVID-19 has confirmed the legitimacy of doing so. For example, cases such as <a href="https://www.washingtonpost.com/education/2020/03/10/harvard-moves-classes-online-advises-students-stay-home-after-spring-break-response-covid-19/" target="_blank">Harvard's</a> eviction of students over the course of less than one week or <a href="https://www.nhregister.com/news/coronavirus/article/Mayor-New-Haven-asks-for-coronavirus-help-Yale-15162606.php" target="_blank">Yale's apparent reluctance</a> to work with the city of New Haven highlight that even higher education's legacy gold standards have limits and weaknesses. We are hopeful that the new normal will include a more active and earnest recognition that we need many types of universities. We think the new normal invites us to rethink the very nature of "gold standards" for higher education.</p>
A graduate student protests MIT's rejection of some evacuation exemption requests.
Photo: Maddie Meyer/Getty Images<p>Finally, and perhaps most importantly, we had started to suspect and now understand that America's colleges and universities are among the many institutions of democracy and civil society that are, by their very design, incapable of being sufficiently responsive to the full spectrum of modern challenges and opportunities they face. Far too many higher education outcomes are determined by a student's family income, and in the context of COVID-19 this means that lower-income students, first-generation students and students of color will be disproportionately affected. And without new designs, we can expect postsecondary success for these same students to be as elusive in the new normal as it was in the <a href="http://pellinstitute.org/indicators/reports_2019.shtml" target="_blank">old normal</a>. This is not just because some universities fail to sufficiently recognize and engage the promise of diversity; it is because few universities have been designed from the outset to effectively serve the unique needs of lower-income students, first-generation students and students of color.</p>
Where can the new normal take us?<p>As colleges and universities face the difficult realities of adapting to COVID-19, they also face an opportunity to rethink their operations and designs in order to respond to social needs with greater agility, adopt technology that enables education to be delivered at scale, and collaborate with each other in order to maintain the dynamism and resilience of the American higher education system.</p> <p>COVID-19 raises questions about the relevance, the quality, and the accessibility of higher education—and these are the same challenges higher education has been grappling with for years. </p> <p>ASU has been able to rapidly adapt to the present circumstances because we have spent nearly two decades not just anticipating but <em>driving</em> innovation in higher education. We have adopted a <a href="https://www.asu.edu/about/charter-mission-and-values" target="_blank">charter</a> that formalizes our definition of success in terms of "who we include and how they succeed" rather than "<a href="https://www.washingtonpost.com/opinions/2019/10/17/forget-varsity-blues-madness-lets-talk-about-students-who-cant-afford-college/" target="_blank">who we exclude</a>." We adopted an entrepreneurial <a href="https://president.asu.edu/read/higher-logic" target="_blank">operating model</a> that moves at the speed of technological and social change. We have launched initiatives such as <a href="https://www.instride.com/how-it-works/" target="_blank">InStride</a>, a platform for delivering continuing education to learners already in the workforce. We developed our own robust technological capabilities in ASU <a href="https://edplus.asu.edu/" target="_blank">EdPlus</a>, a hub for research and development in digital learning that, even before the current crisis, allowed us to serve more than 45,000 fully online students. 
We have also created partnerships with other forward-thinking institutions in order to mutually strengthen our capabilities for educational accessibility and quality; this includes our role in co-founding the <a href="https://theuia.org/" target="_blank">University Innovation Alliance</a>, a consortium of 11 public research universities that share data and resources to serve students at scale. </p> <p>For ASU, and universities like ASU, the "new normal" of a post-COVID world looks surprisingly like the world we already knew was necessary. Our record-breaking summer 2020 <a href="https://asunow.asu.edu/20200519-sun-devil-life-summer-enrollment-sets-asu-record" target="_blank">enrollment</a> speaks to this. What COVID demonstrates is that we were already headed in the right direction; it necessitates that we continue forward with new intensity and, we hope, with more partners. In fact, rather than "new normal," we might just say it's "go time." </p>
The brains of two genetically edited babies born last year in China might have enhanced memory and cognition, but that doesn't mean the scientific community is pleased.
- In November, Chinese scientist He Jiankui reported that he'd used the CRISPR tool to edit the embryos of two girls.
- He deleted a gene called CCR5, which encodes a receptor that HIV, the virus that causes AIDS, uses to enter human cells.
- In addition to blocking HIV infection, deleting this gene might also have positive effects on memory and cognition. Still, virtually all scientists say we're not ready to use gene-editing technology on babies.
The CRISPR co-inventor's response to He<p>Jennifer Doudna, a professor of chemistry and molecular and cell biology at UC Berkeley and co-inventor of CRISPR, published a <a href="https://news.berkeley.edu/2018/11/26/doudna-responds-to-claim-of-first-crispr-edited-babies/" target="_blank">statement</a> in November saying the public should consider the following points on the use of gene-editing technology:</p><ul><li>The clinical report has not been published in the peer-reviewed scientific literature.</li><li>Because the data has not been peer reviewed, the fidelity of the gene editing process cannot be evaluated.</li><li>The work, as described to date, reinforces the urgent need to confine the use of gene editing in human embryos to cases where a clear unmet medical need exists, and where no other medical approach is a viable option, as recommended by the National Academy of Sciences.</li></ul><p>In 2017, Doudna spoke to Big Think about the tricky regulatory and philosophical questions we might soon wrestle with if genetically designing babies becomes an option for parents.</p>
Manly Bands wanted to improve on men's wedding bands. Mission accomplished.
- Manly Bands was founded in 2016 to provide better options and customer service in men's wedding bands.
- Unique materials include antler, dinosaur bones, meteorite, tungsten, and whiskey barrels.
- The company donates a portion of profits to charity every month.
Iranian Tolkien scholar finds intriguing parallels between subcontinental geography and famous map of Middle-earth.
- J.R.R. Tolkien hinted that his stories are set in a really ancient version of Europe.
- But a fantasy realm can be inspired by a variety of places, and perhaps so is Tolkien's world.
- These intriguing similarities with Asian topography suggest it may be time to 'decolonise' Middle-earth.
Mental decolonisation<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM0OS9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTY1MDU4Mjg3N30.pKS1PLxKYeJ6WDPAcleg7NCxzDn7Pddcg9rSJaul6no/img.png?width=980" id="56ee5" class="rm-shortcode" data-rm-shortcode-id="1d2ba98946accd12f7e0070c8d10154d" data-rm-shortcode-name="rebelmouse-image" alt="Menu page for Arda.ir, the website of the Persian Tolkien Society." />
Menu page for Arda.ir, the website of the Persian Tolkien Society.
Image: Arda.ir<p>Where on earth was Middle-earth? Based on a few hints by Tolkien himself, we've always sort of assumed that his stories of "The Hobbit" and "The Lord of the Rings" were centered on Europe, but so long ago that the shape of the coasts and the land has changed. </p><p>But perhaps that's too easy and too Eurocentric an assumption; perhaps, like so many other things these days, Tolkien's fantasy realm too is in dire need of mental decolonisation.</p><p>And here's an excellent occasion: an Iranian Tolkienologist has found intriguing hints that the writer based some of Middle-earth's topography on mountains, rivers, and islands located in and near present-day Pakistan. </p><p>As mentioned in a previous article – recently reposted on the <a href="https://www.facebook.com/VeryStrangeMaps" target="_blank">Strange Maps Facebook page</a> on the occasion of the death of Ian Holm – Tolkien admitted that "The Shire is based on rural England, and not on any other country in the world," and that "the action of the story takes place in the North-West of 'Middle-earth', equivalent in latitude to the coastlands of Europe and the north shores of the Mediterranean."</p>
Non-European topography<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1MC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTY1NTQ4MzcyMX0.891LPW42L78fdrwUhXdgOab7cbhs3YOqZK4ukIQx-Rw/img.png?width=980" id="6741c" class="rm-shortcode" data-rm-shortcode-id="2b50c57cb3b8a3a1cc8a4696c89ad954" data-rm-shortcode-name="rebelmouse-image" alt="Map of Tian-shan, the Himalayas, and the Pamirs" />
If you look at it like that, yes: that does resemble Mordor...
Image: Mohammad Reza Kamali, reproduced with kind permission<p>Extrapolating from the location of the Shire in Middle-earth and from other clues dropped by Tolkien, geophysics and geology professor Peter Bird matched the geography of Middle-earth with that of Europe (more about that in the <a href="https://bigthink.com/strange-maps/121-where-on-earth-was-middle-earth?utm_medium=Social&utm_source=Facebook&fbclid=IwAR0ZFYK1EXrf4J3B3X5_U4hSAgidgBs24ZNTYV9QEFbz2qI34OA_DpZsn70#Echobox=1592583835" target="_blank">aforementioned article</a>).</p><p>However, seeing Middle-earth as a mere palimpsest for present-day Europe is to place an undue limit on the imagination of its creator. As Tolkien also said about the shape of his world: "[It] was devised 'dramatically' rather than geologically or paleontologically."</p><p>In other words, certain parts of Middle-earth may very well have been inspired by other places than European ones. It is telling that it took a non-European connoisseur of Tolkien's topography to find some examples. <br></p>
"Seen that map before"<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1MS9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTY1MTQ3Njc3NH0.azDO1_NWm9q9FwMpmqBOV2troOX0ajAXS4lP2bLstJI/img.png?width=980" id="1b193" class="rm-shortcode" data-rm-shortcode-id="21c3d38b14503ba8edac18c0ef1cceb0" data-rm-shortcode-name="rebelmouse-image" alt="Map of Indus river" />
The Indus river is a prominent geographical feature of Pakistan. Its course is similar to that of the Anduin, the Great River of Middle-earth.
Image: Mohammad Reza Kamali, reproduced with kind permission<p>In an article published on <a href="https://arda.ir/" target="_blank">Arda.ir</a>, the web page for the Persian Tolkien Society, Mohammad Reza Kamali writes that during several years of cartographic study, "I found that maybe there are real lands [that] could have inspired Professor Tolkien, and some of them are not in Europe."</p><p>Around 2012, Kamali's eye was caught by a Google map of Central Asia that showed the mountain chain of the Himalayas, the peaks of the Pamirs bunched together in an almost circular area, and the huge, flat oval of the Takla Makan desert, bounded to the north by the Tian Shan mountains. </p><p>"I had seen that map before," he writes. "This is of course Mordor, the land of Sauron and the dark powers of Middle-earth, where Frodo and Sam destroy the One Ring." </p><p>In <a href="http://lotrproject.com/map" target="_blank">Tolkien's world</a>, the Himalayas transform into Ephel Duath, the Mountains of Shadow; and the Tian Shan into Ered Lithui, the Ash Mountains. And the circle-shaped Pamirs "are the same shape and in exactly the same corner as the Udûn of Mordor, where Frodo and Sam originally tried getting into Mordor, via the Black Gate."</p>
Similar shapes<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1Mi9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxMDQyODMzNX0.KHrY7rDCNNaKKJQz-xn431APM2TqxGPCaMsqNvBe1xA/img.jpg?width=980" id="7a9fa" class="rm-shortcode" data-rm-shortcode-id="e87f1af97902201abc042640255606b2" data-rm-shortcode-name="rebelmouse-image" alt="Marine Corps helicopter flying over Tarbela Dam" />
A US Marine Corps helicopter flying over the Tarbela Dam on the Indus river in Pakistan. At its center: a former river island which may have been the inspiration for Cair Andros, a ship-shaped island in Middle-earth's Anduin river.
Image: Paul Duncan (USMC), public domain<p>Mulling over these similarities, Kamali became convinced that Tolkien's map work was heavily inspired by Asia. Looking further, he found more evidence. Consider Anduin, the Great River of Middle-earth, in whose waters the One Ring was lost for more than two thousand years. </p><p>On Tolkien's map, the Anduin bends toward the sea in a shape similar to that of another great river: the Indus, which runs the length of Pakistan. Like the Anduin, it flows to the west of a major mountain chain. A prominent feature of the Anduin is the river island of Cair Andros, just north of Osgiliath. Its name means 'Ship of Long Foam', a reference to its long and narrow shape, and the sharpness of its rocks, which split the waters of the Anduin like a prow. <br></p><p>Kamali is not entirely sure, but proposes that Tolkien may have been inspired by a similar-shaped island in the Indus. Now integrated into the Tarbela Dam, which was inaugurated in 1976, it would still have been a separate island in the 1930s and '40s, when Tolkien dreamed up his map.</p>
Kutch as Tolfalas Island<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1NC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTYwOTU5NjcyNn0.869W8iiowQb9_T3laFKOUe5o5UMXuMlSITb1VxRlC2g/img.png?width=980" id="9c49e" class="rm-shortcode" data-rm-shortcode-id="548bafc6042cc7515e07f77657aa161c" data-rm-shortcode-name="rebelmouse-image" alt="Map of Kutch" />
During the rainy season, the coastal region of Kutch, near the mouth of the Indus, turns into an island that resembles Tolfalas Island, near the mouth of the Anduin.
Image: Mohammad Reza Kamali, reproduced with kind permission<p>Turning our eyes to the mouths of the Anduin and the Indus, we see another pair of islands, and this time Kamali is more confident that the real island inspired the fictional one. The fictional one is Tolfalas Island, the largest island in Belfalas Bay. </p><p>At first glance, it doesn't seem to have a real-life counterpart near where the Indus joins the Arabian Sea. But take a look at the coastal part of the Indian state of Gujarat. It is known as <em>Kutch</em>, a name which apparently refers to its alternately wet and dry states. In the rainy season, the shallow wetlands flood and Kutch becomes an island – the biggest island in the Gulf of Kutch, and not too dissimilar to Tolfalas Island. </p>
General knowledge<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1NS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMDIwODkyOH0.aInJedv3tiQo1LmW-M6D5LV699oeWNltxeYcVKWwtF0/img.jpg?width=980" id="9bc6e" class="rm-shortcode" data-rm-shortcode-id="01d97d3941f9ba732b4df35c3aedd977" data-rm-shortcode-name="rebelmouse-image" alt="British Indian Empire 1909 Imperial Gazetteer of India" />
1909 map showing British India in pink (direct British control) and yellow (princely states). Circled: Kutch, clearly recognisable as an island.
Image: Edinburgh Geographical Institute; J. G. Bartholomew and Sons, public domain<p>But are these similarities really more than coincidences? Why would Tolkien, who was based in Oxford and steeped in English lore and Germanic mythology, turn to the Indian subcontinent for topographical inspiration? Perhaps because cartographic knowledge of that part of the world was far more widespread in Britain then than it is now. Until the late 1940s, the countries we know today as India and Pakistan were part of the British Empire. Detailed maps of the region would have been standard fare for British atlases. </p><p>Kamali is convinced that the topographical features on Tolkien's map of Middle-earth are not mere fantasy, but derive from actual places in our world, and were 'riddled' onto the map. In that case, we may look forward to more discoveries of Tolkien's real-world inspiration. </p>
From Frodingham to Frodo<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMzQzMDM1Ni9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTU5NzgzMzE2OH0.uMd43VxS9WQSWr1Z0IQ-UxIhBYkERhxTU7hoPvNachk/img.jpg?width=980" id="05037" class="rm-shortcode" data-rm-shortcode-id="ff9aace7fc7c111df3639a276cedf63c" data-rm-shortcode-name="rebelmouse-image" alt="Photograph of J. R. R. Tolkien in army uniform" />
J.R.R. Tolkien in 1916, when he was 24. Around that time, he was stationed near the village of Frodingham, which may have given him the inspiration for the name of the main protagonist of The Lord of the Rings.
Image: public domain<p>Here's one example of Tolkienography—if that's what we can call the effect of actual geography on this particular writer's imagination—which I gleaned myself, some years ago in East Yorkshire. A local historian told me that Tolkien had been stationed in the area during the First World War, and had apparently stored away some local place names for later use. The name Frodo, he said, derived from a town where he had attended a few dances – Frodingham, a village across the Humber in northern Lincolnshire, not far from Scunthorpe (<em>Scunto</em>? We dodged a bullet there). </p><p>Whether that story is entirely true or not is beside the point. As fantasy fans know, any grail quest is ultimately about the quest, not the grail. In fact, to quote Mr Kamali, the treasure is important only because it's well hidden, "by a clever professor who enjoys riddles."</p><p><em>Unless otherwise indicated, illustrations are from Mr Kamali's <a href="https://arda.ir/the-tale-of-the-annotated-map-and-tolkien-hidden-riddles/?fbclid=IwAR3RmtU0ZdyzQGlK-iCsUjho4LA2W279fwO9dt8vv90FX2IeO3zrfMuMToU" target="_blank">article</a> on <a href="https://arda.ir/" target="_blank">Arda.ir</a>, reproduced with kind permission. </em><br></p><p><strong>Strange Maps #1036</strong></p><p><em>Got a strange map? Let me know at </em><a href="mailto:firstname.lastname@example.org">email@example.com</a><em>.</em></p>