Why A.I. is a big fat lie
The Dr. Data Show is a new web series that breaks the mold for data science infotainment, captivating the planet with short webisodes that cover the very best of machine learning and predictive analytics.
- All the hype around artificial intelligence misunderstands what intelligence really is.
- And A.I. is definitely, definitely not going to kill you, ever.
- Machine learning as a process and a concept, however, holds more promise.
A.I. is a big fat lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives. The much better, precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.
On the other hand, AI does provide some great material for nerdy jokes. So put on your skepticism hat, it's time for an AI-debunkin', slam-dunkin', machine learning-lovin', robopocalypse myth-bustin', smackdown jamboree – yeehaw!
3 main points
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.
Neural networks for the win
In the movie "Terminator 2: Judgment Day," the titular robot says, "My CPU is a neural net processor, a learning computer." The neural network of which that famous robot speaks is actually a real kind of machine learning method. A neural network is a way to depict a complex mathematical formula, organized into layers. This formula can be trained to do things like recognize images for self-driving cars. For example, watch several seconds of a neural network performing object recognition.
What you see it doing there is truly amazing. The network's identifying all those objects. With machine learning, the computer has essentially programmed itself to do this. On its own, it has worked out the nitty gritty details of exactly what patterns or visual features to look for. Machine learning's ability to achieve such things is awe-inspiring and extremely valuable.
The latest improvements to neural networks are called deep learning. They're what make this level of success in object recognition possible. With deep learning, the network is quite literally deeper – more of those layers. However, even way way back in 1997, the first time I taught the machine learning course, neural networks were already steering self-driving cars, in limited contexts, and we even had our students apply them for face recognition as a homework assignment.
The architecture for a simple neural network with four layers
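To make the "layers" idea concrete, here's a minimal sketch in plain Python/NumPy of a toy four-layer network like the one diagrammed above. The layer sizes, random weights, and fake "image" are all made up for illustration – a real network would learn its weights from labeled photos.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A toy "four-layer" network: the whole thing is one big nested formula,
# organized into layers of weights (sizes chosen arbitrarily for illustration).
layer_sizes = [784, 128, 64, 10]   # e.g., image pixels in, object-class scores out
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Push the input through each layer in turn
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:   # nonlinearity on all but the last layer
            x = relu(x)
    return x

fake_image = rng.random(784)       # stand-in for one flattened photo
print("class scores:", forward(fake_image).round(2))
```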
But the more recent improvements are uncanny, boosting their power for many industrial applications. So, we've even launched a new conference, Deep Learning World, which covers the commercial deployment of deep learning. It runs alongside our long-standing machine learning conference series, Predictive Analytics World.
Supervised machine learning requires labeled data
So, with machines just getting better and better at humanlike tasks, doesn't that mean they're getting smarter and smarter, moving towards human intelligence?
No. It can get really, really good at certain tasks, but only when there's the right data from which to learn. For the object recognition discussed above, it learned to do that from a large number of example photos within which the target objects were already correctly labeled. It needed those examples to learn to recognize those kinds of objects. This is called supervised machine learning: when there is pre-labeled training data. The learning process is guided or "supervised" by the labeled examples. It keeps tweaking the neural network to do better on those examples, one incremental improvement at a time. That's the learning process. And the only way it knows the neural network is improving or "learning" is by testing it on those labeled examples. Without labeled data, it couldn't recognize its own improvements so it wouldn't know to stick with each improvement along the way. Supervised machine learning is the most common form of machine learning.
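Here's a minimal sketch of that supervised loop – a toy logistic model on synthetic labeled data, not any real system. The learner repeatedly nudges itself to do a little better on the labeled examples, and the only way it can tell it's improving is by scoring itself against those same labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic *labeled* data: every example comes with the correct answer attached.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the labels the learner must match

w = np.zeros(2)   # the model starts out knowing nothing
b = 0.0

def predict(X):
    return 1 / (1 + np.exp(-(X @ w + b)))   # a simple logistic model

for step in range(501):
    p = predict(X)
    # Tweak the model a little in the direction that reduces its error
    # on the labeled examples -- that's the "supervision."
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)
    if step % 100 == 0:
        # The only way to see it "learning" is to score it on those same labels.
        accuracy = np.mean((predict(X) > 0.5) == y)
        print(f"step {step}: accuracy {accuracy:.2f}")
```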
Here's another example. In 2011, IBM's Watson computer defeated the two all-time human champions on the TV quiz show Jeopardy. I'm a big fan. This was by far the most amazing thing I've seen a computer do – more impressive than anything I'd seen during six years of graduate school in natural language understanding research. Here's a 30-second clip of Watson answering three questions.
To be clear, the computer didn't actually hear the spoken questions but rather was fed each question as typed text. But its ability to rattle off one answer after another – given the convoluted, clever wording of Jeopardy questions, which are designed for humans and run across any and all topics of conversation – feels to me like the best "intelligent-like" thing I've ever seen from a computer.
But the Watson machine could only do that because it had been given many labeled examples from which to learn: 25,000 questions taken from prior years of this TV quiz show, each with their own correct answer.
At the core, the trick was to turn every question into a yes/no prediction: "Will such-n-such turn out to be the answer to this question?" Yes or no. If you can answer that question, then you can answer any question – you just try thousands of options out until you get a confident "yes." For example, "Is 'Abraham Lincoln' the answer to 'Who was the first president?'" No. "Is 'George Washington'?" Yes! Now the machine has its answer and spits it out.
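As a rough illustration of that candidate-scoring trick – not Watson's actual code – here's a sketch where a hypothetical confidence_that_answer_is_correct function stands in for the learned yes/no model:

```python
# A hypothetical stand-in for the learned yes/no model: given a question and one
# candidate answer, return a confidence that the candidate is correct. (In the
# real system that score came from supervised learning on ~25,000 labeled
# Jeopardy questions; here it's just a placeholder.)
def confidence_that_answer_is_correct(question: str, candidate: str) -> float:
    return 0.97 if candidate == "George Washington" else 0.02   # toy scores

def answer(question: str, candidates: list[str]) -> str:
    # Turn "answer the question" into many yes/no predictions,
    # then keep the candidate with the most confident "yes."
    scored = [(confidence_that_answer_is_correct(question, c), c) for c in candidates]
    best_score, best_candidate = max(scored)
    return best_candidate

print(answer("Who was the first president?",
             ["Abraham Lincoln", "George Washington", "Ben Franklin"]))
```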
Computers that can talk like humans
And there's another area of language use that also has plentiful labeled data: machine translation. Machine learning gobbles up a feast of training data for translating between, say, English and Japanese, because there are tons of translated texts out there filled with English sentences and their corresponding Japanese translations.
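If you're wondering what that "feast of training data" actually looks like, it's essentially pairs: each English sentence's "label" is its Japanese translation. A tiny, made-up illustration:

```python
# What machine translation's "labeled data" looks like: pairs of sentences,
# where each English sentence's label is its Japanese translation.
# (These example pairs are illustrative, not drawn from a real corpus.)
parallel_corpus = [
    ("Good morning.",          "おはようございます。"),
    ("Where is the station?",  "駅はどこですか。"),
    ("Thank you very much.",   "どうもありがとうございます。"),
]

for english, japanese in parallel_corpus:
    # A translation model is trained to map the left column to the right column,
    # exactly as any other supervised learner maps inputs to labels.
    print(f"{english!r}  ->  {japanese!r}")
```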
In recent years, Google Translate – which anyone can use online – swapped out the original underlying solution for a much-improved one driven by deep learning. Go try it out – translate a letter to your friend or relative who has a different first language than you. I use it a lot myself.
On the other hand, general competence with natural languages like English is a hallmark of humanity – and only humanity. There's no known roadmap to fluency for our silicon sisters and brothers. When we humans understand one another, underneath all the words and somewhat logical grammatical rules is "general common sense and reasoning." You can't work with language without that very particular human skill. Which is a broad, unwieldy, amorphous thing we humans amazingly have.
So our hopes and dreams of talking computers are dashed because, unfortunately, there's no labeled data for "talking like a person." You can get the right data for a very restricted, specific task, like handling TV quiz show questions, or answering the limited range of questions people might expect Siri to be able to answer. But the general notion of "talking like a human" is not a well-defined problem. Computers can only solve problems that are precisely defined.
So we can't leverage machine learning to achieve the typical talkative computer we see in so many science fiction movies, like the Terminator, 2001's evil HAL computer, or the friendly, helpful ship computer in Star Trek. You can converse with those machines in English very much like you would with a human. It's easy. Ya just have to be a character in a science fiction movie.
Intelligence is subjective, so A.I. has no real definition
Now, if you think you don't already know enough about AI, you're wrong. There is nothing to know, because it isn't actually a thing. There's literally no meaningful definition whatsoever. AI poses as a field, but it's actually just a fanciful brand. As a supposed field, AI has many competing definitions, most of which just boil down to "smart computer." I must warn you, do not look up "self-referential" in the dictionary. You'll get stuck in an infinite loop.
Many definitions are even more circular than "smart computer," if that's possible. They just flat out use the word "intelligence" itself within the definition of AI, like "intelligence demonstrated by a machine."
If you've assumed there are more subtle shades of meaning at hand, surprise – there aren't. There's no way to resolve how utterly subjective the word "intelligence" is. For computers and engineering, "intelligence" is an arbitrary concept, irrelevant to any precise goal. All attempts to define AI fail to solve its vagueness.
Now, in practice the word is often just – confusingly – used as a synonym for machine learning. But as for AI as its own concept, most proposed definitions are variations of the following three:
1) AI is getting a computer to think like a human. Mimic human cognition. Now, we have very little insight into how our brains pull off what they pull off. Replicating a brain neuron-by-neuron is a science fiction "what if" pipe dream. And introspection – when you think about how you think – is interesting, big time, but ultimately tells us precious little about what's going on in there.
2) AI is getting a computer to act like a human. Mimic human behavior. Cause if it walks like a duck and talks like a duck... But it doesn't and it can't and we're way too sophisticated and complex to fully understand ourselves, let alone translate that understanding into computer code. Besides, fooling people into thinking a computer in a chatroom is actually a human – that's the famous Turing Test for machine intelligence – is an arbitrary accomplishment and it's a moving target as we humans continually become wiser to the trickery used to fool us.
3) AI is getting computers to solve hard problems. Get really good at tasks that seem to require "intelligence" or "human-level" capability, such as driving a car, recognizing human faces, or mastering chess. But now that computers can do them, these tasks don't seem so intelligent after all. Everything a computer does is just mechanical and well understood and in that way mundane. Once the computer can do it, it's no longer so impressive and it loses its charm. A computer scientist named Larry Tesler suggested we define intelligence as "whatever machines haven't done yet." Humorous! A moving-target definition that defines itself out of existence.
By the way, the points in this article also apply to the term "cognitive computing," which is another poorly-defined term coined to allege a relationship between technology and human cognition.
The logical fallacy of believing in A.I.'s inevitability
The thing is, "artificial intelligence" itself is a lie. Just evoking that buzzword automatically insinuates that technological advancement is making its way toward the ability to reason like people. To gain humanlike "common sense." That's a powerful brand. But it's an empty promise. Your common sense is more amazing – and unachievable – than your common sense can sense. You're amazing. Your ability to think abstractly and "understand" the world around you might feel simple in your moment-to-moment experience, but it's incredibly complex. That experience of simplicity is either a testament to how adept your uniquely human brain is or a great illusion that's intrinsic to the human condition – or probably both.
Now, some may respond to me, "Isn't inspired, visionary ambition a good thing? Imagination propels us and unknown horizons beckon us!" Arthur C. Clarke, the author of 2001, made a great point: "Any sufficiently advanced technology is indistinguishable from magic." I agree. However, that does not mean any and all "magic" we can imagine – or include in science fiction – could eventually be achieved by technology. Just 'cause it's in a movie doesn't mean it's gonna happen. AI evangelists often invoke Arthur's point – but they've got the logic reversed. My iPhone seems very "Star Trek" to me, but that's not an argument everything on Star Trek is gonna come true. The fact that creative fiction writers can make shows like Westworld is not at all evidence that stuff like that could happen.
Now, maybe I'm being a buzzkill, but actually I'm not. Let me put it this way. The uniqueness of humans and the real advancements of machine learning are each already more than amazing and exciting enough to keep us entertained. We don't need fairy tales – especially ones that mislead.
Sophia: A.I.'s most notoriously fraudulent publicity stunt
The star of this fairy tale, the leading role of "The Princess" is played by Sophia, a product of Hanson Robotics and AI's most notorious fraudulent publicity stunt. This robot has applied her artificial grace and charm to hoodwink the media. Jimmy Fallon and other interviewers have hosted her – it, I mean have hosted it. But when it "converses," it's all scripts and canned dialogue – misrepresented as spontaneous conversation – and in some contexts, rudimentary chatbot-level responsiveness.
Believe it or not, three fashion magazines have featured Sophia on their cover, and, even goofier and sillier, the country Saudi Arabia officially granted it citizenship. For real. The first robot citizen. I'm actually a little upset about this, 'cause my microwave and pet rock have also applied for citizenship but still no word.
Sophia is a modern-day Mechanical Turk – which was an 18th century hoax that fooled the likes of Napoleon Bonaparte and Benjamin Franklin into believing they'd just lost a game of chess to a machine. A mannequin would move the chess pieces and the victims wouldn't notice there was actually a small human chess expert hidden inside a cabinet below the chess board.
In a modern-day parallel, Amazon has an online service you use to hire workers to perform many small tasks that require human judgment, like choosing the nicest looking of several photographs. It's named Amazon Mechanical Turk, and its slogan is "Artificial Artificial Intelligence." Which reminds me of this great vegetarian restaurant with "mock mock duck" on the menu – I swear, it tastes exactly like mock duck. Hey, if it talks like a duck, and it tastes like a duck...
Yes indeed, the very best fake AI is humans. In 1965, when NASA was defending the idea of sending humans to space, they put it this way: "Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor." I dunno. I think there's some skill in it. ;-)
The myth of dangerous superintelligence
Anyway, as for Sophia, mass hysteria, right? Well, it gets worse: Claims that AI presents an existential threat to the human race. From the most seemingly credible sources, the most elite of tech celebrities, comes a doomsday vision of homicidal robots and killer computers. None other than Bill Gates, Elon Musk, and even the late, great Stephen Hawking have jumped on the "superintelligence singularity" bandwagon. They believe machines will achieve a degree of general competence that empowers the machines to improve their own general competence – so much so that this will then quickly escalate past human intelligence, and do so at the lightning speed of computers, a speed the computers themselves will continue to improve by virtue of their superintelligence, and before you know it you have a system or entity so powerful that the slightest misalignment of objectives could wipe out the human race. Like if we naively commanded it to manufacture as many rubber chickens as possible, it might invent an entire new high-speed industry that can make 40 trillion rubber chickens but that happens to result in the extinction of Homo sapiens as a side effect. Well, at least it would be easier to get tickets for Hamilton.
There are two problems with this theory. First, it's so compellingly dramatic that it's gonna ruin movies. If the best bad guy is always a robot instead of a human, what about Nurse Ratched and Norman Bates? I need my Hannibal Lecter! "The best bad guy," by the way, is an oxymoron. And so is "artificial intelligence." Just sayin'.
But it is true: Robopocalypse is definitely coming. Soon. I'm totally serious, I swear. Based on a novel by the same name, Michael Bay – of the "Transformers" movies – is currently directing it as we speak. Fasten your gosh darn seatbelts people, 'cause, if "Robopocalypse" isn't in 3D, you were born in the wrong parallel universe.
Oh yeah, and the second problem with the AI doomsday theory is that it's ludicrous. AI is so smart it's gonna kill everyone by accident? Really really stupid superintelligence? That sounds like a contradiction.
To be more precise, the real problem is that the theory presumes that technological advancements move us along a path toward humanlike "thinking" capabilities. But they don't. It's not headed in that direction. I'll come back to that point again in a minute – first, a bit more on how widely this apocalyptic theory has radiated.
A widespread belief in superintelligence
The Kool-Aid these high-tech royalty drink, the go-to book that sets the foundation, is the New York Times bestseller "Superintelligence," by Nick Bostrom, who's a professor of applied ethics at Oxford University. The book mongers the fear and fans the flames, if not igniting the fire in the first place for many people. It explores how we might "make an intelligence explosion survivable." The Guardian newspaper ran a headline, "Artificial intelligence: 'We're like children playing with a bomb'," and Newsweek: "Artificial Intelligence Is Coming, and It Could Wipe Us Out," both headlines obediently quoting Bostrom himself.
Bill Gates "highly recommends" the book, Elon Musk said AI is "vastly more risky than North Korea" – as Fortune Magazine repeated in a headline – and, quoting Stephen Hawking, the BBC ran a headline, "'AI could spell end of the human race'."
In a TED talk that's been viewed 5 million times (across platforms), the bestselling author and podcast intellectual Sam Harris states with supreme confidence, "At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves."
Both he and Bostrom show the audience an intelligence spectrum during their TED talks – here's the one by Bostrom:
What happens when our computers get smarter than we are? | Nick Bostrom
You can see as we move along the path from left to right we pass a mouse, a chimp, a village idiot, and then the very smart theoretical physicist Ed Witten. He's relatively close to the idiot, because even an idiot human is much smarter than a chimp, relatively speaking. You can see the arrow just above the spectrum showing that "AI" progresses in that same direction, along to the right. At the very rightmost position is Bostrom himself, which is either just an accident of photography, or proof that he himself is an AI robot.
Oops, that was the wrong clip – uh, that was Dr. Frankenstein, but, ya know, same scenario.
A falsely conceived "spectrum of intelligence"
Anyway, that falsely-conceived intelligence spectrum is the problem. I've read the book and many of the interviews and watched the talks and pretty much all the believers intrinsically build on an erroneous presumption that "smartness" or "intelligence" falls more or less along a single, one-dimensional spectrum. They presume that the more adept machines become at more and more challenging tasks, the higher they will rank on this scale, eventually surpassing humans.
But machine learning has us marching along a different path. We're moving fast, and we'll likely go very far, but we're going in a different direction, only tangentially related to human capabilities.
The trick is to take a moment to think about this difference. Our own personal experiences of being one of those smart creatures called a human is what catches us in a thought trap. Our very particular and very impressive capabilities are hidden from ourselves beneath a veil of a conscious experience that just kind of feels like "clarity." It feels simple, but under the surface, it's oh so complex. Replicating our "general common sense" is a fanciful notion that no technological advancements have ever moved us towards in any meaningful way.
Thinking abstractly often feels uncomplicated. We draw visuals in our mind, like a not-to-scale map of a city we're navigating, or a "space" of products that two large companies are competing to sell, with each company dominating in some areas but not in others... or, when thinking about AI, the mistaken vision that increasingly adept capabilities – both intellectual and computational – all fall along the same, somewhat narrow path.
Now, Bostrom rightly emphasizes that we should not anthropomorphize what intelligent machines may be like in the future. It's not human, so it's hard to speculate on the specifics and perhaps it will seem more like a space alien's intelligence. But what Bostrom and his followers aren't seeing is that, since they believe technology advances along a spectrum that includes and then transcends human cognition, the spectrum itself as they've conceived it is anthropomorphic. It has humanlike qualities built in. Now, your common sense reasoning may seem to you like a "natural stage" of any sort of intellectual development, but that's a very human-centric perspective. Your common sense is intricate and very, very particular. It's far beyond our grasp – for anyone – to formally define a "spectrum of intelligence" that includes human cognition on it. Our brains are spectacularly multi-faceted and adept, in a very arcane way.
Machines progress along a different spectrum
Machine learning actually does work by defining a kind of spectrum, but only for an extremely limited sort of trajectory – only for tasks that have labeled data, such as identifying objects in images. With labeled data, you can compare and rank various attempts to solve the problem. The computer uses the data to measure how well it does. Like, one neural network might correctly identify 90% of the trucks in the images and then a variation after some improvements might get 95%.
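Here's a minimal sketch of that ranking-by-labels idea, with two stand-in "models" (the second simply errs less often than the first) and made-up truck labels – the point is just that labeled data is what lets you score and compare the attempts at all:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in labels for 1,000 images: 1 means "contains a truck," 0 means it doesn't.
truth = rng.integers(0, 2, size=1000)

# Two stand-in "models": the second is a slightly improved variation of the first.
def model_a(labels):
    noise = rng.random(len(labels)) < 0.10     # wrong about 10% of the time
    return np.where(noise, 1 - labels, labels)

def model_b(labels):
    noise = rng.random(len(labels)) < 0.05     # wrong about 5% of the time
    return np.where(noise, 1 - labels, labels)

# The labeled data is what lets us rank the two attempts.
acc_a = np.mean(model_a(truth) == truth)
acc_b = np.mean(model_b(truth) == truth)
print(f"model A: {acc_a:.0%}, model B: {acc_b:.0%}")
print("keep:", "B" if acc_b > acc_a else "A")
```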
Getting better and better at a specific task like that obviously doesn't lead to general common sense reasoning capabilities. We're not on that trajectory, so the fears should be allayed. The machine isn't going to get to a human-like level where it then figures out how to propel itself into superintelligence. No, it's just gonna keep getting better at identifying objects, that's all.
Intelligence isn't a Platonic ideal that exists separately from humans, waiting to be discovered. It's not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That's a ghost story.
It might feel tempting to believe that increased complexity leads to intelligence. After all, computers are incredibly general-purpose – they can basically do any task, if only we can figure out how to program them to do that task. And we're getting them to do more and more complex things. But just because they could do anything doesn't mean they will spontaneously do everything we imagine they might.
No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain "general common sense reasoning." Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word "intelligence" might apply to computers.
Don't sell, buy, or regulate on A.I.
Machines will remain fundamentally under our control. Computer errors will kill – people will die from autonomous vehicles and medical automation – but not on a catastrophic level, unless by the intentional design of human cyber attackers. When a misstep does occur, we take the system offline and fix it.
Now, the aforementioned techno-celebrity believers are true intellectuals and are truly accomplished as entrepreneurs, engineers, and thought leaders in their respective fields. But they aren't machine learning experts. None of them are. When it comes to their AI pontificating, it would truly be better for everyone if they published their thoughts as blockbuster movie scripts rather than earnest futurism.
It's time for the term "AI" to be "terminated." Mean what you say and say what you mean. If you're talking about machine learning, call it machine learning. The buzzword "AI" is doing more harm than good. It may sometimes help with publicity, but to at least the same degree, it misleads. AI isn't a thing. It's vaporware. Don't sell it and don't buy it.
And most importantly, do not regulate on "AI"! Technology greatly needs regulation in certain arenas, for example, to address bias in algorithmic decision-making and the development of autonomous weapons – which often use machine learning – so clarity is absolutely critical in these discussions. Using the imprecise, misleading term "artificial intelligence" is gravely detrimental to the effectiveness and credibility of any initiative that regulates technology. Regulation is already hard enough without muddying the waters.
Want more of Dr. Data?
"You dream about these kinds of moments when you're a kid," said lead paleontologist David Schmidt.
- The triceratops skull was first discovered in 2019, but was excavated over the summer of 2020.
- It was discovered in the South Dakota Badlands, an area where the Triceratops roamed some 66 million years ago.
- Studying dinosaurs helps scientists better understand the evolution of all life on Earth.
David Schmidt, a geology professor at Westminster College, had just arrived in the South Dakota Badlands in summer 2019 with a group of students for a fossil dig when he received a call from the National Forest Service. A nearby rancher had discovered a strange object poking out of the ground. They wanted Schmidt to take a look.
"One of the very first bones that we saw in the rock was this long cylindrical bone," Schmidt told St. Louis Public Radio. "The first thing that came out of our mouths was, 'That kind of looks like the horn of a triceratops.'"
After authorities gave the go-ahead, Schmidt and a small group of students returned this summer and spent nearly every day of June and July excavating the skull.
"We had to be really careful," Schmidt told St. Louis Public Radio. "We couldn't disturb anything at all, because at that point, it was under law enforcement investigation. They were telling us, 'Don't even make footprints,' and I was thinking, 'How are we supposed to do that?'"
Another difficulty was the mammoth size of the skull: about 7 feet long and more than 3,000 pounds. (For context, the largest triceratops skull ever unearthed was about 8.2 feet long.) Schmidt's dinosaur was likely a Triceratops prorsus, one of two species of triceratops that roamed what's now North America about 66 million years ago.
The triceratops was an herbivore, but it was also a favorite meal of the Tyrannosaurus rex. That probably explains why the Dakotas contain many scattered triceratops bone fragments, and, less commonly, complete bones and skulls. In summer 2019, for example, a separate team on a dig in North Dakota made headlines after unearthing a complete triceratops skull that measured five feet in length.
Michael Kjelland, a biology professor who participated in that excavation, jokingly told the New York Times that digging up the dinosaur was like completing a "multi-piece, 3-D jigsaw puzzle" that required "engineering that rivaled SpaceX."
Image: Morrison Formation in Colorado (James St. John via Flickr)
The Badlands aren't the only spot in North America where paleontologists have found dinosaurs. In the 1870s, Colorado and Wyoming became the first sites of dinosaur discoveries in the U.S., ushering in an era of public fascination with the prehistoric creatures — and a competitive rush to unearth them.
Since then, dinosaur bones have been found in 35 states. One of the most fruitful locations for paleontologists has been the Morrison Formation, a sequence of Upper Jurassic sedimentary rock that stretches beneath the western part of the country. Species discovered there include Camarasaurus, Diplodocus, Apatosaurus, Stegosaurus, and Allosaurus, to name a few.
As for "Shady" (the nickname of the South Dakota triceratops), Schmidt and his team have safely transported it to the Westminster campus. They hope to raise funds for restoration, and to return to South Dakota in search of more bones that once belonged to the triceratops.
Studying dinosaurs helps scientists gain a more complete understanding of our evolution, illuminating a through-line that extends from "deep time" to the present day. For scientists like Schmidt, there's also the simple joy of coming face-to-face with a lost world.
"You dream about these kinds of moments when you're a kid," Schmidt told St. Louis Public Radio. "You don't ever think that these things will ever happen."
A socially minded franchise model makes money while improving society.
- A social enterprise in California makes its franchises affordable with low interest loans and guaranteed salaries.
- The loans are backed by charitable foundations.
- If scaled up, the model could support tens of thousands of entrepreneurs who are currently financially incapable of entering franchise agreements.
Video: The underdog challenging McDonald's & Wall Street | Hard Reset by Freethink (www.youtube.com)
Social responsibility is becoming a major focus of many businesses. While turning a profit is always the ultimate goal — nobody can eat good intentions, after all — having a positive impact on society is becoming an equally important goal.
A restaurant chain in California, already focused on providing healthy food at a competitive cost, is testing a new way to create more entrepreneurs. Specifically, it is working with charitable foundations to provide business opportunities to those who normally would not have access.
When a company wants to expand without paying all of the upfront costs itself or taking on the entire risk of operating in a new market, it can enter into a franchise agreement with an entrepreneur. In exchange for a share of the profits (as well as some fees and adherence to certain quality standards), the entrepreneur — now a franchisee — can open their own branch of a larger brand. The entrepreneur enjoys the benefits of owning a business, while the brand owner can cash in on intellectual property.
This model is wildly successful. There is a reason you can find fast food joints like McDonald's everywhere from Times Square to Prague (next to the Museum of Communism, no less). According to the International Franchise Association, there were more than 733,000 franchised business establishments in the United States in 2018, accounting for nearly 3 percent of GDP.
The franchise model — in which a local agent keeps some earnings while handing over a portion to a central authority — isn't new. Indeed, variations have been around since the Middle Ages, though it only took off after WWII. Franchising is now a recognized system in many countries and is used in all manner of industries, including restaurants, pet supply stores, automotive repair shops, hotels, and even senior care.
The Catch-22: you have to spend money to make money
The biggest problem with franchising is the high cost of becoming a franchisee.
While the costs vary, opening a restaurant as a franchisee can easily cost $500,000. A franchise car repair shop can require $250,000, and opening a hotel under a franchise's banner can set a person back millions. In some cases, the franchiser also will set a minimum net worth requirement or insist that the money that pays their fees not be borrowed. Even if a person can find a way around that, most new businesses do not turn a profit for quite some time after opening. These limitations essentially rule out all but the wealthy from becoming a franchisee.
As a result, there are some social enterprises that are looking to make franchising more accessible to the less affluent.
Everytable, the California chain described above, hoped to expand rapidly, so it looked to franchising. However, the idea of seeking out a bunch of rich people to support a business like theirs struck CEO Sam Polk as out of step with its vision. So, the company came up with a better idea.
Their Social Equity Franchise Program helps tenured Everytable employees open their own franchise locations through free training and assistance in securing low interest loans to finance the store. To help the entrepreneurs survive the difficult early years, participants in the program are assured an income of $40,000 in their first three years of operations. Repayments on the loans do not begin until after the business is turning a profit.
The capital for all these low interest loans comes from a number of foundations, such as the California Wellness Foundation (Cal Wellness). Foundations like these are required to give away a small portion of their endowments every year to causes aligned with their missions. However, most of the rest is simply invested in the stock market to ensure the endowment continues to exist.
People like Cal Wellness CEO Judy Belk have begun to invest that money elsewhere, such as in loans that provide the money needed to open an Everytable franchise. As she explained to Freethink:
"Cal Wellness and many other foundations are saying, 'I think we can do a little better with that [money]. Why not use that capital to invest in the communities that we're supposed to serve?'"
In the end, Everytable gets a new restaurant that expands the brand, foundations get returns on their investment, and the franchisee gets an opportunity that they likely never would have had without the program.
Expanding the Everytable model
If even a small share of the $2 trillion that U.S. foundations hold were invested in this sort of social cause, tens of thousands of loans could be given to less affluent people who are looking to start a business. While this model would likely lower returns to institutional investors like charities, they could enjoy more tangible results in the communities they exist to serve. According to a report published by the Federal Reserve Bank of Atlanta, local entrepreneurship increases income and employment and decreases poverty.
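As a back-of-the-envelope check on that "tens of thousands" figure – using an assumed 1 percent share and the $500,000 restaurant cost cited earlier, neither of which comes from the report:

```python
# Back-of-the-envelope check of the "tens of thousands of loans" claim.
# Assumptions (illustrative, not from the article): foundations redirect 1% of
# endowments, and each loan is roughly the $500,000 restaurant figure cited above.
total_endowments = 2_000_000_000_000   # $2 trillion held by U.S. foundations
share_invested = 0.01                  # assume 1% goes to franchise loans
loan_size = 500_000                    # assumed cost to open a franchise restaurant

num_loans = total_endowments * share_invested / loan_size
print(f"{num_loans:,.0f} loans")       # 40,000 -- i.e., tens of thousands
```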
At the individual level, this would help a lot of people who otherwise never would be able to seriously consider going into business for themselves. By a number of measures, business owners make more than wage workers and can also claim ownership of the assets that comprise the business. Beyond that, many small business owners enjoy the non-financial benefits of their position as well, including the independence and autonomy that often come with business ownership.
When working optimally, good business is good for society.
Fintech companies are using elements of video games to make personal finance more fun. But does it work, and what are the risks?
- Gamification is the process of incorporating elements of video games into a business, organization, or system, with the goal of boosting engagement or performance.
- Gamified personal finance apps aim to help people make better financial decisions, often by redirecting destructive financial behaviors (like playing the lottery) toward positive outcomes.
- Still, gamification has its risks, and scientists are still working to understand how gamification affects our financial behavior.
The human brain is a pretty lazy organ. Although it's capable of remarkable ingenuity, it's also responsible for nudging us into bad behavioral patterns, such as being impulsive or avoiding difficult but important decisions. These kinds of short-sighted behaviors can hurt our finances.
However, they don't hurt the video game industry. In 2020, video games generated more than $179 billion in revenue, making the industry more valuable than sports and movies combined. A 2021 report from Limelight Network found that gamers worldwide spend an average of 8 hours and 27 minutes per week playing video games.
Good at gaming, bad at saving
It's not necessarily bad that Americans spend millions of dollars and hours on video games. But consider another set of statistics: 25 percent of Americans have no retirement savings at all, while roughly half are either living "on the edge" or "paycheck to paycheck," according to a recent report on the Financial Resilience of Americans from the FINRA Education Foundation. Meanwhile, experts predict that Social Security funds could dry up by 2035.
So, why don't people save more? After all, the benefits of compound interest aren't exactly a secret: Investing a few hundred bucks every month would make most people millionaires by retirement if they start in their twenties. However, the recent FINRA report found that many Americans have alarmingly low levels of financial literacy, a topic that's not taught in most public schools.
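As a rough worked example of that claim – the $300 monthly contribution, 7 percent annual return, and 40-year horizon are illustrative assumptions, not figures from the FINRA report:

```python
# A rough check of the compound-interest claim: invest a few hundred dollars a
# month starting in your twenties. The 7% return and 40-year horizon are
# illustrative assumptions, not figures from the article.
monthly_contribution = 300
annual_return = 0.07
years = 40                             # e.g., from age 25 to 65

balance = 0.0
for month in range(years * 12):
    balance = balance * (1 + annual_return / 12) + monthly_contribution

print(f"after {years} years: ${balance:,.0f}")   # roughly $790,000 at these assumptions
```

Bump the contribution to $400 a month, or the return to 8 percent, and the same arithmetic crosses the million-dollar mark.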
Even for the financially literate, saving money is psychologically difficult
But what if we could infuse the instant gratification of video games into our long-term financial habits? In other words, what if finance looked less like an Excel spreadsheet and more like your favorite video game?
A growing number of finance applications are making that a reality. By using the same strategies video game designers have been optimizing for decades, gamifying personal finance could be one of the most efficient ways to help people save for the future while reaping instant psychological rewards. But it doesn't come without risks.
What is gamification?
In simple terms, gamification takes the motivating power of video games and applies it to other areas of life. The global research company Gartner offers a slightly more technical definition of gamification: "the use of game mechanics and experience design to digitally engage and motivate people to achieve their goals."
The odds are you have encountered gamification already. It's utilized by many popular apps, websites, and devices. For example, LinkedIn displays progress bars representing how much profile information you have filled out. The Apple Watch has a "Close Your Rings" feature that shows how many steps you need to walk to meet your daily goal.
Brands have used gamification to boost customer engagement for decades. For example, McDonald's launched its Monopoly game in 1987, which essentially attached lottery tickets to menu items, while M&M's gained consumer attention with Eye-Spy Pretzel, an online scavenger hunt game that went viral in 2010.
In addition to marketing, gamification is used in social media, fitness, education, crowdfunding, military recruitment, and employee training, just to name a few applications. The Chinese government has even gamified aspects of its Social Credit System, in which citizens perform or refrain from various activities to earn points that represent trustworthiness.
Finance is arguably one of the best-suited fields for gamification. One reason is that financial data can be easily measured and graphed. Perhaps more importantly, financial decisions occur in the background of almost everything we do in modern life, from deciding what we eat for lunch to where we are going to spend our lives.
Gamification doesn't just make boring stuff fun; it's also an effective way to change our behavior. Used properly, it can disrupt our habits.
The nature of habits
It's tempting to think that we make our way through life by thoughtfully considering the information before us and making sensible choices. That's not really the case. Research suggests that about 40 percent of our daily activities are performed out of habit, a term the American Journal of Psychology defines as a "more or less fixed way of thinking, willing, or feeling acquired through previous repetition of a mental experience."
In other words, we spend much of our lives on autopilot. From an evolutionary perspective, it makes sense that we rely on habits: our brains require a lot of energy, especially when we're faced with tough decisions and complex problems, like financial planning. It's relatively easy to rely on learned behavioral patterns that provide a quick, reliable solution. However, those patterns don't always serve our long-term interests.
Saving money is a good example. Imagine you have $500 with which to do whatever you want. You could invest it. Or you could go on a shopping spree. Unfortunately, the brain doesn't process these two options the same way; in fact, it actually processes the investing option as something like a pain stimulus.
Why gamification works
Saving is painful. But can't people simply choose to be more financially responsible? In short: Yes, but it takes a lot of effort. After all, when it comes to changing behavior, willpower is only part of the equation.
Some psychologists think willpower is a finite resource, or that it's like an emotion whose motivational power ebbs and flows based on what's happening around us. For example, you might establish a monthly budget and stick to it for a couple weeks. But then you get stressed. The next time you're out shopping, you might find it harder to resist making an impulsive purchase in your stressed-out state.
"A growing body of research shows that resisting repeated temptations takes a mental toll," the American Psychological Association writes. "Some experts liken willpower to a muscle that can get fatigued from overuse." In the terminology of psychology, this is called ego depletion.
Gamification offers a way to outsource your willpower. That's because games offer psychological rewards that can motivate us to perform certain actions that might otherwise have seemed too boring, taxing, or emotionally draining. What's more, gamifying parts of your life is less of a change of mind and more of a change of environment.
A 2017 study published in Computers in Human Behavior noted that "enriching the environment with game design elements, as gamification does by definition, directly modifies that environment, thereby potentially affecting motivational and psychological user experiences."
The study argued that games are most motivational when they address three key psychological needs: competence, autonomy, and social relatedness. It's easy to imagine how games can tap into these categories. For competence, games can feature badges and performance graphs. For autonomy, games can offer customizable avatars. And for social relatedness, games can feature compelling storylines and multiplayer gameplay.
Gamification and the brain
Games can motivate us by satisfying our psychological needs and giving us a sense of reward. From a neurological perspective, this occurs through the release of "feel-good" neurotransmitters, namely dopamine and oxytocin.
"Two core things have to happen in the brain to influence your decision-making," Paul Zak, a neuroscientist and professor of economic sciences at Claremont Graduate University, told Big Think. "The first is you have to attend to that information. That's driven by the brain's production of dopamine. The second thing, you've got to get my lazy brain to care about the outcomes. And that caring is driven by emotional resonance. And that's associated with the brain's production of oxytocin."
When released simultaneously, these neurotransmitters can put us into a state that Zak calls "neurologic immersion." In this state, our everyday habits have less control over our behavior, and we're better able to take deliberate action. It's an idea Zak and his colleagues developed over two decades of using brain-imaging technology to study the nature of extraordinary experiences.
As he wrote in an article published by the World Experience Organization, neurologic immersion can occur when experiences, including video games, are unexpected, emotionally charged, narrowing one's focus to the experience itself, easy to remember, and provoking actions.
"The components of the extraordinary come as a package, not in isolation from each other," Zak wrote. "It's the 'action' part that is key to finding immersion. Extraordinary experiences cause people to take an action, whether it's donating to charity, buying a product, posting on social media, or returning to enjoy an experience again."
Games can invoke these types of immersive experiences. But how exactly are financial organizations using gamification to help people "level up" their financial futures?
Gamifying personal finance
Banks and financial companies have been using gamification for years. What started with simple concepts, like PNC Bank's "Punch the Pig" savings feature, has evolved into a diverse field of games that are helping people stick to budgets, save money, and pay off debt.
What's surprising about the gamification of personal finance is that some of the most successful apps are redirecting destructive financial behaviors, like buying lottery tickets, toward positive outcomes. One example is an app called Long Game, which uses an approach called "lottery savings."
"People actually really love the lottery," Lindsay Holden, co-founder and CEO of Long Game, told Big Think. "The lottery today is a $70-billion-dollar industry in the U.S., and the people that are buying lotto tickets are the people that least should be buying lotto tickets. And so how can we redirect that spend into something that's helping them in their lives?"
Long Game's answer is to encourage users to make automatic or one-time investments into a prize-linked savings account. As users make investments, they earn coins that can be used to play games, some of which offer cash prizes. But unlike the real lottery, the prize money comes from banks that are partnered with Long Game, meaning users can't lose their principal investment.
Blast is a savings app aimed at traditional gamers. The platform lets users connect a savings account to their video game accounts. Users then set performance goals in the video games, such as killing a certain number of enemies. Accomplishing these goals triggers a pre-selected investment into the savings accounts. In addition to earning interest, users can also win prize money by accomplishing certain missions or placing high on public leaderboards.
"Gamers tell us they feel better with the time they spend gaming when they know they are micro-saving or micro-earning in the background," Blast co-founder and CEO Walter Cruttenden said in a statement.
Fortune City takes a different approach to gamified finance. The app encourages users to track their spending habits, which are represented by visually appealing graphs. As users log expenses, they're able to build buildings in their own virtual city. The expense categories match the types of buildings users can construct; for example, buying food lets users construct a restaurant. It's like "SimCity" meets certified public accountant.
The risks of gamification
Gamifying your finances might help you save money, but it doesn't come without risks. After all, receiving extrinsic rewards when we perform a behavior can affect our intrinsic motivation to repeat that behavior both positively and negatively. It's a phenomenon called the overjustification effect.
In addition, gamified finance apps can also be addictive and encourage risky financial behavior. Robinhood, for example, uses visually appealing performance metrics and lottery-like game elements to incentivize the trading of stocks and cryptocurrencies. But while investing in these assets might be a good financial decision for some people, Robinhood arguably encourages its users to be "players" in the difficult world of trading, not necessarily rational investors.
What's more, gamification doesn't seem to work for everyone.
"From social psychology and behavioural economics, we know that the most likely [result of] gamification [is that you] will motivate some people, will demotivate other people, and for a third group there'll be no effect at all," noted a 2017 study on gamification and mobile banking published in Internet Research.
But given that 14.1 million Americans are unbanked, and millions more struggle with financial literacy, it's reasonable to think that gamified finance apps could help many people work toward financial independence.
"One of the most interesting things we've found is that people want help when it comes to making difficult decisions," Zak told Big Think. "In my view, any app that helps you be a more effective saver is probably a good app. But I think we have to do a lot more work to really understand the underlying neuroscience of gamification. And so we need to continue to design games that teach you more about how to 'level up in life,' not just level up in the game."