Can the U.S. fix unemployment with 'Universal Basic Jobs'?
What would happen if the U.S. guaranteed every citizen a job with a living wage and benefits?
- A new book from Pavlina Tcherneva, chair of the economics department at New York's Bard College, makes the case for a "Job Guarantee" federal program.
- The program would grant jobs to every citizen who's willing and able to work.
- A 2019 poll found that a majority of Americans would support a federally funded jobs program.
Since COVID-19 began spreading across the U.S. earlier this year, more than 45 million Americans have filed for unemployment. The federal government has passed a $2.3 trillion economic stimulus package, and unemployment has hit Depression-era levels, with the Federal Reserve projecting that rates will hover around 9.3 percent by the end of 2020.
"This is the biggest economic shock, in the U.S. and the world, really, in living memory," Federal Reserve Chair Jerome Powell said at a news conference in June. "We went from the lowest level of unemployment in 50 years to the highest level in close to 90 years, and we did it in two months."
To economist Pavlina Tcherneva, the pandemic didn't just present the American economy with a unique set of problems; it exposed built-in flaws that have long prevented millions of Americans from securing decent jobs.
In her new book, "The Case for a Job Guarantee," Tcherneva offers an ambitious policy proposal that calls for the federal government to provide living-wage jobs and benefits to anyone willing and able to work.
"At bottom," Tcherneva writes in the book, "the Job Guarantee is a policy of care, one that fundamentally rejects the notion that people in economic distress, communities in disrepair, and an environment in peril are the unfortunate but unavoidable collateral damage of a market economy."
The idea of using federal funding to create jobs isn't new. It's found in the U.N. Declaration of Human Rights, Franklin D. Roosevelt's proposed Economic Bill of Rights, and was again debated during the Civil Rights Movement. It's also a key component of the Green New Deal, a suite of policy proposals that seeks to aggressively tackle climate change and economic inequality.
In Tcherneva's vision, the Job Guarantee would act as a sort of buffer. Here's a bit on how a Job Guarantee might work in the U.S.:
$15 minimum wage and benefits
Jobs granted through the program would offer at least $15 per hour, with the base wage indexed to inflation over time. The Job Guarantee would also provide workers with health insurance, paid leave, childcare, and possibly fewer hours than the current 40-hour standard work week.
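An inflation-indexed wage floor can be expressed as a simple scaling by cumulative price growth. The sketch below illustrates the idea; the CPI numbers are illustrative assumptions, not figures from the article or the proposal.

```python
# Sketch of how an inflation-indexed base wage might be computed.
# The CPI values here are illustrative assumptions, not from the article.
def indexed_wage(base_wage: float, cpi_base: float, cpi_now: float) -> float:
    """Scale the base wage by cumulative price growth since the base year."""
    return base_wage * (cpi_now / cpi_base)

# With 10% cumulative inflation, a $15.00 floor would rise to $16.50:
print(f"${indexed_wage(15.00, 100.0, 110.0):.2f}")
```

Any real program would pin the adjustment to an official index such as the CPI; the mechanics, however, reduce to this one ratio.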
Establishing standards like these, Tcherneva argues, would pressure private firms to treat and pay workers better, considering that now they'd have more employment options and wouldn't have to settle for poor working conditions.
Jobs would be funded federally, administered locally
Across the U.S., unemployment offices would be converted into employment offices. The unemployed would be able to enter these offices and "leave with a list of employment options, public-service opportunities you'll be able to access locally," Tcherneva told Vox.
What would those jobs look like? Tcherneva offered some examples: performing weatherization on a local hardware store, replacing lead pipes on a construction site, helping out at a homeless shelter, or working on local alternative-energy projects.
The federal government would remain mostly hands off, allowing state and local governments to decide which public projects to pursue, and how to allocate resources.
The program would be 'counter-cyclical'
In the current economic system, unemployment spreads like a virus: people lose their jobs, stop spending money, businesses are forced to shut down, and so on.
A Job Guarantee could act as a buffer that absorbs unemployed people before they fall to the bottom rungs of the economic ladder. And this could help to stabilize the economy during recessions, assuming these workers continued to spend money. As the economy improves, workers could move back to their previous jobs, or to other employment options.
How the U.S. might pay for a Job Guarantee
Tcherneva doesn't deny that a Job Guarantee would require massive public investment, but she notes that what's lacking isn't the money, but political will. What's more, she notes the high social costs of having a large swath of the American workforce remain, more or less, permanently unemployed.
"I came to the Jobs Guarantee from a macroeconomic perspective — the realization that we were using unemployed people as a kind of 'buffer stock' to control inflation," she told the Los Angeles Times. "Having unemployed people means that when the economy grows, those people would be there to take those jobs."
"But what if we could use employment as a buffer stock? That's obviously the superior option. I realized that you couldn't just argue about this as a macroeconomic policy, you have to bring in the human rights framework, the moral framework. You have to think about the kind of neglect, the health effects, the pain that unemployment inflicts on people who want to work."
According to projections from the Levy Institute, with which Tcherneva is affiliated, the program would cost about 1.5 percent of the U.S. GDP, boost real GDP by half a trillion dollars, and create 3 to 4 million jobs.
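As a rough check on the scale of that figure, the cost share can be converted to dollars. The nominal GDP value below is my assumption (roughly the 2019 U.S. figure); only the 1.5 percent share comes from the projection cited above.

```python
# Back-of-envelope check on the Levy Institute cost figure cited above.
# The GDP value is an assumption (~2019 U.S. nominal GDP), not from the article.
gdp_usd = 21.4e12      # assumed U.S. nominal GDP, in dollars
cost_share = 0.015     # program cost as a share of GDP, per the projection
annual_cost_billion = gdp_usd * cost_share / 1e9
print(f"Estimated annual cost: ~${annual_cost_billion:.0f} billion")
```

Under that assumption, 1.5 percent of GDP works out to roughly $320 billion per year, which gives a sense of the investment the proposal entails.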
The Job Guarantee proposal has no shortage of critics, and these points are just a brief overview of what the program seeks to establish. But more Americans support the idea than you might expect.
According to a 2019 poll from The Hill-HarrisX, more than 70 percent of Americans said they would "somewhat" or "strongly" support a federal program that created jobs for the unemployed.
"Deepfakes" and "cheap fakes" are becoming strikingly convincing — even ones generated on freely available apps.
- A writer named Magdalene Visaggio recently used FaceApp and Airbrush to generate convincing portraits of early U.S. presidents.
- "Deepfake" technology has improved drastically in recent years, and some countries are already experiencing how it can be weaponized for political purposes.
- It's currently unknown whether it'll be possible to develop technology that can quickly and accurately determine whether a given video is real or fake.
After U.S. President William Henry Harrison delivered his inaugural address on March 4, 1841, he posed for a daguerreotype, the first widely available photographic technology. It became the first photo taken of a sitting American president.
As for the eight presidents before Harrison, history can see them only through artistic renderings. (The exception is a handful of surviving daguerreotypes of John Quincy Adams, taken after he left office. In his diary, Adams described them as "hideous" and "too true to the original.")
But a recent project offers a glimpse of what early presidents might've looked like if photographed through modern cameras. Using FaceApp and Airbrush, Magdalene Visaggio, author of books such as "Eternity Girl" and "Kim & Kim," generated a collection of convincing portraits of the nation's first presidents, from George Washington to Ulysses S. Grant.
Tweet from Magdalene Visaggio: "Modern Presidents: George Washington" (https://t.co/CURJQB0kap)
What might be surprising is that Visaggio was able to generate the images without a background in graphic design, using freely available tools. She wrote on Twitter:
"A lot of people think I'm a digital artist or whatever, so let me clarify how I work. Everything you see here is done in Faceapp+Airbrush on my phone. On the outside, each takes between 15-30 mins. Washington was a pretty simple one-and-done replacement."
Tweet from Magdalene Visaggio: "Ulysses S. Grant" (https://t.co/L1IGXLI3Vl)
"Other than that? I am not a visual artist in any sense, just a hobbyist using AI tools to see what she can make. I'm actually a professional comics writer."
Tweet from Magdalene Visaggio: "Did another pass at Lincoln." (https://t.co/PdT4QVpMbn)
Of course, Visaggio isn't the first person to create deepfakes (or "cheap fakes") of politicians.
In 2017, many people got their first glimpse of the technology through a video depicting former President Barack Obama warning: "We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time." The video quickly reveals itself to be fake, with comedian Jordan Peele speaking for the computer-generated Obama.
While deepfakes haven't yet caused significant chaos in the U.S., incidents in other nations may offer clues of what's to come.
The future of deepfakes
In 2018, Gabon's president Ali Bongo had been out of the country for months receiving medical treatment. As his absence from public view dragged on, rumors began swirling about his condition; some suggested Bongo might even be dead. In response, Bongo's administration released a video that seemed to show the president addressing the nation.
But the video is strange, appearing choppy and blurry in parts. After political opponents declared the video to be a deepfake, Gabon's military attempted an unsuccessful coup. What's striking about the story is that, to this day, experts in the field of deepfakes can't conclusively verify whether the video was real.
The uncertainty and confusion generated by deepfakes poses a "global problem," according to a 2020 report from The Brookings Institution. In 2018, the U.S. Department of Defense released some of the first tools able to successfully detect deepfake videos. The problem, however, is that deepfake technology keeps improving, meaning forensic approaches may forever be one step behind the most sophisticated forms of deepfakes.
As the 2020 report noted, even if the private sector or governments create technology to identify deepfakes, they will:
"...operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. 'A lie can go halfway around the world before the truth can get its shoes on,' warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes."
The author of 'How We Read Now' explains.
During the pandemic, many college professors abandoned assignments from printed textbooks and turned instead to digital texts or multimedia coursework.
As a professor of linguistics, I have been studying how electronic communication compares to traditional print when it comes to learning. Is comprehension the same whether a person reads a text onscreen or on paper? And are listening and viewing content as effective as reading the written word when covering the same material?
The answers to both questions are often "no," as I discuss in my book "How We Read Now," released in March 2021. The reasons relate to a variety of factors, including diminished concentration, an entertainment mindset and a tendency to multitask while consuming digital content.
Print versus digital reading
The benefits of print particularly shine through when experimenters move from posing simple tasks – like identifying the main idea in a reading passage – to ones that require mental abstraction – such as drawing inferences from a text. Print reading also improves the likelihood of recalling details – like "What was the color of the actor's hair?" – and remembering where in a story events occurred – "Did the accident happen before or after the political coup?"
Studies show that both grade school students and college students assume they'll get higher scores on a comprehension test if they have done the reading digitally. And yet, they actually score higher when they have read the material in print before being tested.
Educators need to be aware that the method used for standardized testing can affect results. Studies of Norwegian tenth graders and U.S. third through eighth graders report higher scores when standardized tests were administered using paper. In the U.S. study, the negative effects of digital testing were strongest among students with low reading achievement scores, English language learners and special education students.
My own research and that of colleagues approached the question differently. Rather than having students read and take a test, we asked how they perceived their overall learning when they used print or digital reading materials. Both high school and college students overwhelmingly judged reading on paper as better for concentration, learning and remembering than reading digitally.
The discrepancies between print and digital results are partly related to paper's physical properties. With paper, there is a literal laying on of hands, along with the visual geography of distinct pages. People often link their memory of what they've read to how far into the book it was or where it was on the page.
But equally important is mental perspective, and what reading researchers call a "shallowing hypothesis." According to this theory, people approach digital texts with a mindset suited to casual social media, and devote less mental effort than when they are reading print.
Podcasts and online video
Given increased use of flipped classrooms – where students listen to or view lecture content before coming to class – along with more publicly available podcasts and online video content, many school assignments that previously entailed reading have been replaced with listening or viewing. These substitutions have accelerated during the pandemic and move to virtual learning.
Surveying U.S. and Norwegian university faculty in 2019, University of Stavanger Professor Anne Mangen and I found that 32% of U.S. faculty were now replacing texts with video materials, and 15% reported doing so with audio. The numbers were somewhat lower in Norway. But in both countries, 40% of respondents who had changed their course requirements over the past five to 10 years reported assigning less reading today.
A primary reason for the shift to audio and video is that students often refuse to do the assigned reading. While the problem is hardly new, a 2015 study of more than 18,000 college seniors found only 21% usually completed all their assigned course reading.
Maximizing mental focus
Researchers found similar results when university students read an article versus listened to a podcast of the same text. A related study confirms that students do more mind-wandering when listening to audio than when reading.
Results with younger students are similar, but with a twist. A study in Cyprus concluded that the relationship between listening and reading skills flips as children become more fluent readers. While second graders had better comprehension with listening, eighth graders showed better comprehension when reading.
Research on learning from video versus text echoes what we see with audio. For example, researchers in Spain found that fourth through sixth graders who read texts showed far more mental integration of the material than those watching videos. The authors suspect that students "read" the videos more superficially because they associate video with entertainment, not learning.
The collective research shows that digital media have common features and user practices that can constrain learning. These include diminished concentration, an entertainment mindset, a propensity to multitask, lack of a fixed physical reference point, reduced use of annotation and less frequent reviewing of what has been read, heard or viewed.
Digital texts, audio and video all have educational roles, especially when providing resources not available in print. However, for maximizing learning where mental focus and reflection are called for, educators – and parents – shouldn't assume all media are the same, even when they contain identical words.
Humans may have evolved to be tribalistic. Is that a bad thing?
- From politics to everyday life, humans have a tendency to form social groups that are defined in part by how they differ from other groups.
- Neuroendocrinologist Robert Sapolsky, author Dan Shapiro, and others explore the ways that tribalism functions in society, and discuss how—as social creatures—humans have evolved for bias.
- But bias is not inherently bad. The key to seeing things differently, according to Beau Lotto, is to "embody the fact" that everything is grounded in assumptions, to identify those assumptions, and then to question them.