Empirics and Psychology: Eight of the World’s Top Young Economists Discuss Where Their Field Is Going
The past few years have been tough on economics and economists. In a searing indictment written one year after the collapse of Lehman Brothers, Paul Krugman concluded that
the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess. Unfortunately, this romanticized and sanitized vision of the economy led most economists to ignore all the things that can go wrong. They turned a blind eye to the limitations of human rationality…to the problems of institutions that run amok; to the imperfections of markets…and to the dangers created when regulators don’t believe in regulation.
Last August, Graeme Maxton published a book arguing that “modern economics has failed us,” and this April, the New York Times hosted a roundtable “about how the teaching of economics should change in light of the financial crisis.”
This soul-searching has led to the establishment of organizations such as the Institute for New Economic Thinking and invigorated discussions about alternative metrics for gauging countries’ welfare (last July, in fact, the UN General Assembly adopted a resolution asserting that “the gross domestic product indicator by nature was not designed to and does not adequately reflect the happiness and well-being of people in a country”).
To get the pulse of a field in flux, I asked eight of the world’s top young economists to identify the biggest unanswered questions in economics and predict what breakthroughs will define it a decade or two hence.
Stanford University; 39
Why are developing countries poor? In terms of impact on mankind globally, this strikes me as probably the biggest and most important current economic question. I think the answer is complex and linked to a combination of factors around history, geography, luck, etc. I am personally working on management practices: people in developing countries are poor because wages are low, and wages are low because firms are very unproductive, and firms seem to be unproductive in large part because of bad management. An Indian worker makes in one week what an average U.S. worker makes in half a day. One big factor seems to be that factories in India are frankly very badly managed: equipment is not looked after, materials are wasted, theft is common because inventory is not monitored, defects keep occurring, etc. In a recent project with the World Bank, we found in randomized experiments that giving simple management advice to Indian factories increased productivity by 20%, and I suspect that a number like 200% would be possible in the longer run.
Developed countries’ biggest question now is probably: how do we restart growth? There are a lot of issues here around innovation, curbing entitlement spending, etc. The area I know best is the short-run side of this, controlling policy uncertainty. A big factor that politicians and the media are pushing heavily right now is that growth is getting crushed by how policy has induced uncertainty. Basically, firms and consumers in the U.S. and Europe are holding back from spending until they know what is going to happen with taxes, spending, and (to a lesser extent) regulations over the next year or so. In the U.S., we have the November 2012 election generating a massive cloud of policy uncertainty, and in Europe, a rolling wave of elections and collapsing governments.
I do not think that any one single breakthrough will happen. The progress is likely to be heavily empirical—simply because more and more data is becoming available, and it is easy to analyze with fast computers (so empirics is now advancing faster than theory)—and spread across many hundreds of topics. So economics has gone from Victorian science, where one genius in his shed could invent the steam engine over the weekend, to industrial science, where innovation comes in thousands of tiny steps made by dozens of research teams.
Harvard University; 32
Many economists are concerned with two broad questions: how can we increase the rate of economic growth and overall well-being, and how can we reduce the rate of poverty? Countless policies—taxation, education, healthcare, etc.—have been implemented in an effort to achieve those objectives. One of our biggest challenges is to distill each policy’s unique impact so that we can understand which ones actually work and which ones do not.
The traditional state of economics is captured by the joke about ten economists, each of whom has a different theory of how the world works, none of which is directly tested or verified. Looking ahead, I am most excited about the prospect of having clear, evidence-based answers on which policies have the most beneficial economic impacts. I am especially optimistic that the expansion of access to large administrative datasets, such as earnings data from social-security records or student-achievement data from school districts, will yield sharp, quasi-experimental evidence that allows us to test theories and estimate key parameters of economic models. While theory will play an important role in guiding this research, its assumptions and conclusions will increasingly be empirically founded.
Within this broad area, I plan to pursue research on two sets of projects over the next few years. The first will try to identify the determinants of intergenerational mobility, with an eye towards finding policies that increase equality of opportunity. Should we be focusing on increasing access to higher education? Changing the structure of elementary schooling? Revamping the tax code? A second set of projects will explore the implications of behavioral economics for policymaking. Although we have accumulated considerable evidence showing that people do not always behave rationally, we do not have as good a sense of how they actually do behave and what this means for policy. I hope to make progress on this front, focusing on how we can design cost-effective policies that encourage people to save adequately for their retirement—to give just one example.
Federal Reserve Bank of New York; 37
I think the recent world economic crisis has firmly put back on the map basic macroeconomics: that is, the study of traditional questions, such as how to use monetary and fiscal policy to eliminate unemployment and control inflation. It was actually becoming quite unfashionable within economics to study these types of questions, even though they remain unanswered to a large extent. People even graduated with PhDs in economics with little idea about what role, if any, the government plays in stabilizing business cycles, the role of regulations, and so on. Instead, it was becoming increasingly fashionable to tackle smaller but more manageable questions for which data is rich and answers clear.
My guess, therefore, is that if one looks back 20 years from now, one will notice that a shift occurred towards studying the basic, big-picture, policy-relevant questions of macroeconomics—e.g., optimal currency areas, bank runs, fads and herding in financial markets, and automatic stabilizers—that have the power to change the course of history. I think there have been two comparable events that shaped the field in this way. As a discipline, macroeconomics was born in response to the Great Depression, giving rise to Keynesianism; the rational-expectations revolution in macroeconomics was born in response to the great inflation of the 1970s.
Perhaps somewhat under the radar, the past two decades have witnessed the integration of the macroeconomics that came out of the 1970s and 1980s with basic Keynesian models developed in the wake of the Great Depression. I suspect that the current crisis will accelerate that development, with models integrating financial frictions that were clearly central to its emergence.
New York University; 40
The most central open question in economic theory, as I see it, is how to model realistic economic agents. Traditionally, economists have relied on the rational-actor model, but it is clear that it is just a rough caricature. It has been greatly enriched by behavioral economics in the past 30 years. Still, we are far from a unified, versatile, believable alternative to the rational-actor model. I am hopeful, though, that this might be overcome—in part because of progress in the sister disciplines (psychology and neuroscience) and basic modeling, and also because empirical anomalies are forcing the economic profession to be more open-minded. Contributions by computer scientists and physicists will help inject new perspectives into economics.
The largest concrete questions in economics are, arguably, how to increase growth—particularly in developing countries—and how to avoid economic disasters and financial crises.
Progress in understanding limited rationality will lead to progress on answering the concrete questions. Low levels of growth are in part due to misapplied cognitive heuristics that lead people to be timid, inert, and gullible. Regarding disasters, during the unfolding of the crisis, traditional macro-financial factors (bank runs, deleveraging, etc.) have arguably been more important than behavioral factors. However, behavioral elements seem to have been paramount in the buildup of the current crisis (in particular, the neglect of tail events by financial actors and by the architects of the euro), as perhaps they are in most crises. The modeling of agents with bounded rationality will help us build economic models (in particular, macroeconomic and financial models) and institutions that better take into account the limitations of human reason.
Harvard University; 40
All countries wish to pursue sustainable growth without large boom-bust episodes. How exactly one accomplishes this remains a challenge that has been made starker by the current crisis. In an increasingly globalized world, the search for answers will necessarily require a much deeper understanding of three areas that interest me. One, we need a better understanding of the interlinkages across countries in trade, finance, and macroeconomic policy. The crisis in the Euro area brings this to the forefront. The complex ties across the member countries via trade, via banks, and through a shared monetary policy are central factors behind the ongoing sovereign debt, banking, and growth crises in the region. While trade interactions are better understood, financial flows remain a challenge.
Two, understanding the global economy requires a greater appreciation of the differences across economies. In the past, research mainly focused on analyzing interactions across economies that were similar in terms of their stages of development and their economic institutions. The most interesting questions today, however, concern interactions between developed economies and fast-growing developing economies, and between countries with diverse economic institutions. Questions on so-called global imbalances, currency wars, and capital controls have to do with interactions across diverse countries.
Three, understanding asymmetries in the international monetary system—with the prominence of the dollar in trade and financial transactions—will be crucial to understanding the propagation of shocks across economies. In my research, I find that international prices, regardless of what currency they are set in, respond very little to exchange rates. Since the dollar is the predominant trade currency, this implies that exchange-rate movements have a much smaller impact on U.S. import price inflation than they do on inflation in other countries.
Addressing these areas will require breakthroughs in theory and empirical work, with more micro-level datasets on prices, trade, and capital flows being brought to bear.
George Mason University; 32
My candidate for the biggest unanswered question in economics is the status of the rationality postulate: the decision to analyze actors as utility maximizers with consistent preferences. If we view economics as an “engine” for understanding the world, the rationality postulate was that engine in nearly all of economics until quite recently. The rise of behavioral economics has challenged the usefulness and, in a more subtle but radical way, the legitimacy of the rationality engine. While only a minority of economists would describe themselves as “behavioralists,” behavioralism has affected many more by influencing the kinds of questions economists consider important to ask and influencing the kinds of answers to those questions they consider illuminating. These influences have the potential to profoundly affect the way economics is done, and thus what economics is able to offer our understanding of the world.
At the moment, most behavioralism aims merely to “fine tune” the rationality engine rather than replace it. But even such tuning can have, and I think has already had, a noticeable impact on how a growing number of economists and those following them interpret society. To the extent that economists’ view of, say, markets as reflecting rational vs. irrational systems—or, more specifically, their interpretation of economic crises as the product of markets responding rationally to poor policy vs. the product of endemic irrational decision-making—either directly or indirectly influences public policy, the way in which the status of the rationality postulate is resolved will not merely shape what economists are doing. It will shape the kind of society we inhabit.
University of Chicago; 27
In his famous 1945 article, “The Use of Knowledge in Society,” F. A. Hayek argued that despite their inequity and inefficiency, free markets were necessary in order to allow the incorporation of information held by dispersed individuals into social decisions. No central planner could hope to collect and process all the information necessary for social decisions; only markets allowed and provided the incentives for disaggregated information processing. Yet, increasingly, information technology is leading individuals to delegate their most “private” decisions to automated processing systems. Choices of movies, one of the last realms of taste one would have guessed could be delegated to centralized expertise, are increasingly shaped by services like Netflix’s recommender system. While these information systems are mostly nongovernmental, they are sufficiently centralized that it is increasingly hard to see how dispersed information poses the challenge it once did to centralized planning.
Information technology thus fundamentally challenges the standard foundations of the market economy. For many years to come, economists will increasingly have to struggle with this challenge. Some will harness the power of the data and computational power provided by information technology to provide increasingly precise and accurate prescriptions for economic planning. Others, who value the libertarian tradition that has often been associated with economics, will be forced to articulate other arguments, perhaps based on privacy, that are not susceptible to erosion by the increasing power of centralized computation.
University of Pennsylvania; 39
Economics is in the midst of a massive and radical change. It used to be that we had little data, and no computing power, so the role of economic theory was to “fill in” for where facts were missing. Today, every interaction we have in our lives leaves behind a trail of data. Whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere. This background informs how I think about the future of economics.
Specifically, the tools of economics will continue to evolve and become more empirical. Economic theory will become a tool we use to structure our investigation of the data. Equally, economics is not the only social science engaged in this race: our friends in political science and sociology use similar tools; computer scientists are grappling with “big data” and machine learning; and statisticians are developing new tools. Whichever field adapts best will win. I think it will be economics. And so economists will continue to broaden the substantive areas we study. Since Gary Becker, we have been comfortable looking beyond the purely pecuniary domain, and I expect this trend towards cross-disciplinary work to continue.
Some evidence attributes near death experiences to a distinct neurological phenomenon.
Time of death is declared when a person goes into cardiac arrest, the cessation of the electrical impulses that drive the heartbeat. As a result, the heart locks up. The moment the heart stops is considered the time of death. But does death overtake our mind immediately afterward, or does it slowly creep in?
Some scientists have studied near death experiences (NDEs) to try to gain insights into how death overcomes the brain. What they've found is remarkable: a surge of electricity enters the brain moments before brain death. One 2013 study out of the University of Michigan, which examined electrical signals inside the heads of rats, found that their brains entered a hyper-alert state just before death.
Scientists are beginning to think an NDE is caused by reduced blood flow coupled with abnormal electrical behavior inside the brain. So the stereotypical tunnel of white light might derive from a surge in neural activity. Dr. Sam Parnia is the director of critical care and resuscitation research at NYU Langone School of Medicine in New York City. He and his colleagues are investigating exactly how the brain dies.
Our cerebral cortex is likely active 2–20 seconds after cardiac arrest.
In previous work, he's conducted animal studies looking at the moments before and after death. He's also investigated near death experiences. "Many times, those who have had such experiences talk about floating around the room and being aware of the medical team working on their body," Dr. Parnia told Live Science. "They'll describe watching doctors and nurses working, and they'll describe having awareness of full conversations, of visual things that were going on, that would otherwise not be known to them."
Medical staff confirm this, he said. So how could those who were technically dead be cognizant of what's happening around them? Even after our breathing and heartbeat stop, we're conscious for about 2–20 seconds, Dr. Parnia says. That's how long the cerebral cortex is thought to last without oxygen. This is the thinking and decision-making part of the brain. It's also responsible for deciphering the information gathered from our senses.
According to Parnia, during this period, "You lose all your brain stem reflexes — your gag reflex, your pupil reflex, all that is gone." Brain waves from the cerebral cortex soon become undetectable. Even so, it can take hours for our thinking organ to fully shut down.
Usually, when the heart stops beating, someone performs CPR (cardiopulmonary resuscitation). This provides about 15% of the oxygen needed for normal brain function. "If you manage to restart the heart, which is what CPR attempts to do, you'll gradually start to get the brain functioning again," Parnia said. "The longer you're doing CPR, those brain cell death pathways are still happening — they're just happening at a slightly slower rate."
CPR may help retain some brain function for longer.
Dr. Parnia's latest, ongoing study looks at large numbers of Europeans and Americans who have experienced cardiac arrest and survived. "In the same way that a group of researchers might be studying the qualitative nature of the human experience of 'love,'" he said, "we're trying to understand the exact features that people experience when they go through death, because we understand that this is going to reflect the universal experience we're all going to have when we die."
One of the objectives is to observe how the brain acts and reacts during cardiac arrest, through the process of death, and during revival. How much oxygen exactly does it take to reboot the brain? How is the brain affected after revival? Learning where the lines are drawn might improve resuscitation techniques, which could save countless lives.
"At the same time, we also study the human mind and consciousness in the context of death," Parnia said, "to understand whether consciousness becomes annihilated or whether it continues after you've died for some period of time — and how that relates to what's happening inside the brain in real time."
The experience of life flashing before one's eyes has been reported for well over a century, but where's the science behind it?
At the age of 16, when Tony Kofi was an apprentice builder living in Nottingham, he fell from the third story of a building. Time seemed to slow down massively, and he saw a complex series of images flash before his eyes.
As he described it: "In my mind's eye I saw many, many things: children that I hadn't even had yet, friends that I had never seen but who are now my friends. The thing that really stuck in my mind was playing an instrument." Then Tony landed on his head and lost consciousness.
When he came to at the hospital, he felt like a different person and didn't want to return to his previous life. Over the following weeks, the images kept flashing back into his mind. He felt that he was "being shown something" and that the images represented his future.
Later, Tony saw a picture of a saxophone and recognized it as the instrument he'd seen himself playing. He used his compensation money from the accident to buy one. Now, Tony Kofi is one of the UK's most successful jazz musicians, having won the BBC Jazz awards twice, in 2005 and 2008.
Though Tony's belief that he saw into his future is unusual, it's by no means uncommon for people to report witnessing multiple scenes from their past during split-second emergency situations. After all, this is where the phrase "my life flashed before my eyes" comes from.
But what explains this phenomenon? Psychologists have proposed a number of explanations, but I'd argue the key to understanding Tony's experience lies in a different interpretation of time itself.
When life flashes before our eyes
The experience of life flashing before one's eyes has been reported for well over a century. In 1892, a Swiss geologist named Albert Heim fell from a precipice while mountain climbing. In his account of the fall, he wrote that it was "as if on a distant stage, my whole past life [was] playing itself out in numerous scenes".
More recently, in July 2005, a young woman called Gill Hicks was sitting near one of the bombs that exploded on the London Underground. In the minutes after the attack, she hovered on the brink of death where, as she describes it: "my life was flashing before my eyes, flickering through every scene, every happy and sad moment, everything I have ever done, said, experienced".
In some cases, people don't see a review of their whole lives, but a series of past experiences and events that have special significance to them.
Explaining life reviews
Perhaps surprisingly, given how common it is, the "life review experience" has been studied very little. A handful of theories have been put forward, but they're understandably tentative and rather vague.
For example, a group of Israeli researchers suggested in 2017 that our life events may exist as a continuum in our minds, and may come to the forefront in extreme conditions of psychological and physiological stress.
Another theory is that, when we're close to death, our memories suddenly "unload" themselves, like the contents of a skip being dumped. This could be related to "cortical disinhibition" – a breaking down of the normal regulatory processes of the brain – in highly stressful or dangerous situations, causing a "cascade" of mental impressions.
But the life review is usually reported as a serene and ordered experience, completely unlike the kind of chaotic cascade of experiences associated with cortical disinhibition. And none of these theories explain how it's possible for such a vast amount of information – in many cases, all the events of a person's life – to manifest themselves in a period of a few seconds, and often far less.
Thinking in 'spatial' time
An alternative explanation is to think of time in a "spatial" sense. Our commonsense view of time is as an arrow that moves from the past through the present towards the future, in which we only have direct access to the present. But modern physics has cast doubt on this simple linear view of time.
Indeed, since Einstein's theory of relativity, some physicists have adopted a "spatial" view of time. They argue we live in a static "block universe" in which time is spread out in a kind of panorama where the past, the present and the future co-exist simultaneously.
The modern physicist Carlo Rovelli – author of the best-selling The Order of Time – also holds the view that linear time doesn't exist as a universal fact. This idea reflects the view of the philosopher Immanuel Kant, who argued that time is not an objectively real phenomenon, but a construct of the human mind.
This could explain why some people are able to review the events of their whole lives in an instant. A good deal of previous research – including my own – has suggested that our normal perception of time is simply a product of our normal state of consciousness.
In many altered states of consciousness, time slows down so dramatically that seconds seem to stretch out into minutes. This is a common feature of emergency situations, as well as states of deep meditation, experiences on psychedelic drugs and when athletes are "in the zone".
The limits of understanding
But what about Tony Kofi's apparent visions of his future? Did he really glimpse scenes from his future life? Did he see himself playing the saxophone because somehow his future as a musician was already established?
There are obviously some mundane interpretations of Tony's experience. Perhaps, for instance, he became a saxophone player simply because he saw himself playing it in his vision. But I don't think it's impossible that Tony did glimpse future events.
If time really does exist in a spatial sense – and if it's true that time is a construct of the human mind – then perhaps in some way future events may already be present, just as past events are still present.
Admittedly, this is very difficult to make sense of. But why should everything make sense to us? As I have suggested in a recent book, there must be some aspects of reality that are beyond our comprehension. After all, we're just animals, with a limited awareness of reality. And perhaps more than any other phenomenon, this is especially true of time.
Might as well face it, you're addicted to love.
- Many writers have commented on the addictive qualities of love. Science agrees.
- The reward system of the brain reacts similarly to both love and drugs.
- Someday, it might be possible to treat "love addiction."
Since people started writing, they've written about love. The oldest known love poem dates back to the 21st century BCE. For most of that time, writers have apparently been of two (or more) minds about it, announcing that love can be painful, impossible to quit, or even addictive — while also mentioning how nice it is.
The idea of love as an addiction is one that is both familiar and unsettling. Surely the love we share with a partner — a thing that can produce euphoria, consumes a great deal of our time, and which we fear losing — can't be compared to a drug habit? But indeed, many scientists have turned their attention to the idea of "love addiction" and how your brain on drugs might resemble your brain in love.
Love and other drugs
In a 2017 article published in the journal Philosophy, Psychiatry, & Psychology, a team of neuroethicists considered the idea that love is addicting and held the idea up to science for scrutiny.
They point out that the leading model of addiction rests on the notion of a drug causing the brain to release an unnatural level of reward chemicals, such as dopamine, effectively hijacking the brain's reward system. This phenomenon isn't strictly limited to drugs, though drugs are more effective at triggering it than other things. Rats can get a similar rush from sugar as from cocaine, and they can have terrible withdrawal symptoms when the sugar crash kicks in.
On the structural level, there is a fair amount of overlap between the parts of the brain that handle love and pair-bonding and the parts that deal with addiction and reward processing. When inside an MRI machine and asked to think about the person they love romantically, the reward centers of people's brains light up like Broadway.
Love as an addiction
These facts lead the authors to consider two ideas, dubbed the "narrow" and "broad" views of love as an addiction.
The narrow view holds that addiction is the result of abnormal brain processes that simply don't exist in non-addicts. Under this paradigm, "food-seeking or love-seeking behaviors are not truly the result of addiction, no matter how addiction-like they may outwardly appear." It could be that abnormal processes cause the brain's reward system to misfire when exposed to love and to react to it excessively.
If this model is accurate, love addiction would be a rare thing — one study puts it around five to ten percent of the population — but could be considered a disorder similar to others and caused by faulty wiring in the brain. As with other addictions, this malfunction of the reward system could lead to an inability to fully live a typical life, difficulty having healthy relationships, and a number of other negative consequences.
The broad view looks at addiction differently, perhaps even radically.
It begins with the idea that addiction exists on a spectrum of motivations. All of our appetites, including those for food and water, exist on this spectrum and activate similar parts of the brain when satisfied. We can have appetites for anything that taps into our reward system, including food, gambling, sex, drugs, and love. For most people most of the time, our appetites are fairly temperate, if recurring. I might be slightly "addicted" to food — I do need some a few times per day — but that "addiction" doesn't have any negative effects on my health.
An appetite for cocaine, however, is rarely temperate and usually dangerous. Likewise, a person's appetite for love could reach addiction levels, and a person could be considered "hooked" on relationships (or on a particular person). This would put love addiction at the extreme end of the spectrum.
None of this is to say that the authors think that love is bad for you just because it can resemble an addiction. Love addiction is not the same as cocaine addiction at the neurological level: important differences, like how long it takes for the desire for another "hit" to occur, do exist. Rather, the authors see this as an opportunity to reconsider our approach to addiction in general and to think about how we can help the heartsick when they just can't seem to get over their last relationship.
Is "love addiction" a treatable disorder?
Hypothetically, a neurological basis for an addiction to love could point toward interventions that "correct" for it. If the narrow view of addiction is accurate, perhaps some people will be able to seek treatment for love addiction in the same way that others seek help to quit smoking. If the broad view of addiction is correct, treating love addiction would be less straightforward, as it may be difficult to identify where on the spectrum the cutoff of acceptability should fall.
Either way, since love is generally held in high regard by all cultures and doesn't quite seem to be in the same category as a bad cocaine habit in terms of social undesirability, the authors doubt we'll be treating anyone for "love addiction" anytime soon.
A brief passage from a recent UN report describes what could be the first known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.