Don’t Blame Lehman
Taylor’s academic fields of expertise are macroeconomics, monetary economics, and international economics. He is known for his research on the foundations of modern monetary theory and policy, which has been applied by central banks and financial market analysts around the world. He has an active interest in public policy. Taylor is currently a member of the California Governor’s Council of Economic Advisors, where he also previously served from 1996 to 1998. In the past, he served as senior economist on the President’s Council of Economic Advisers from 1976 to 1977 and as a member of the President’s Council of Economic Advisers from 1989 to 1991. He was also a member of the Congressional Budget Office’s Panel of Economic Advisers from 1995 to 2001. From 2001 to 2005, Taylor served as Under Secretary of the Treasury for International Affairs, where he was responsible for U.S. policies in international finance, including currency markets, trade in financial services, foreign investment, international debt and development, and oversight of the International Monetary Fund and the World Bank. He was also responsible for coordinating financial policy with the G-7 countries, was chair of the working party on international macroeconomics at the OECD, and was a member of the board of the Overseas Private Investment Corporation. His book Global Financial Warriors: The Untold Story of International Finance in the Post-9/11 World chronicles his years as head of the international division at Treasury.
His recent book Getting Off Track: How Government Actions and Interventions Caused, Prolonged, and Worsened the Financial Crisis was one of the first on the financial crisis. He has since followed up with two books on preventing future crises, co-editing The Road Ahead for the Fed and Ending Government Bailouts As We Know Them, in which leading experts examine and debate proposals for financial reform and exit strategies.
Taylor was awarded the Alexander Hamilton Award for his overall leadership in
international finance at the U.S. Treasury. He was also awarded the Treasury
Distinguished Service Award for designing and implementing the currency reforms in Iraq, and the Medal of the Republic of Uruguay for his work in resolving the 2002 financial crisis. In 2005, he was awarded the George P. Shultz Distinguished Public Service Award. Taylor has also won many teaching awards; he was awarded the Hoagland Prize for excellence in undergraduate teaching and the Rhodes Prize for his high teaching ratings in Stanford’s introductory economics course. He also received a Guggenheim Fellowship for his research, and he is a fellow of the American Academy of Arts and Sciences and the Econometric Society; he formerly served as vice president of the American Economic Association.
Before joining the Stanford faculty in 1984, Taylor held positions of professor of
economics at Princeton University and Columbia University. Taylor received a B.A. in economics summa cum laude from Princeton University in 1968 and a Ph.D. in economics from Stanford University in 1973.
Question: In what ways could policy measures have been more predictable in this crisis?
John Taylor: There are several ways policy could have been more predictable in this crisis. First of all, by sticking to the predictable policy that worked well in the '80s and '90s with respect to the setting of interest rates. I think we have a lot of evidence that it was unpredictable, a surprise if you like, to take rates so low for so long. Even though there was an effort to explain it, it was hard for people to understand how long that was going to last. Was it a new policy of low interest rates, or not? So that's an element of unpredictability that I think has caused damage.
I think a second thing I would stress is the response to the problems in the financial institutions. First of all, it was misdiagnosed as a liquidity problem, so more liquidity was pumped into the economy and new facilities were set up. That added unpredictability, rather than addressing the evidence that there was a problem in the banks due to the mortgages and the other toxic assets.
And then I'd say the third element of unpredictability, and probably the most important for the severe panic, goes back to, for example, the Bear Stearns intervention. Let's just take as a given, although it is debatable, that that was the right thing to do in the heat of the moment; it is very difficult under pressure. I have been in policymaking jobs. I know what that's like. So, the decision was to intervene and bail out Bear Stearns' creditors.
It seems to me that that was the moment to articulate, as clearly as possible, what the policy would be in case another institution had problems. But there was very little description of it, and in fact, policies tended to vary from case to case. You had IndyMac, you had WaMu, and you had, of course, Lehman Brothers, AIG, Fannie and Freddie. In each case there was a difference, and each one seemed to be approached independently, if you like, ad hoc, without an overall strategy.
That's probably the biggest degree of unpredictability. Equity holders, preferred equity holders, debt holders, bank holding company versus bank, various kinds of senior creditors, counterparties: there was a huge amount of uncertainty about how they were going to be treated. And of course, the ultimate policy, the so-called TARP, was put forth in a way that was quite confusing. The Treasury Secretary had just 2½ pages, testified along with the Chairman of the Fed before the Banking Committee, and couldn't answer, in my view, the questions very well. There was a huge, irate response from Congress at that point, and it really then became very clear, I think, that there wasn't really a policy. They had not been thinking about it, at least as evidenced by the testimony and the other actions. And so, that was probably the most unpredictable, most damaging part of this whole episode.
Question: Did the financial panic come about because of the Lehman Brothers’ bankruptcy, or was it the government response?
John Taylor: This is a question about the timing of the panic and whether it was associated with the Lehman Brothers bankruptcy or with the responses of the government more generally. Basically, as soon as that occurred, I looked at the data, and I saw much more evidence that the panic in the markets was associated with the government's responses a week or ten days later. That's when the S&P 500 fell by nearly 30 percent, ten times greater than what happened at the time of Lehman. The S&P 500 was higher the Friday after the Lehman bankruptcy than the Friday before.
And there's lots of other data to look at to show you that, and there has been much more investigation. For example, not one derivative counterparty to Lehman filed for bankruptcy after the Lehman case. The major creditors did not fail. So it's hard to find direct knock-on effects from that in the data. The more I look at it, the more the other explanation makes sense to me.
Now you need to think about what the counterfactual would have been. And I guess I'd go back to something I've stressed before: if there had been a clear policy put forth whereby people could have expected at least the possibility of Lehman being put through bankruptcy, then there could have been some preparation. There wasn't any. The day of the announcement in September 2008, people were just beginning to think about how it works. It was really a surprise; there had been no preparation. And so, it was a jolt. To be sure, it was a jolt. But it seems to me that the direct connections people talk about, with expected cascading and domino effects, were not there, and the panic really occurs later. There's more evidence for this. John Cochrane and Louie ***** in Chicago have put more out. I think the former FDIC Chair, Bill Isaac, has studied it and has come to the same kind of conclusion.
When I first put this idea forth over a year ago, in a speech at the Bank of Canada in November 2008, it was viewed as controversial. I was just looking at the data then, but the more you look at what happened, talk to people internally, and study the aftermath, it seems to me there is at least as much -- let me put it evenly -- at least as much evidence that the panic was caused by the response as distinct from that particular event at Lehman Brothers. I think the evidence is actually becoming overwhelming, but this is still controversial, and we will be looking at it more and more.
Looking at the data carefully, at spreads in the markets, at what's happening in the equity markets, and also at what happened in the aftermath of the bankruptcy of Lehman Brothers, those are the kinds of nitty-gritty details, if you like, that we need to be studying to understand what happened. We're doing that work here at Stanford at the Hoover Institution, with a lot of focus on it; we have a whole working group that's delving into it. We've talked to former policymakers and also people in the private sector to try to understand this and match it up with the data. And I think that's the most important thing to do now: come to some conclusion about what actually happened so that we can then take the appropriate remedies and reforms to make sure it doesn't happen again.
Question: Why have you disapproved of the stimulus of the large counter cyclical Keynesian policies?
John Taylor: Why have I disapproved of the stimulus of the large countercyclical Keynesian policies? Because I don't think we have ever had much evidence that they work. So, let me try to be specific. The first stimulus was in early 2008, and that was largely in the form of one-time rebate checks sent to individuals. A lot of money went out, and basic economic theory would tell you that that would not jump-start consumption. People would save most of that.
So, unlike what was advertised, it shouldn't stimulate the economy, and in fact, if you go back and look at it carefully, look at the numbers as I have done, you don't see an impact. You see that big rebate just went into people's pockets and didn't jump-start consumption. In fact, consumption was going the other way at that point.
And then you had this stimulus of 2009 to the extent that that was based on sending checks to people. You see exactly the same thing happening. One-time payments, temporary tax cuts don't stimulate spending or consumption. And again, we knew that for years and years. That's why policy in most of the '80's and '90's didn't take those kinds of approaches.
And so that brings you to the question about government purchases: can that be more effective? Well, yes, in principle, it can be if it's timed right, if it's done in time. But in this stimulus package, the government purchases part is spread out over a long period of time; most of those purchases haven't even come on line yet. So, I don't see that that's stimulating the economy. The rebound in the economy has been due to things like inventory changes and the recognition on the investment side that the panic is over, and so you can't find the impacts of the stimulus.
And it's unfortunate, because that means we've got a huge increase in our debt, and that's ultimately harmful. We didn't get much out of it in terms of stimulus. So, I'm convinced -- what I like to do is not just look at the models. I looked at a new Keynesian model, I looked at a model I built many years ago, and I looked at other people's models. They're good for assessing impacts, but ultimately what we need to do, now that we've had these packages, is look at what happened. You go away from the models and look at what happened: did consumption get jump-started? I say no. Was the recovery due to the stimulus? It seems not, because it was in the form of investment.
So, I think, ultimately, we can learn from this experience. I don't know exactly why we moved away from the policies that were working, which really reflected an aversion to these large countercyclical policies. That shift began in early 2008 and has continued, and it may continue further. Let's start learning from what happened and not be ideological about this. Let's not consider which school of thought it came from, but look at the facts. When I look at the facts, I don't see much impact of these policies.
Question: How responsible is Wall Street for the financial crisis?
John Taylor: I'd say the responsibility of Wall Street in this crisis, and it still hasn't been resolved, is the dependence of the financial institutions on the government for bailouts and interventions. In other words, the institutions had become large enough and complicated enough that, at the least, it led people to say we needed to have these bailouts, these interventions. Each time, people said there would be systemic risk; it was like crying fire in a theater, and it led to these responses, which I say were quite chaotic and not systematic. They really led to problems.
So, I think what has to happen in the future is, Wall Street, the large firms, have to find a way to say, "We are not going to depend on the government for bailouts." Just like a startup firm out here in Silicon Valley, if it fails it's not going to be bailed out by the government. The same should be true for the large financial institutions. They should find a way to do that. It seems to me that in some sense they are responsible for doing that. They want to be responsible citizens.
But I think, in the meantime, government needs to find a way to get out of this bailout business. There's an example I would point to where there has been a change. In the 1990s, the IMF did a lot of bailouts. The United States participated in some of those, in the case of Mexico. But it got quite chaotic in the case of the Southeast Asia crisis. They were sometimes in and sometimes not in. In the case of Russia, they were in for a while and then they pulled out, and of course, we had a lot of contagion globally. But it wasn't until maybe 2002, 2003 that the policy became more predictable and more understandable. The IMF laid out some procedures called the Exceptional Access Framework. It clarified what their operations would be. So, the expectation of a bailout of a country, in this case of sovereign debt, was reduced substantially. And I think you saw a huge improvement in the emerging markets. Some of them made an extra effort to build up their reserves, some of them reduced their deficits; they made the adjustments, and they've been stronger as a result. And that shows you what can happen when policy becomes more predictable and there are fewer expected bailouts in the system.
Recorded on December 21, 2009
The market’s panic wasn’t due to the institution’s bankruptcy; it was more about the government’s response a week later.
If machines develop consciousness, or if we manage to give it to them, the human-robot dynamic will forever be different.
- Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
- Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of technological development to avoid humanity creating a slave class of conscious beings.
- One potential safeguard against that? Regulation. Once we define the context in which AI requires rights, the simplest solution may be to not build that thing.
Duke University researchers might have solved a half-century-old problem.
- Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
- The blend of three polymers provides enough flexibility and durability to mimic the knee.
- The next step is to test this hydrogel in sheep; human use is at least three years away.
Duke researchers have developed the first gel-based synthetic cartilage with the strength of the real thing. A quarter-sized disc of the material can withstand the weight of a 100-pound kettlebell without tearing or losing its shape.
Photo: Feichen Yang
That's the word from a team in the Department of Chemistry and the Department of Mechanical Engineering and Materials Science at Duke University. Their new paper (https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202003451), published in the journal Advanced Functional Materials, details this exciting advance for this frustrating joint.
Researchers have sought materials strong and versatile enough to repair a knee since at least the seventies. This new hydrogel, composed of three polymers, might be it. When two of the polymers are stretched, a third keeps the entire structure intact. When pulled 100,000 times, the cartilage held up as well as materials used in bone implants. The team also rubbed the hydrogel against natural cartilage a million times and found it to be as wear-resistant as the real thing.
The hydrogel has the appearance of Jell-O and is 60 percent water. Co-author Feichen Yang says this network of polymers is particularly durable: "Only this combination of all three components is both flexible and stiff and therefore strong."
As with any new material, a lot of testing must be conducted. The researchers don't foresee this hydrogel being implanted into human bodies for at least three years. The next step is to test it out in sheep.
Still, this is an exciting step forward in the rehabilitation of one of our trickiest joints. Given the potential reward, the wait is worth it.
What would it be like to experience the 4th dimension?
Physicists have understood, at least theoretically, that there may be higher dimensions beyond our normal three. The first clue came in 1905, when Einstein developed his theory of special relativity. Of course, by dimensions we're talking about length, width, and height. Generally speaking, when we talk about a fourth dimension, it's considered space-time. But here, physicists mean a spatial dimension beyond the normal three, not a parallel universe, as such dimensions are often mischaracterized in popular sci-fi shows.
An algorithm may allow doctors to assess PTSD candidates for early intervention after traumatic ER visits.
- 10-15% of people visiting emergency rooms eventually develop symptoms of long-lasting PTSD.
- Early treatment is available but there's been no way to tell who needs it.
- Using clinical data already being collected, machine learning can identify who's at risk.
The psychological scars a traumatic experience can leave behind may have a more profound effect on a person than the original traumatic experience. Long after an acute emergency is resolved, victims of post-traumatic stress disorder (PTSD) continue to suffer its consequences.
In the U.S. some 30 million patients are annually treated in emergency departments (EDs) for a range of traumatic injuries. Add to that urgent admissions to the ED with the onset of COVID-19 symptoms. Health experts predict that some 10 percent to 15 percent of these people will develop long-lasting PTSD within a year of the initial incident. While there are interventions that can help individuals avoid PTSD, there's been no reliable way to identify those most likely to need it.
That may now have changed. A multi-disciplinary team of researchers has developed a method for predicting who is most likely to develop PTSD after a traumatic emergency-room experience. Their study is published in the journal Nature Medicine.
70 data points and machine learning
Image source: Creators Collective/Unsplash
Study lead author Katharina Schultebraucks of Columbia University's Vagelos College of Physicians and Surgeons says:
"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment. The earlier we can treat those at risk, the better the likely outcomes."
The new PTSD test uses machine learning and 70 clinical data points plus a clinical stress-level assessment to develop a PTSD score for an individual that identifies their risk of acquiring the condition.
Among the 70 data points are stress hormone levels, inflammatory signals, high blood pressure, and an anxiety-level assessment. Says Schultebraucks, "We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response. The idea was to create a tool that would be universally available and would add little burden to ED personnel."
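As a rough illustration of this kind of scoring: the study's actual model, features, and weights are not described in this article, so every feature name and number below is invented. A machine-learning risk score of this general type maps clinical measurements to a probability and flags patients above a threshold:

```python
import math

# All feature names, weights, and the threshold below are invented for this
# sketch; they are NOT from the Nature Medicine study.
WEIGHTS = {
    "stress_hormone_level": 0.8,
    "inflammatory_marker": 0.5,
    "systolic_bp": 0.02,
    "anxiety_score": 0.6,
}
BIAS = -6.0       # shifts the baseline risk downward
THRESHOLD = 0.5   # patients scoring at or above this are flagged as at risk

def risk_score(patient):
    """Map a dict of clinical measurements to a logistic risk score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(patient):
    """Binary 'at risk' / 'not at risk' decision from the continuous score."""
    return risk_score(patient) >= THRESHOLD

# Two hypothetical patients: one with elevated measurements, one near baseline.
high = {"stress_hormone_level": 5.0, "inflammatory_marker": 3.0,
        "systolic_bp": 150, "anxiety_score": 4.0}
low = {"stress_hormone_level": 1.0, "inflammatory_marker": 0.5,
       "systolic_bp": 110, "anxiety_score": 1.0}

print(flag_at_risk(high), flag_at_risk(low))  # True False
```

The appeal of this design, as Schultebraucks notes, is that the inputs are values the ED already records, so the score can be computed automatically from the electronic medical record.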
Researchers used data from adult trauma survivors in Atlanta, Georgia (377 individuals) and New York City (221 individuals) to test their system.
Of this cohort, 90 percent of those predicted to be at high risk developed long-lasting PTSD symptoms within a year of the initial traumatic event — just 5 percent of people who never developed PTSD symptoms had been erroneously identified as being at risk.
On the other side of the coin, 29 percent of individuals were 'false negatives': tagged by the algorithm as not being at risk of PTSD, but then developing symptoms.
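The three figures above are standard classifier-evaluation rates. As a minimal sketch, using tiny made-up prediction lists rather than the study's data, they can be computed from paired predictions and outcomes like this:

```python
def evaluation_rates(predicted_at_risk, developed_ptsd):
    """Given parallel lists of booleans, return the three rates discussed above:
    the share of flagged patients who developed PTSD, the share of non-PTSD
    patients who were wrongly flagged, and the share of unflagged patients
    who nonetheless developed PTSD (the 'false negatives')."""
    pairs = list(zip(predicted_at_risk, developed_ptsd))
    flagged_outcomes = [dev for pred, dev in pairs if pred]
    non_ptsd_preds = [pred for pred, dev in pairs if not dev]
    unflagged_outcomes = [dev for pred, dev in pairs if not pred]
    hit_rate = sum(flagged_outcomes) / len(flagged_outcomes)
    false_alarm_rate = sum(non_ptsd_preds) / len(non_ptsd_preds)
    miss_rate = sum(unflagged_outcomes) / len(unflagged_outcomes)
    return hit_rate, false_alarm_rate, miss_rate

# Toy example: four patients, predictions vs. one-year outcomes.
preds = [True, True, False, False]
outcomes = [True, False, False, True]
print(evaluation_rates(preds, outcomes))  # (0.5, 0.5, 0.5)
```

In the study's terms, the reported values would correspond to a hit rate of 0.90, a false-alarm rate of 0.05, and a miss rate of 0.29 on the Atlanta and New York cohorts.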
Image source: Külli Kittus/Unsplash
Schultebraucks looks forward to more testing as the researchers continue to refine their algorithm and to instill confidence in the approach among ED clinicians: "Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice." She expects that, "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."
"Currently only 7% of level-1 trauma centers routinely screen for PTSD," notes Schultebraucks. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD." She envisions the algorithm being implemented in the future as a feature of electronic medical records.
The researchers also plan to test their algorithm at predicting PTSD in people whose traumatic experiences come in the form of health events such as heart attacks and strokes, as opposed to visits to the emergency department.