Are We Spending Our Limited Fiscal Resources Wisely?
Using benefit-cost analysis (or its equivalent, cost-benefit analysis) to evaluate projects not only increases the value of government spending, but increases equity as well.
Richard O. Zerbe is Daniel J. Evans Professor of Public Affairs at the Evans School of Public Affairs at the University of Washington. He joined the Evans School faculty in 1981, and his areas of specialization include law and economics, benefit-cost analysis, antitrust, environmental economics, and economic history. Zerbe previously served on the faculty at York University in Toronto and the University of Chicago. He also received a visiting appointment at Northwestern University, and a fellowship at Yale Law School. He is the author of more than 100 publications and editor of the Research in Law and Economics journal. Zerbe holds a Ph.D. in economics from Duke University, and an AB from the University of Oklahoma, where he studied mathematics, general science, and political science.
Republicans believe (mostly) that government money should be spent wisely, if spent at all. Democrats, on the other hand, believe (mostly) that it’s better to spend wisely than not.
Satisfying this desire for efficiency in government, as well as equity, requires that the rate of return on government spending be at least as great as the investment value of the money it displaces.
So, the question is: “Can we ensure that the rate of return on government spending represents wise investment?”
If budget decisions were made on this basis, I believe that wasteful government spending would be curtailed, and more socially useful spending would occur.
Given this sound logic, why isn't it happening?
In most cases, it’s politics.
A government representative will spend other people's money for his or her district or state, even if it's wasteful. And his or her colleague from another district or state will vote to approve the spending, as long as the first representative agrees to vote for similar wasteful programs that benefit the second representative's district or state.
The legislative branch as a whole could agree to take politics out of government spending by subjecting appropriations proposals to independent financial evaluation.
But this makes for hard work.
Indeed, when the Center for Benefit-Cost Analysis asked local governments why they didn’t do more investment analysis, they were frank enough to tell us that “it interfered with politics.”
To be sure, once an independent analysis is done, supporters and critics of a spending proposal are forced to address the financial issues in much greater detail—which is work. And what’s the use of having political power if you can’t reward your supporters so that you are reelected?
Putting politics aside, though—and I acknowledge that this is certainly difficult—it's worth bringing a fundamental concept of social well-being into this discussion. And here I'm talking about Pareto optimal improvements. Named for a 19th-century Italian economist, a Pareto improvement is one where at least someone gains and no one loses.
It's difficult to think of spending projects that satisfy the Pareto criterion. But it's not difficult to conceive of policy approaches that do.
In recent work here at the Center for Benefit-Cost Analysis at the Evans School of Public Affairs, University of Washington, we’re asking what impact a financial evaluation requirement would have on government investment.
We’ve found that using benefit-cost analysis (or its equivalent, cost-benefit analysis) to evaluate projects not only increases the value of government spending, but increases equity as well.
Benefit-cost analysis (BCA) involves a return on investment calculation made by looking at benefits and costs of projects. Our evaluation of this technique—as opposed to looking at particular projects—indicates that almost no one loses from it.
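The return-on-investment calculation behind BCA can be sketched in a few lines: discount each year's benefits and costs to present value, then compare them. The figures below (a hypothetical $100M project) and the 5% discount rate are invented purely for illustration.

```python
# Illustrative benefit-cost calculation for a hypothetical project.
# All dollar figures and the discount rate are made up for this sketch.

def present_value(flows, rate):
    """Discount a list of annual cash flows (year 0 first) to today."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

benefits = [0, 40, 40, 40, 40]   # hypothetical annual benefits ($M)
costs = [100, 5, 5, 5, 5]        # hypothetical up-front and upkeep costs ($M)
rate = 0.05                      # assumed social discount rate

pv_benefits = present_value(benefits, rate)
pv_costs = present_value(costs, rate)

ratio = pv_benefits / pv_costs   # a ratio above 1 suggests net social benefits
net = pv_benefits - pv_costs     # net present value
print(f"B/C ratio: {ratio:.2f}, NPV: ${net:.1f}M")
```

In practice the hard part is not the arithmetic but estimating the benefit and cost streams themselves, and choosing a defensible discount rate.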
Those who pay for a valuable government project today, but do not gain from it, will, in all probability, gain from some other government project in the future. Fully 90 percent of all taxpayers ultimately gain; the only losers are those in the higher income tax brackets, who would, in fact, pay more in tax dollars to support spending that would benefit those less well off. Even here, however, these taxpayers do not lose if they support the gains in social equity.
The federal government uses benefit-cost analysis through the Office of Management and Budget to evaluate new proposed government regulations. The U.S. Army Corps of Engineers and the Bureau of Reclamation also use benefit-cost analysis. In addition, the current administration is increasingly using BCA, in part, to evaluate its education spending. This is a start, but it leaves vast areas of government spending unanalyzed—in the transportation, housing, and environmental areas, for example.
In truth, there are important spending items for which investment returns can only be imperfectly calculated; a good example is defense spending. And then there’s politics. We can’t—and shouldn’t—totally eliminate politics. Yet politics that reduces our national income through waste is simply not good politics.
What should be done?
I propose that BCA be employed at the earliest possible point in the congressional appropriations process. The Congressional Budget Office (CBO) currently offers “scores” for congressional bills in an effort to help lawmakers practice fiscal prudence. In addition to these scores, we should add BCA evaluations. And we should have a bipartisan agreement that government-spending bills can’t go forward without scores and BCA evaluations. Finally, we should require that both the scores and BCA evaluations are made public prior to congressional votes on spending bills.
The use of BCA is, of course, costly—because it requires the collection and evaluation of data. But I strongly believe that the savings will far exceed the costs over time, because this will help us spend our limited fiscal resources more wisely.
Editor's Note: This article is part of a publication by the Evans School of Public Affairs at the University of Washington entitled "Making a World of Difference in a Very Different World."
The current cycle involves opiates, but methamphetamine, cocaine, and other drugs have sent the trajectory of overdoses in the same direction.
- It appears that overdoses are increasing exponentially, no matter the drug itself
- If the study bears out, it means that even reducing opiates will not slow the trajectory.
- The causes of these trends remain obscure, but a passage near the end of the write-up about the study offers a hint.
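What "increasing exponentially" means in practice is a roughly constant annual growth rate, so the logarithm of the counts rises linearly over time. The sketch below fits that log-linear trend to a synthetic series; the starting count and 9% growth rate are invented for illustration, not taken from the study.

```python
# Recovering an exponential growth rate from annual counts.
# The data below are synthetic, for illustration only.
import math

years = list(range(10))
counts = [17000 * 1.09 ** t for t in years]   # assumed ~9% annual growth

# Least-squares slope of log(counts) vs. year recovers the growth rate.
logs = [math.log(c) for c in counts]
n = len(years)
mean_x = sum(years) / n
mean_y = sum(logs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) \
        / sum((x - mean_x) ** 2 for x in years)

growth_rate = math.exp(slope) - 1
print(f"estimated annual growth: {growth_rate:.1%}")  # ≈ 9.0%
```

If overdose counts from several different drugs all fit this pattern, the growth is a property of the overall trend rather than of any one substance, which is the study's central claim.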
Through computationally intensive computer simulations, researchers have discovered that "nuclear pasta," found in the crusts of neutron stars, is the strongest material in the universe.
- The strongest material in the universe may be the whimsically named "nuclear pasta."
- You can find this substance in the crust of neutron stars.
- This amazing material is super-dense, and is 10 billion times harder to break than steel.
Superman is known as the "Man of Steel" for his strength and indestructibility. But the discovery of a new material that's 10 billion times harder to break than steel raises the question—is it time for a new superhero known as "Nuclear Pasta"? That's the name of the substance that a team of researchers thinks is the strongest known material in the universe.
Unlike humans, when stars reach a certain age, they do not just wither and die, but explode, collapsing into a mass of neutrons. The resulting space entity, known as a neutron star, is incredibly dense. So much so that previous research showed that the surface of such a star would feature amazingly strong material. The new research, which involved the largest-ever computer simulations of a neutron star's crust, proposes that "nuclear pasta," the material just under the surface, is actually stronger.
The competition between forces from protons and neutrons inside a neutron star creates super-dense shapes that look like long cylinders or flat planes, referred to as "spaghetti" and "lasagna," respectively. That's also where the overall name "nuclear pasta" comes from.
Caplan & Horowitz/arXiv
Diagrams illustrating the different types of so-called nuclear pasta.
The researchers' computer simulations needed 2 million hours of processor time before completion, which would be, according to a press release from McGill University, "the equivalent of 250 years on a laptop with a single good GPU." Fortunately, the researchers had access to a supercomputer, although it still took a couple of years. The scientists' simulations consisted of stretching and deforming the nuclear pasta to see how it behaved and what it would take to break it.
While they were able to discover just how strong nuclear pasta seems to be, no one is holding their breath that we'll be sending out missions to mine this substance any time soon. Instead, the discovery has other significant applications.
One of the study's co-authors, Matthew Caplan, a postdoctoral research fellow at McGill University, said the neutron stars would be "a hundred trillion times denser than anything on earth." Understanding what's inside them would be valuable for astronomers because currently only the outer layer of such stars can be observed.
"A lot of interesting physics is going on here under extreme conditions and so understanding the physical properties of a neutron star is a way for scientists to test their theories and models," Caplan added. "With this result, many problems need to be revisited. How large a mountain can you build on a neutron star before the crust breaks and it collapses? What will it look like? And most importantly, how can astronomers observe it?"
Another possibility worth studying is that, due to its instability, nuclear pasta might generate gravitational waves. It may eventually be possible to detect these here on Earth using very sensitive equipment.
The team of scientists also included A. S. Schneider from California Institute of Technology and C. J. Horowitz from Indiana University.
Check out the study "The elasticity of nuclear pasta," published in Physical Review Letters.
Scientists think constructing a miles-long wall along an ice shelf in Antarctica could help protect the world's largest glacier from melting.
- Rising ocean levels are a serious threat to coastal regions around the globe.
- Scientists have proposed large-scale geoengineering projects that would prevent ice shelves from melting.
- The most successful solution proposed would be a miles-long, incredibly tall underwater wall at the edge of the ice shelves.
The world's oceans will rise significantly over the next century if the massive ice shelves connected to Antarctica begin to fail as a result of global warming.
To prevent or hold off such a catastrophe, a team of scientists recently proposed a radical plan: build underwater walls that would either support the ice or protect it from warm waters.
In a paper published in The Cryosphere, Michael Wolovick and John Moore, from Princeton and Beijing Normal University, respectively, outlined several "targeted geoengineering" solutions that could help prevent the melting of western Antarctica's Florida-sized Thwaites Glacier, whose melting waters are projected to be the largest source of sea-level rise in the foreseeable future.
An "unthinkable" engineering project
"If [glacial geoengineering] works there then we would expect it to work on less challenging glaciers as well," the authors wrote in the study.
One approach involves using sand or gravel to build artificial mounds on the seafloor that would help support the glacier and hopefully allow it to regrow. In another strategy, an underwater wall would be built to prevent warm waters from eating away at the glacier's base.
The most effective design, according to the team's computer simulations, would be a miles-long and very tall wall, or "artificial sill," that serves as a "continuous barrier" across the length of the glacier, providing it both physical support and protection from warm waters. Although the study authors suggested this option is currently beyond any engineering feat humans have attempted, it was shown to be the most effective solution in preventing the glacier from collapsing.
Source: Wolovick et al.
An example of the proposed geoengineering project. By blocking off the warm water that would otherwise eat away at the glacier's base, further sea level rise might be preventable.
But other, more feasible options could also be effective. For example, building a smaller wall that blocks about 50% of warm water from reaching the glacier would have about a 70% chance of preventing a runaway collapse, while constructing a series of isolated, 1,000-foot-tall columns on the seafloor as supports had about a 30% chance of success.
Still, the authors note that the frigid waters of Antarctica present unprecedentedly challenging conditions for such an ambitious geoengineering project. They were also careful to caution that their encouraging results shouldn't be seen as a reason to neglect other measures that would cut global emissions or otherwise combat climate change.
"There are dishonest elements of society that will try to use our research to argue against the necessity of emissions' reductions. Our research does not in any way support that interpretation," they wrote.
"The more carbon we emit, the less likely it becomes that the ice sheets will survive in the long term at anything close to their present volume."
A 2015 report from the National Academies of Sciences, Engineering, and Medicine illustrates the potentially devastating effects of ice-shelf melting in western Antarctica.
"As the oceans and atmosphere warm, melting of ice shelves in key areas around the edges of the Antarctic ice sheet could trigger a runaway collapse process known as Marine Ice Sheet Instability. If this were to occur, the collapse of the West Antarctic Ice Sheet (WAIS) could potentially contribute 2 to 4 meters (6.5 to 13 feet) of global sea level rise within just a few centuries."