The Wire

  • New tunnel, premium RV section at Talladega Superspeedway on schedule despite weather


    Construction of a new oversized vehicle tunnel and premium RV infield parking section at Talladega Superspeedway is still on schedule to be completed in time for the April NASCAR race, despite large amounts of rainfall and unusual groundwater conditions underneath the track.

    Track Chairman Grant Lynch, during a news conference Wednesday at the track, said he’s amazed the general contractor, Taylor Corporation of Oxford, has been able to keep the project on schedule.

    “The amount of water they have pumped out of that and the extra engineering they did from the original design, basically to keep that tunnel from floating up out of the earth, was remarkable,” Lynch said.

  • Alabama workers built 1.6M engines in 2018 to add auto horsepower


    Alabama’s auto workers built nearly 1.6 million engines last year, as the state industry continues to carve out a place in global markets with innovative, high-performance parts, systems and finished vehicles.

    Last year also saw major new developments in engine manufacturing among the state’s key players, and more advanced infrastructure is on the way in the coming year.

    Hyundai expects to complete a key addition to its engine operations in Montgomery during the first half of 2019, while Honda continues to reap the benefits of a cutting-edge Alabama engine line installed several years ago.

  • Groundbreaking on Alabama’s newest aerospace plant made possible through key partnerships


    Political and business leaders gathered for a groundbreaking at Alabama’s newest aerospace plant credited the many key partnerships that made the project possible.

    Governor Kay Ivey and several other federal, state and local officials attended the event, which celebrated the construction of rocket engine builder Blue Origin’s facility in Huntsville.

3 days ago

Herds and the policy response to COVID-19


Governments implemented strict policies to stem the spread of the novel coronavirus. The widespread response suggests that governors and presidents saw COVID-19 as an unprecedented public health threat. Or did they? The economics of herding suggests possibly not.

The “Wisdom of Crowds,” also the title of James Surowiecki’s excellent book on the subject, implies this interpretation. Experts in each state reviewed the available knowledge on the virus, its potential lethality, and the vulnerabilities of their state. Each lockdown decision thus provides evidence of a perceived threat.

Independent, informed evaluations represent our best way to approach the truth. The argument is not that voting establishes truth; experts can be wrong even if they all agree. But a consensus of independent experts is more likely to be correct than any single judgment.


The policy response could reflect other factors. We should remember that safety is a luxury good; as people and nations become wealthier, we spend more on safety. The potential for, say, 100,000 deaths from a pandemic is far less acceptable today than it would have been 50 years ago. Yet crowds are not always wise; the “Madness of Crowds” is another possibility. The independence of expert judgments determines whether we gain wisdom or merely create a herd.

Training in public health affects experts’ independence. Experts in any field receive years of specialized, intensive training in law school, graduate school, or medical school. Academic disciplines have a dominant paradigm, or way of making sense of the world. If public health experts share the same way of thinking, they may all make the same mistake on COVID-19.

“Information cascades” pose another problem, often seen in business. A group of managers assembles to discuss opening a new retail store. After independently assessing the merits and demerits, most of the managers see the new store as a mistake. Yet the first manager argues that the new store will be wildly successful, and one by one the others go along. After the store fails, the managers all recall their initial misgivings.

What happened? Each manager knows her personal assessment of the venture could be wrong and revises her assessment based on others’ opinions. Managers do not want to appear incompetent – the only one unable to see the new store’s great value.

The visibility of errors also matters. There’s (allegedly) a saying among investment advisors that “no one ever got fired for recommending IBM.” Suppose an advisor recommends a stock no one else likes. If correct, the advisor’s clients make lots of money. If wrong, the advisor will need to find a new job. By making the same recommendation as everyone else, no advisor signals below-average investment acumen.

An economy or business needs to encourage occasional deviations from the herd. We need contrarian investors and thinkers. In markets, profit rewards correct contrarians. And some people are naturally contrarian. As Henry David Thoreau wrote, “If a man does not keep pace with his companions, perhaps it is because he hears the beat of a different drummer.”

Does the policy response to COVID-19 reveal herding? The policies involved – business and school closings, stay-at-home orders – are called nonpharmaceutical interventions (NPIs). NPIs have their critics; a 2019 World Health Organization review found the evidence for the effectiveness of most of them “limited.”

A divergence of expert opinion suggests herding was unlikely. If proponents of NPIs won out in debate, governors and presidents presumably found their arguments more persuasive. Vigorous debate usually improves decisions.

Our elected executives, I think, face a bias to action, worsened by the 24-hour news cycle and running tallies of COVID-19 cases and deaths. Yet the nearly 50 million jobs lost since March are also highly visible. Our inability to observe deaths without a lockdown ironically makes the benefits appear larger; perhaps millions have been saved.

Eight states never issued stay-at-home orders, and nations like Sweden eschewed lockdown policies, so we have not witnessed complete herding. More likely, the bias toward action resulted in excessive policies, with lockdowns imposed too early in some states.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

1 week ago

Defund the CDC


The Centers for Disease Control and Prevention (CDC) has made numerous mistakes during the COVID-19 pandemic. Mission creep at the CDC has left America vulnerable to a communicable disease. We need a new agency solely dedicated to battling infectious diseases.

Let’s start with the mistakes. A lack of early testing let the outbreak spiral out of control. Some have criticized the CDC for not using a German test approved by the World Health Organization (WHO). I will give the CDC a pass here, because the agency is required to develop a test for the U.S. during pandemics. Contamination of the test kits, however, falls squarely on the CDC – a mistake resulting from a breach of basic protocols.

Another miscalculation was preparing only 200 test kits. Although each kit could test 700 specimens, this was still extremely limited capacity to contain an easily transmissible virus.


The CDC equivocated and eventually flipped its position on masks. Throughout March, the agency claimed that only the sick, or persons caring for the sick, should wear masks. Dr. Anthony Fauci recently confirmed that this was deliberate misinformation intended to reserve masks for health care workers – but misinformation nonetheless.

Recently, The Atlantic reported that the CDC has been combining blood and nasal test results. Blood serum antibody tests only show that a person has had COVID-19 at some point. As a result, the case reports being watched so carefully as states reopen may include older cases.

Some commentators blame the CDC’s failures on Trump administration budget cuts. Yet as the Cato Institute’s Chris Edwards demonstrated, proposed cuts were never enacted; the CDC budget has remained steady (adjusting for inflation) since 2010 with a 12% increase in employees.

The CDC has spent money and time on efforts unrelated to infectious diseases. This is the essence of “mission creep,” or an expansion of tasks beyond an agency’s core competency. Bureaucrats seek out new tasks for additional funding, and sometimes Congress orders an agency to study something. As Cato’s Mr. Edwards writes, “How is CDC Director [Dr. Robert] Redfield supposed to remain alert to emerging epidemics when he is also supposed to manage programs on tiny teeth, colon cancer, opioids, child abuse, diabetes, workers’ compensation, lead-based paints, mold in buildings, and lifting heavy objects on construction sites?”

States imposed draconian “nonpharmaceutical interventions” (NPI) to slow COVID-19. Evidence demonstrating these policies’ effectiveness was very limited. A 2019 WHO review observed, “The evidence base on the effectiveness of NPIs in community settings is limited, and the overall quality of evidence was very low for most interventions.” Furthermore, “much of the evidence base is from observational studies and computer simulations.”

The role of computer models deserves further discussion. Computer models produced the “worst-case” scenarios predicting 1.7 million to 2.2 million deaths in an unconstrained pandemic and claiming that measures like closing schools and shutting down businesses could avoid most of these deaths. The projections were based on assumptions, not evidence, that NPIs would reduce interactions between people.

Lockdown policies might not reduce interactions due to offsetting personal actions. For example, children not interacting at school may interact instead at parks, day care centers, or malls. Grandparents taking on babysitting duties could potentially increase transmission to the elderly. Evidence from actual school closings would be required to validate the assumed reductions in interactions.

Instead of researching vaping and gun violence, the CDC could have extensively examined NPIs. We did not need to take a shot in the dark with policies that cost 40 million American jobs.

Limited government not only keeps government within its proper scope; it also avoids mission creep and ensures that critical tasks get done well. Governments generally underprepare for rare events like disasters and pandemics. This makes an agency focused exclusively on communicable diseases invaluable.

After the COVID-19 pandemic is behind us, we should seriously consider replacing the CDC with an agency focused exclusively on infectious diseases. Other CDC functions (like collecting vital statistics) can be moved elsewhere within the Department of Health and Human Services. We need an approach to ensure preparedness for the next pandemic.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

3 weeks ago

Sports and COVID risk


Sports across the world are beginning restarts. Germany’s top soccer league resumed in May, while NASCAR, the UFC and the PGA Tour are back underway. Yet are sports truly worth risking a deadly illness?

The assumption of risk provides a guiding principle. Adults should be free to engage in risky activities. Voluntary, informed consent makes many activities acceptable; without it, boxers and UFC fighters would be committing felony assault against unwilling participants. Sports participation involves numerous risks, of which COVID-19 could be one.

Pro athletes are well compensated, so a stronger case exists for resuming pro sports than college sports. In 2019, baseball’s minimum and average salaries exceeded $500,000 and $4 million respectively. Yet because we allow 18-year-olds to join the military, I think college athletes should be able to accept COVID-19 risk if they choose.


Sports are group activities, raising a thorny issue: a league will either return to action or not. Dealing with risks of death is a fundamental and personal part of life. In America, we believe that people should get to make fundamental life decisions for themselves. For instance, as restaurants reopen, people can choose whether to accept the risk.

All players must live with a league’s decision. For some, the league decision will differ from what they would choose based on their personal values. The league’s decision may appear coercive; for example, NBA players may feel “forced” to report for the league’s restart. Yet this is not truly coercion. No player will face jail time for refusing to play; they can give up their contract and roster spot.

As leagues grapple with resuming play, remember that safety is a luxury good. People generally choose more safety as they become richer. Star players worth millions of dollars may be unwilling to accept much risk. We should expect leagues to exercise abundant caution and some stars to step away.

Events with few or no fans seem likely for the near term. Consequently, leagues relying more on gate receipts than television broadcast rights should be less likely to restart. College football depends more on ticket revenues (including donations tied to tickets) than the NFL, and the NCAA seems unenthusiastic about playing games with empty stadiums.

Televised sports will recover one of the pandemic’s significant economic losses. Economists call the difference between the value of a good or experience to people and the price they must pay consumer surplus. Millions of Americans watch and follow sports while paying very little to do so; sports generate a lot of consumer surplus. The cost of the sports shutdown thus far exceeded sports’ measured contribution to GDP.
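The arithmetic behind this claim can be sketched in a few lines. The numbers below are entirely hypothetical, chosen only to illustrate why consumer surplus can dwarf what shows up in GDP:

```python
# Hypothetical figures, for illustration only: a fan values a season of
# televised games at $500 but pays only $60 toward it (a share of a cable
# or streaming bill). Consumer surplus is value minus price, summed over fans.
value_per_fan = 500        # assumed willingness to pay
price_per_fan = 60         # assumed actual outlay
fans = 1_000_000

surplus_each = value_per_fan - price_per_fan
total_surplus = surplus_each * fans
gdp_contribution = price_per_fan * fans  # only actual spending counts in GDP

print(surplus_each)                       # 440
print(total_surplus)                      # 440000000
print(total_surplus > gdp_contribution)   # True
```

Because GDP records only the $60 of spending per fan, shutting sports down destroys far more value ($440 per fan here) than the GDP statistics reveal.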

One factor receiving surprisingly little attention is athletes’ benefits from competition. For example, athletes train for years to compete in the Olympics. Missing out on the experience of a lifetime seems like a significant loss. The Summer Olympics have not been canceled, just postponed to 2021.

The potential availability of a vaccine or cure matters significantly for restarting sports. President Trump’s Operation Warp Speed promises a vaccine by December. As a Troy Trojans football season ticket holder, I would favor starting the season in January if we will have a vaccine.

States’ business closure orders this spring exempted “essential” businesses. Perhaps sports should be essential. Workers in health care, meatpacking, grocery stores, and package delivery have been expected to work despite COVID-19. Meatpacking plants have proven particularly vulnerable yet were ordered by President Trump to remain open. Many essential workers probably never realized they were working such potentially dangerous jobs.

Other Americans, myself included, have been working safely from home. Yet we rely on essential workers putting themselves at risk; college classes can only go online if someone keeps the power grid and internet functioning. Sports comprise an important part of the lives of many Americans. A resumption of sports could be viewed as essential for the well-being of essential workers.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

1 month ago

Dr. Daniel Sutter: Will things ever change?


The killing of George Floyd by Minneapolis police officer Derek Chauvin ignited nationwide protests. While we advise jurors to withhold judgment until presentation of all the evidence, video of the incident seems definitive. Mr. Floyd joins a much too long list of minority victims of police violence.

Justice may be served in Minneapolis. The four officers involved were fired the next day and Mr. Chauvin charged with third-degree murder within a week (which has since been upgraded to second-degree murder). Does this render the protests moot? Not necessarily. Mr. Chauvin was not charged with first-degree murder, and the charges could be reduced when attention focuses elsewhere.

I am something of an anomaly, a law-and-order libertarian. I have great respect for the police because of the injustice of crime. When someone takes your belongings – whether milk money or a car – you naturally feel the injustice. Bullies and criminals violate the social peace. The police respond to our calls for help. Bullies and criminals terrify many of us, but not the police.


Police use of excessive force is a danger for all Americans. Minorities, however, have far more such encounters. Jason Riley is a member of the Wall Street Journal’s editorial board. In “Please Stop Helping Us,” Mr. Riley details his encounters with the police as a law-abiding youth, often for nothing more than “driving while black.” He observes, “Was I profiled based on negative stereotypes about young black men? Almost certainly. But then everyone profiles based on limited knowledge, including me.”

I have never faced such discrimination nor experienced the ensuing reactions. Mr. Riley did not let profiling poison his life view, and this is admirable. I also appreciate that some young men will show resentment, which might provoke police wrath. Minorities bear the brunt of police mistakes, like Breonna Taylor, killed in a botched police raid this March.

We should hold police officers to an extremely high standard because they can use deadly force. We should also remember how police officers experience encounters with us. While the majority of traffic stops will be routine, an officer never knows when a confrontation might occur.

Minimizing inevitable tragic accidents provides the first place for change. Yale law professor Stephen Carter tells his first-year students to never push for a law they would not want people killed to enforce. Mr. Floyd was apprehended for spending a counterfeit $20; Eric Garner was killed in 2014 while evading New York’s cigarette taxes. We should not criminalize so many things.

I believe that the Derek Chauvins are a minuscule fraction of police officers. Yet we lack institutional controls on misbehavior. Police officers have a common interest in disciplining their bad apples, but this rarely happens.

Misbehavior is likely tolerated because police officers, like firefighters or soldiers, depend on each other in matters of life-and-death. I have never served in such positions and may not appreciate this need to trust colleagues. Nevertheless, bad apples abuse toleration; Mr. Chauvin ruined the other officers’ lives in addition to ending Mr. Floyd’s life.

Police unions vigorously defend and enforce privacy rules shielding rogue officers. A retired New York Police commander wrote, “The unions, at least in New York City, outright just protect, protect, protect the cops.” Minnesota Attorney General Keith Ellison cites the Minneapolis police union as contributing to the department’s problems.

Asset forfeiture laws and militarization also contribute. Police departments can seize and keep cars, money and other property from people not convicted of crimes, often minorities unable to contest the seizure. For decades, police departments have received surplus military equipment. Militarization and policing for profit must make officers feel like part of an army of occupation, not public servants.

Law enforcement is a noble profession when the police “protect and serve” citizens. Police should get the benefit of the doubt when using force, but this is only possible if departments fire miscreant officers.

Some encouraging incidents have occurred this past week. In Genesee County, Michigan, Sheriff Chris Swanson took off his riot gear and walked and talked with protestors. The cycle of violence will never end if police and citizens view each other as adversaries.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

1 month ago

Did the lockdown save lives?


In March, states undertook dramatic and unprecedented measures to stem the spread of the SARS-CoV-2 virus. And yet COVID-19 has claimed 100,000 lives in the United States. Was the lockdown effective? Economists frequently address such questions in our research.

Seeing the unseen, or the path that we did not choose, is the key here. It is the fundamental challenge of economics, as illustrated by Frederic Bastiat’s parable of the broken window. A shopkeeper must replace a broken window. A neighbor, perhaps offering solace, points out that if windows never got broken, the town glazier would starve. To avoid believing that broken windows boost the economy, we must recognize what the shopkeeper did not buy due to replacing the window.


Economists visualize the alternative paths we could choose. What would have happened if we hadn’t passed NAFTA, hadn’t bailed out banks during the financial crisis, or had raised the minimum wage to $15 per hour? The term counterfactual refers to the unchosen path.

Economists devise principles for constructing counterfactuals. Scenarios must be logically coherent and consistent with the available evidence. We must avoid overly optimistic or pessimistic alternatives.

I have never estimated potential deaths in an outbreak of a disease but have researched tornado warnings and “worst case” tornadoes. Like most economists, I recognize the challenges in evaluating the lockdown.

Here’s a first challenge. WalletHub has scored the strictness of states’ COVID protection measures. The average COVID fatality rate for the 10 states with the strictest lockdown policies is 686 per million residents, versus 68 per million for the 10 least strict states – one-tenth as much. The three states with the highest fatality rates are among the ten strictest.

Does this show that lockdowns cause COVID-19 deaths? No. The states suffering the worst outbreaks will impose the strictest measures. This is the endogeneity of policy problem. Ignoring this issue would lead us to conclude that hospitals cause death because many people die there. Controlling for policy endogeneity is a major research focus.
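Policy endogeneity can be made concrete with a toy simulation. In the sketch below, every number is invented: stricter policy reduces deaths by construction, yet because strictness responds to outbreak severity, a naive comparison still shows the strictest states suffering the most deaths:

```python
import random

random.seed(0)

# Hypothetical simulation: each "state" sets policy strictness in response to
# its outbreak severity, even though (by assumption) strictness CUTS deaths.
states = []
for _ in range(50):
    severity = random.uniform(0, 10)                # underlying outbreak severity
    strictness = severity + random.uniform(-1, 1)   # policy responds to severity
    # deaths rise with severity and, by assumption, FALL with strictness
    deaths = 100 * severity - 20 * strictness + random.uniform(-50, 50)
    states.append((strictness, deaths))

# Naive comparison: average deaths in the 10 strictest vs. 10 least strict states
states.sort(key=lambda s: s[0])
least, most = states[:10], states[-10:]
avg = lambda group: sum(d for _, d in group) / len(group)
print(avg(most) > avg(least))  # True: strict states show more deaths anyway
```

The naive comparison gets the sign of the policy effect exactly backwards, which is why researchers must control for why a state adopted its policy before measuring what the policy did.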

Another problem arises because states imposed policies and Americans realized that COVID-19 was a serious health threat at about the same time. The NBA suspended its season March 11, people sharply reduced travel around March 15, and the first state stay-at-home order took effect March 19. We have very few data points to tease out the effect of various policies from behavioral changes.

The United States was slow in rolling out testing for COVID-19, creating another challenge. If we compared the number of COVID-19 cases in the month before and after lockdowns to test effectiveness, the total would rise simply because many more people were tested. Can we detect a decline in infections during a period of expanding testing?
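A back-of-the-envelope calculation, using entirely hypothetical numbers, shows the problem: confirmed cases can more than double even while true infections fall, simply because testing expands.

```python
# Hypothetical figures: true infections FALL 30% from March to April,
# but the number of tests run triples over the same period.
population      = 10_000_000
true_infections = {"March": 100_000, "April": 70_000}
tests_run       = {"March": 200_000, "April": 600_000}

confirmed = {}
for month in true_infections:
    # crude assumption: tests find infections in proportion to prevalence
    confirmed[month] = tests_run[month] * true_infections[month] // population

print(confirmed)  # {'March': 2000, 'April': 4200}
```

Under these assumptions, the raw case count doubles while the outbreak is actually shrinking, which is why raw before-and-after case counts cannot measure a lockdown’s effect during a testing ramp-up.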

Even if March’s lockdown was effective, the policies may not be effective in another time or place. Policy effects may not transfer for several reasons. For the COVID lockdown, an important factor is peoples’ willingness to comply. If Americans do not favor shutting down the economy for a second wave of the virus, stay-at-home orders may prove ineffective when reimplemented.

Researchers at Columbia University have evaluated the lockdown, based on computer simulations using travel data between cities along with reported cases and deaths. The policies appear to have stemmed the illness; indeed, implementing the same policies two weeks earlier could have avoided 83% of U.S. deaths through May 3.

The sophisticated technical analysis here, I think, obscures a bigger point. “Nonpharmaceutical interventions,” as epidemiologists call such policies, do not prevent COVID-19 deaths. Americans who did not get COVID this spring can still get sick next fall. Only a vaccine or effective treatment will truly prevent deaths.

Whether school closings and stay-at-home orders slow an outbreak is an important and really challenging research question. This question must be answered before we compare economic costs and health benefits. Ultimately a lockdown is merely a delaying action. Delaying actions are only worth fighting as part of a larger strategy.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

1 month ago

The future of college


COVID-19 has disrupted almost all aspects of life, including higher education. Colleges moved classes online during the spring semester, and some observers believe that this will permanently change higher education. I think it will create a new focus on how college creates value.

Online education has existed for years. Arguably though, the willingness of the nation’s most prestigious universities to shift online affirms the quality of online instruction. I would caution about reading too much into any response to this unprecedented pandemic.

Higher education’s predicament becomes much greater if the 2020-21 year ends up online. I will not try to forecast the progression of COVID-19 here, but the California State University system recently announced online classes for fall. An online year would produce an immediate financial crisis and a longer-term viability challenge.


Universities take on considerable debt for classrooms, dorms, dining halls, and recreation centers. Tuition may pay for classroom buildings, but room and board payments service the bonds for dorms and dining halls. Similarly, many football schools have financed stadium improvements using revenue from long-term television contracts. An online year would almost certainly create a need for a government bailout.

The longer-term issue would begin when campuses reopen. Will students return in the new normal? Focusing on college’s value proposition for students helps here.

Most traditional academics believe that online education is low quality, but this may simply reflect our biases. I see student learning styles as more relevant; some students can learn readily online. A parallel, I think, is the large state university versus the smaller college. Some students can succeed with the anonymity of the giant lecture hall; others need a personal connection with professors and classmates.

Why employers value college degrees is also relevant, and there are three competing explanations here. First and most prominent is human capital: in this view, classes teach skills and knowledge used in jobs. A second explanation is signaling, in which a college degree provides valuable information about a student’s talents even though course content is not used on the job. Finally, we have legal restrictions; laws, primarily licensure, require a person hired for certain jobs to have a specified degree.

Online education can most readily supply legally required degrees. When job seekers and employers view the degree as merely checking a legal box, both will want to meet the requirement with minimal cost.

The signaling function might be the most difficult to replicate online. Education works as a signal when only students possessing certain traits (e.g., the ability to learn challenging material quickly) earn a degree or high grades. Credible signaling requires a level of familiarity only face-to-face interactions have traditionally afforded.

The usefulness of online education for human capital depends on the skill or knowledge in question. Consider learning to play a musical instrument (something I know only from reading). Such instruction is usually one-on-one or in very small groups; watching a how-to video by one of the world’s leading musicians does not work well. Music teachers have offered lessons on Zoom during the pandemic; perhaps virtual instruction will prove effective.

Higher education involves valuable experiences outside of the classroom. While this might sound like an apology for parties and football games, for many people, college is a valued part of growing up. People make lifelong friends and often meet their spouses. College is about more than just book learning.

A four-year party might seem unnecessary, but life is about more than mere survival. Fine food and elegant dining are not just about efficiently ingesting calories. Clothes for many people are fashion statements. This is normal in a prosperous society; the quality of the journey becomes paramount, not merely getting from A to B.

The economic slump, I think, threatens higher education’s long term viability more than COVID-19. The pandemic may trigger a depression leaving the United States and the world substantially poorer than at the start of 2020. If so, we will be able to afford fewer luxuries, including traditional college.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

2 months ago

Did we give informed consent?


Our federal and state governments implemented unprecedented measures beginning in March to stem the spread of COVID-19. Informed consent provides a foundation of medical ethics. Did our elected officials and public health experts get our informed consent for policies that have put 30 million Americans out of work?

Medical experiments have often been performed on unsuspecting subjects, as in the infamous Tuskegee Experiment. The U.S. Public Health Service in 1932 began studying the health effects of syphilis on African-American men recruited with a promise of free health care. Even after penicillin emerged as an effective treatment, the participants received only placebos and went untreated until the study’s public revelation in 1972.


Informed consent became the ethical dividing line. According to the American Medical Association, “The process of informed consent occurs when communication between a patient and physician results in the patient’s authorization or agreement to undergo a specific medical intervention.” A patient should be provided information on “the burdens, risks, and expected benefits of all options, including foregoing treatment.”

We canceled sports and public gatherings, closed schools and universities, and shuttered nonessential businesses to “flatten the curve.” The new coronavirus can be transmitted by persons without any symptoms, so isolating the sick is not necessarily effective for COVID-19. Millions of cases over just a few months would overwhelm hospitals; avoidable deaths would result from critically ill patients not receiving the best possible care.

Several epidemiological studies offered frightening worst-case scenarios. The highly influential study from Imperial College in London projected that 81% of Americans would get the illness with 2.2 million deaths. Stay-at-home orders seemed prudent to prevent such a disaster.

Yet, even extreme social distancing will not prevent COVID-19 cases and deaths, only delay them. Everyone is potentially susceptible to a brand-new virus; staying home to keep from getting sick does not change this fact. That two weeks or two months of lockdown would prevent the feared deaths from ever occurring was a false hope.

The epidemiological models did not hide this. The Imperial College study warned that with relaxation of suppression measures, “transmission will rapidly rebound, potentially producing an epidemic comparable in scale to what would have been seen had no interventions been adopted.” To avoid these 2.2 million deaths, our current policies would “need to be maintained until a vaccine becomes available (potentially 18 months or more).”

Herein lies the potential lack of informed consent. Was it ever clearly explained that our policies were merely going to delay the health crisis? Would we have chosen to bear such immense economic pain for only a stay of execution?

The policies implemented in March will likely prove unsustainable. The nationwide lockdown was inevitably going to either be relaxed or simply collapse as Americans began ignoring the orders, and long before a vaccine or cure would be available. The policy debate has been couched as a choice between public health or the economy, an unconstrained pandemic or a depression. Our unsustainable policies might deliver a depression and a pandemic.

Our delaying action, though, has bought us time. We have learned more about the foe. We have controlled-trial evidence that remdesivir effectively treats COVID-19 (it is not a cure, but it helps). Doctors have learned that some healthy young persons who fell seriously ill were suffering an immune-system overreaction to the virus. And some patients may have been put on ventilators too quickly.

We have also expanded health care system capacity. Temporary hospitals have been established and ICU beds added. We can test many more people for the virus now and have antibody blood tests as well. We should soon have adequate supplies of protective equipment for health care and nursing home workers.

Knowledge and preparedness should save lives in a potential “second wave.” We can use lessons learned to help reopen businesses and schools safely. Buying time may prove to be the shutdown’s greatest benefit.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

2 months ago

Dr. Daniel Sutter: Challenges and a coronavirus vaccine


President Trump recently announced “Operation Warp Speed,” a plan for a novel coronavirus vaccine by the end of 2020. I welcome the announcement because the greatest impediment to a vaccine now is the Food and Drug Administration’s (FDA) approval process. Speedy development of a vaccine is not without precedent.

The president’s comparison to the Manhattan Project, however, seems excessive. A vaccine is knowledge, which we may already possess. At least three candidate vaccines are in human safety testing, with perhaps thirty more in development. We hopefully have the recipe for an effective vaccine.

The FDA vaccine approval process starts with animal testing to evaluate safety and the production of antibodies followed by three phases of human testing. First is a very small sample to test safety; vaccines sometimes induce immune system reactions which damage organs. If judged safe, two rounds of randomized control trials ensue involving hundreds and then thousands of participants.


Only a safe and effective vaccine will benefit Americans. A vaccine will likely be part of returning life to normal. An ineffective vaccine may unleash another COVID outbreak and thousands of deaths.

Can this be accomplished quickly? Dr. Anthony Fauci has repeatedly stated that developing a vaccine will take 12 to 18 months; other experts suggest years. History shows otherwise, at least in one case. In 1957, a new strain of influenza, the Asian flu, was detected in Hong Kong in April. Dr. Maurice Hilleman, chief of respiratory diseases at Walter Reed Army Institute of Research, wanted a vaccine ready before it reached the United States. When the flu arrived four months later, 40 million Americans had been inoculated. The pandemic claimed 70,000 American lives, but the total likely would have been much higher without a vaccine.

Dr. Hilleman devised both an effective vaccine and a plan for manufacturing it. By one account, Dr. Hilleman “bypassed regulatory agencies in his efforts to push the vaccine forward because he worried those agencies would slow the process down.”

How could we expedite the process today? I am not a medical researcher, but we could probably combine the two phases of effectiveness testing and test several candidates with a common control group.

We could also employ human challenge testing and deliberately infect participants with the virus. We would administer a vaccine to participants, give their immune systems time to develop antibodies, and then expose them to the coronavirus. Trials normally rely on participants encountering the virus during their daily routines.

Human challenge testing offers several advantages. A trial would require fewer participants since all get exposed. The test concludes faster as exposure occurs immediately after the vaccine has had a chance to work. And researchers can control the exact exposure.

A significant ethical issue arises, as control group participants receive a placebo. Medical researchers would be intentionally infecting unprotected people with a deadly virus.

A paper in The Journal of Infectious Diseases, however, argues that human challenge testing could be ethical. We could include only young adults with no known risk factors, who face a low risk of death from COVID-19, provide them with the best healthcare available if needed, and seriously test informed consent.

A moral society is based on voluntary interaction. Consent distinguishes gifts from theft and democracy from dictatorship. I believe that informed consent justifies deliberate exposure to the coronavirus. History sadly provides examples of despicable, unethical medical experiments, like the Tuskegee Experiment. We will not be doing anything remotely similar here.

As an economist, I would further suggest compensating participants with fame if not money. Any participants who unfortunately die in the testing should know that we will commemorate their lives and sacrifice in the COVID pandemic museum.

Operation Warp Speed has let Americans know that we, not bureaucratic rules, can guide the process of vaccine testing. Let’s now devise an expedited process to get an effective vaccine to Americans as soon as possible.


2 months ago

Markets and medicines


Would you try an unproven drug to treat COVID-19? The Food and Drug Administration (FDA) has understandably not approved any drug for this brand-new illness. A willingness to try suggests seriously rethinking FDA regulation of drug effectiveness.

The FDA’s authority to regulate new drug safety dates to the 1930s, with effectiveness regulation added in 1962. Regulation is pre-market: a company must prove safety and effectiveness to the FDA before marketing a drug in the United States.

Doctors can prescribe drugs already on the market, which have been demonstrated safe, to a patient for any purpose. Drugs introduced for one purpose will sometimes work on other ailments. For instance, Rituxan was introduced for one type of non-Hodgkin’s lymphoma but oncologists discovered it works on many lymphomas and some cancers. Companies cannot promote any such “off-label” uses of drugs.


FDA regulation exists because many Americans fear that companies would market useless drugs to desperately ill patients. The fear makes sense. People lack the expertise to know if a medicine can deliver as promised. Doctors may not treat enough patients to judge effectiveness themselves, and drug companies may offer financial incentives for prescribing. Marketing could possibly offset research in boosting profits.

Yet economists have found otherwise. Very few drugs on the market before 1962 were ineffective. Why? Insurance companies and hospitals provide an important check. We may not remember how drug X was overhyped, but insurers will and may not pay for the manufacturer’s next drug.

One out of five U.S. prescriptions may be written off-label. As economist Alex Tabarrok notes, off-label usage further demonstrates that the market can evaluate effectiveness without the FDA.

Proving effectiveness, however, is difficult and costly. Two-thirds of the $3 billion average cost of a new drug approval is spent showing effectiveness. Some “effective” drugs might only work for 20 percent of patients, making a demonstration challenging. Drugs eventually approved by the FDA are often available in Europe months earlier, demonstrating that our regulation generates unnecessary delays.

FDA regulation has slowed our response to COVID-19, beginning with tests for the virus. U.S. law charges the Centers for Disease Control with developing a test in an emergency; non-CDC tests must receive FDA approval. The FDA did not approve any other tests until late February and only approved the first serum antibody test on April 3.

At least a dozen drugs are being tested for treating COVID-19. The FDA has allowed many Americans into these trials using emergency authorizations. Machine learning using the novel coronavirus’s genome and laboratory testing identified the most promising drugs, which quickly went into testing on patients. This is part of a worldwide effort to find a treatment, not bilk desperate patients and their insurers.

FDA approval represents a major hurdle for any vaccine. We have been told that a vaccine will not be available for one or two years, but this is largely due to the FDA’s approval process. President George W. Bush planned to deploy a vaccine against a new influenza strain within six months; a swine flu vaccine was available in nine months. Advances in biomedical research have developed potential vaccines for COVID-19 in record time, with at least three already in human safety testing.

Only a safe and effective vaccine will benefit us. Nonetheless, an expedited approval process could still yield results. One innovative proposal involves intentionally exposing immunized subjects to the novel coronavirus in “human challenge” testing, speeding up the process significantly. The candidate vaccines may not work, but any delay of an effective vaccine due to bureaucratic paperwork should be intolerable.

COVID-19 starkly highlights two competing worldviews among economists and policymakers. One sees the pursuit of profit imperiling people unless checked by government regulation. The other believes that businesses earn profits over time by benefitting customers. The former view drives FDA regulation. If we trust doctors in treating COVID-19, we should reexamine the case for FDA effectiveness regulation.


2 months ago

Is this a recession or a holiday?


We have experienced unprecedented economic effects of the COVID-19 pandemic and social distancing policies. Twenty-two million Americans lost jobs in four weeks. The Federal Reserve Bank of St. Louis projects potentially 30% unemployment and a 50% decline in GDP by June. This looks like a depression, but is it really?

A recession or depression is visible – idle factories, reduced investment, and unemployed workers – but the causes are typically numerous and elusive. An economy in a depression is like a motor that has stopped working. Economists also note that recessions extend across most of the economy, as opposed to being a slump in one sector. The oil bust of the mid-1980s, for example, did not produce a recession.


Our current slump meets the breadth requirement. While sectors like tourism and entertainment have been particularly hard hit, the 30% decline in global oil demand demonstrates widely reduced economic activity. The stock market tumbled over 30% as well, consistent with a broad slump.

Yet in a very important way, the COVID-19 slump, dubbed by some the Great Suppression, differs from recessions and depressions; the decline has resulted from closing businesses to stem the virus’ spread. Christmas Day, when GDP craters as most factories, stores and restaurants close, perhaps provides a more appropriate economic analogy. The Christmas shutdown is intentional; people do not want to work and businesses oblige. Is the COVID-19 slump a lengthy Christmas break?

If so, we could expect a vigorous rebound when government closures end, just as on December 26 (or perhaps January 2). No one worries that the economy will fail to return to normal after the Christmas shutdown.

The aftermath of World War II provides another hopeful example. Many of America’s factories produced tanks, planes, ships, and munitions for the war. Measured GDP was high but consumption of goods and services was modest. Economists were unsure the economy would transition to peacetime production after fifteen years of limited consumer production due to the Depression and the War, but it did.

One important difference exists between the COVID slump and the Christmas crash: we plan and prepare for Christmas. The COVID crash was unanticipated and of uncertain duration. Households did not stock up on supplies or accumulate extra savings. The Federal government did not run budget surpluses ahead of the crisis.

In addition to Christmas, small businesses often close when the owner goes on vacation, and seasonal businesses survive months-long closures. How and when will the Great Suppression turn into a recession? Most likely when currently shuttered companies go out of business, or their employees take other jobs.

Closed businesses have no revenue to pay employees, rent, or leases on equipment. They may potentially hibernate and come back to life. Laid-off employees may have few other job options and landlords may lack new paying tenants. The Paycheck Protection Program and supplemented unemployment benefits will hopefully help businesses hibernate rather than go bankrupt.

The pandemic has two distinct components: reduced economic activity as people try to stay safe, and government closure of non-essential businesses. These two actions occurred nearly simultaneously and now complicate business owners’ calculations. When states lift stay-at-home orders, will customers return to restaurants and gyms? The existence of COVID-19 will significantly alter our economy. Previously successful businesses may be unprofitable until we have a vaccine or cure.

Business owners must also weigh a political uncertainty. In March we chose public health over the economy. If COVID-19 cases increase after states ease restrictions, will we choose public health again? If so, owners may squander their savings reopening businesses that get closed a second time.

Business failures will have ripple effects. Landlords will feel financial hardship as businesses and tenants are unable to pay rent. Loans made to these businesses will go unpaid. The ensuing rounds of spending reductions will not be directly connected to the closed businesses. The shutdown will have become a recession or depression and it will be too late to “reopen” the economy.


3 months ago

Should we sue China over COVID?


Several lawsuits seek monetary damages from the Chinese government for the COVID-19 pandemic. Politicians seem to like the idea, too. South Carolina Senator Lindsey Graham said, “If it were up to me the whole world would send China a bill for the pandemic.” Tennessee Senator Marsha Blackburn thinks China should forgive the portion of our national debt it holds. Should anyone take financial blame here?

The lawsuits face an enormous hurdle in sovereign immunity. With limited exceptions, Americans cannot sue foreign governments based on a doctrine built into international treaties. I am not a lawyer, so I will not prognosticate on the lawsuits.

The new coronavirus originated in bats and began infecting people in Wuhan, China, in late 2019. This does not, to my mind, create liability. Viruses periodically jump from animals to humans, and each time it happens somewhere. Should we have been financially liable for the 2009 H1N1 swine flu and 1918 Spanish flu pandemics, which originated here?


China initially denied the existence of the new virus and downplayed its spread. Yet this is not without precedent. The Spanish flu began in U.S. Army training camps. Even as hundreds of soldiers fell ill, units were sent to France with no warning offered, spreading the illness worldwide. The ensuing pandemic claimed an estimated 50 million lives.

International health experts have also made misstatements. The World Health Organization (WHO) equivocated on person-to-person transmission until late January and did not declare a global pandemic until March 11. The Centers for Disease Control (CDC) thought that isolating symptomatic travelers from China and very limited testing could contain the virus. The CDC only recently acknowledged the value of masks despite weeks of reports about presymptomatic and asymptomatic transmission.

China has almost certainly underreported numbers of cases and deaths. But so have other nations. Medical experts believe that the case fatality rate for COVID-19 is perhaps 1%. The official fatality rates currently are 4.1% in the United States, 12.8% in the United Kingdom and Italy, and 15.3% in France. These rates imply underreporting of cases by factors of four to fifteen. Pneumonia deaths among persons never tested are not necessarily getting attributed to COVID-19.
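That underreporting arithmetic can be made explicit. A minimal sketch, taking the column’s roughly 1% true fatality rate as given; the country figures are the official rates quoted above:

```python
# Implied case underreporting, assuming the true fatality rate is about 1%
# (the column's figure). Official rate / true rate = underreporting factor.
TRUE_FATALITY_RATE = 0.01

official_rates = {
    "United States": 0.041,
    "United Kingdom": 0.128,
    "Italy": 0.128,
    "France": 0.153,
}

for country, rate in official_rates.items():
    factor = rate / TRUE_FATALITY_RATE
    print(f"{country}: cases underreported by a factor of about {factor:.0f}")
```

The computed factors run from about 4 (United States) to about 15 (France), matching the “four to fifteen” range in the text.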

Health data is very inaccurate, even with our enormous medical establishment. The “fog of war” is thick during a pandemic. Misstatements must go well beyond the pale to rise to intentional distortion.

I think China can be fairly criticized for not immediately cooperating fully with the CDC and WHO. Emergent new viruses challenge humanity’s collective medical knowledge. The novel coronavirus’s genome sequence was posted on January 10, greatly assisting medical researchers. Still, every day matters with a new virus; the brightest medical minds must get to work as soon as possible.

COVID-19 will result in a fragile environment for international trade and global society. The novel coronavirus will not disappear once the current outbreak is controlled. Pandemic potential will exist until the virus circulates widely or a vaccine is developed. One contagion model suggests that 97% of Americans will not have had COVID-19 after this outbreak. Our current disaster could be repeated many times over.

Reopening the American economy without triggering a new pandemic will require great care. Reopening international trade and travel will require even greater care and trust: an outbreak in any nation could ignite a repeat. I suspect Americans and Europeans will not accept a significant renewed pandemic risk due to trade and travel.

China is very significant in the global economy but the Chinese government’s lack of transparency has eroded the trust necessary for globalization. Will we trust China to be truthful so any renewed outbreak can be contained? I suspect not, with real economic consequences.

The global economy makes us wealthier and our lives richer. Globalization has drawbacks, but overall it is a very positive force for humankind. Loss of the trust globalization requires may be the most significant economic cost of COVID-19.


3 months ago

Litigation in the public interest?


America needs billions of masks to protect against the coronavirus, particularly high-grade N95 masks for healthcare workers. Nonetheless, fear of litigation delayed delivery of millions of construction masks to healthcare workers. Should the law be slowing our emergency response?

America’s largest mask producer, 3M, will soon be producing 100 million masks a month. The company normally produces more construction masks than medical masks; the two are similarly effective, but medical masks must meet more stringent standards. The construction masks would certainly protect better than bandanas.


As reported by the Washington Post, 3M executives feared potential liability. Even with protective equipment, some healthcare workers will get sick. The performance differences could provide grounds for lawsuits. 3M would not ship the masks without a Federal liability waiver, which Congress approved in mid-March.

Who is to blame for the delay? A professor of bioethics quoted in the Post article states, “Don’t talk to your lawyers if you’re making masks or gowns or ventilators.” Yet 3M executives and lawyers have a duty not to expose the company to potentially ruinous litigation. The federal government could have acted more quickly, as mask makers raised the liability issue in early February.

Many might blame the lawyers who might file such lawsuits. Suing a company helping out in a crisis is hardly praiseworthy. But this misses the more significant question: why should plaintiffs have any chance of winning such a suit?

Plaintiffs’ lawyers commonly take cases on a contingency fee basis. As the ads say, “We don’t get paid unless you get paid.” Consequently, these lawyers must carefully evaluate whether they can win a case and would only sue 3M if they thought they could win.

Plaintiffs should be able to sue and win when companies have done wrong. Litigation helps us learn about corporate misbehavior. I frequently discuss the decentralized nature of knowledge. In a world of decentralized knowledge, we rarely know enough to call new lawsuits frivolous. Once the legal process discovers the relevant facts, we might conclude that the plaintiffs should not win.

If plaintiffs can win when we believe they should not, this is a problem with the law. We should fix the underlying problem instead of hoping lawyers will not file winnable cases.

My interest is not in narrow questions like why liability arises in this specific instance. The difference between law and legislation provides perspective on why law today can produce injustice. Today we think these are the same thing, but historically they differed.

Congress, state legislatures and city councils pass laws today. But as economist Friedrich Hayek observed, law used to differ from government legislation. This was most apparent with England’s common law.

Common law emerged and developed as freedom increased, providing rules to order peoples’ business and personal affairs. Rules help us anticipate how others will act, because people usually follow the rules. Starting a business would be impossibly risky unless an entrepreneur knew the meaning of leasing a building, purchasing supplies, and hiring workers.

The rules emerged out of a common understanding, not acts of Parliament. A relevant analogy today is the difference between a company’s employee handbook and the informal ways to get things done.

The common law evolved as judges decided cases involving new issues. There were multiple judges who were not bound by precedent; they could adopt or modify other judges’ rulings. If a party to a case did not like a judge’s ruling, they could argue their next similar case before a different judge. Through trial and error, decisions were fine-tuned into rules satisfying most parties.

In the 1800s, governments decided to write the common law into legislation. This sounds reasonable: any person could read the text of any law. Yet this also let legislatures change laws, sometimes to advantage special interests.

The law helps people deal with each other in peaceful, socially beneficial ways and should protect us from charlatans who break the rules. And our laws should assist us in responding to emergencies, not create unnecessary obstacles.


3 months ago

Economic consequences of the pandemic


Our lives and economy have been disrupted on an unprecedented scale by COVID-19. How do we calculate the societal impact? Costs are tricky because they involve actions not chosen. Economics helps bring the consequences of the pandemic into focus.

The full costs involve much more than just monetary impacts. Economics and life are about human satisfaction or happiness; economics calls this utility. Eating food, watching sports on TV, socializing with friends and visiting relatives all generate utility. Money is valuable only because it enables the purchase of goods, services and experiences.

Our purchases typically generate benefits in excess of the price paid. A weekend family trip might cost $500, yet we might judge that the trip, which could produce lifetime memories, generates more than $500 in value. Let’s say we judge the trip as worth $1,200. The difference between the value of the trip and what we pay for it, in this case $700, is called consumer surplus. The cost of COVID-19 must include lost consumer surplus.


Identifying costs requires careful thinking about the alternative, both for our daily life activities and business. Many factories, hotels, casinos and museums have closed. We are losing their production and services, some of which can be offset. Overtime can make up for lost production at a factory; vacations can be taken later. People are impatient so the delay generates a real cost.

The potential to shift production and vacations illustrates an economic law: we live in a world of increasing costs. The cost per week of extreme social distancing will increase with each passing week. One month’s production from a factory might be made up; one year’s lost output is largely gone forever.

The cost is the difference in value created by the new ways relative to the normal. The cost of working from home is the reduction in workers’ productivity. For college classes shifted online, the cost is the reduction in learning. We must also note any offsetting savings; working from home, for example, saves commuting costs.

A couple of patterns, I think, can be observed. First, many disrupted activities have high ratios of consumer surplus to consumer expenditures. Consider the NCAA basketball tournament and the Summer Olympics. Few people attend these events, but millions (or billions) watch them on TV. The value of sports and entertainment far exceeds their contribution to GDP.

Second, COVID-19 impacts have been very unequally distributed. For many, the disruption has been relatively minor, perhaps not chatting with coworkers about the March Madness tournament. By contrast, entrepreneurs have had businesses they poured their life, energy, and savings into building ordered closed indefinitely. College basketball players who trained and practiced for months missed out on March Madness.

Can the government offset the economic impact of the pandemic through a bailout? The answer is yes and no.

The proposed Federal coronavirus stimulus can soften the impact on hard-hit businesses and workers. Closed hotels, restaurants and airlines might be kept out of bankruptcy; their employees can continue to get paid and know they will have jobs when life resumes.

Paid sick leave during this emergency is also likely beneficial. A person with mild COVID-19 symptoms might decide to work to avoid missing a paycheck. Paid sick leave could let such persons stay home and slow the virus’ transmission.

The best hope for the stimulus is containing the economic impact. If hotels and restaurants go into bankruptcy, the banks which lent to them might be in trouble. Bankruptcies and layoffs could produce a collapse requiring years to recover from.

Yet a real limit to government assistance exists. Giving shuttered businesses and idled workers money does not help produce the goods and services which ultimately generate utility. Getting a check from the government does not make toilet paper available.

Economics teaches us that life involves tradeoffs. COVID-19 significantly threatens public health, while shuttering large parts of our incredibly complicated market economy threatens our prosperity. We need to soberly evaluate these tradeoffs to minimize the impact of the coronavirus.


4 months ago

Sports cancellations explained by economics


Our world has changed almost unimaginably in recent weeks. The cancellation of the NCAA basketball championships brought us, in my son’s words, “March Sadness.” Why has our nation responded so much more dramatically to COVID-19 than to earlier pandemics, including H1N1 just 11 years ago? Economics offers a couple of insights.

Things are certainly different. In 1919, hockey’s Stanley Cup was canceled due to the Spanish flu, not weeks ahead but just hours before Montreal and Seattle were to play Game 6. Montreal’s coach and five players had the flu; star Joe Hall died just days later.

The last global pandemic before COVID-19 was the 2009 H1N1 flu. Schools and colleges remained open, and sports and entertainment did not halt. I remember the pandemic’s start and how the danger seemingly never materialized, but that wasn’t entirely accurate. The CDC estimates that the U.S. experienced 13,000 deaths, 275,000 hospitalizations, and 60 million cases; worldwide deaths may have exceeded 500,000.


Sports leagues certainly made no effort to soldier through COVID-19. Why the change? For one, we are wealthier than ever, and safety is a luxury good. In economics, a luxury good is one whose share of consumer spending rises as income rises. Items like jewelry and vacation travel fit this description.

I use safety here because dozens of choices we make create risks to our life and health. Travel, diet and exercise, leisure activities, and work all matter. As people get richer, they are less willing in these choices to put themselves and their families at risk. Differences in attitudes toward risk can obscure this. Some rich people enjoy dangerous activities like skiing and flying a plane, while not all hypochondriacs are wealthy.

Because safety is a luxury, our actions during a pandemic will change. I would not expect a sports league to play until the players were dying, and we will close schools and cancel festivals more quickly than before. Furthermore, our greater knowledge of diseases affects our choices. Death tolls that were “acceptable” in the past are no longer so.

But more than just changing preferences are at work here. The economics of capacity constraints also matter.

We live in a world of scarcity, so we cannot get everything we want. We must produce the goods and services we value with limited resources. Production takes time and uses tools like factories, assembly lines and bulldozers which themselves take time to make. We cannot ramp up production as quickly as we might want.

The capacity constraints for COVID-19 are in the health care system: hospital beds, beds in intensive care units, ventilators, and supplies of drugs and antibiotics. According to the American Hospital Association, there are 925,000 staffed beds nationwide, with about 100,000 in intensive care. We might wish we had more beds, but capacity is costly; imagine maintaining hospitals solely for use in a pandemic.

This is why social distancing, canceling sporting events and festivals, and working from home are so important. Epidemiologists refer to this as flattening the curve, meaning the curve you get when plotting the number of new cases per day. In a pandemic, this curve can grow fast; enough growth in cases will overwhelm any capacity constraint.

Germany’s Chancellor Angela Merkel recently said that two out of three Germans may get COVID-19. Let’s suppose that proportion applies here. The timing of the cases determines whether capacity will be exceeded. If they occur over one or two years as opposed to one or two months, every patient who develops pneumonia can get the very best care possible today.
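The column’s point about timing can be made concrete with back-of-envelope arithmetic. The sketch below uses the figures cited above (925,000 staffed beds, two-thirds of roughly 330 million Americans infected); the hospitalization rate (5%) and average hospital stay (10 days) are illustrative assumptions I have added, not figures from the column.

```python
# Back-of-envelope "flattening the curve" arithmetic. Bed count and the
# two-in-three infection share come from the column; the hospitalization
# rate and average stay are assumed for illustration only.

POPULATION = 330_000_000
INFECTED = POPULATION * 2 / 3          # Chancellor Merkel's two-in-three scenario
STAFFED_BEDS = 925_000                 # American Hospital Association figure
HOSP_RATE = 0.05                       # assumed share of cases needing a bed
AVG_STAY_DAYS = 10                     # assumed average length of stay

def peak_beds_needed(outbreak_days: int) -> float:
    """Beds occupied at steady state if cases spread evenly over outbreak_days."""
    daily_new_cases = INFECTED / outbreak_days
    return daily_new_cases * HOSP_RATE * AVG_STAY_DAYS

for months in (2, 12, 24):
    need = peak_beds_needed(months * 30)
    status = "exceeds" if need > STAFFED_BEDS else "within"
    print(f"{months:>2} months: {need:,.0f} beds needed ({status} capacity)")
```

Under these assumptions the same number of total cases swamps the hospital system if compressed into two months but fits within it when spread over a year or two, which is the whole argument for slowing transmission.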

The dynamics are in a way similar to the seasonal flu. Healthy young adults face little risk from the flu. A flu shot helps them relatively little, but can keep them from giving their grandparents the flu. We would be wise not to be excessively brave in the face of what for some of us might be little danger.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

4 months ago

Socialism and the horrors of communism

Bernie Sanders’ pursuit of the Democratic presidential nomination continues to bring popular attention to socialism. Polls continue to reveal socialism’s considerable appeal to many Americans. Opponents of socialism often offer up the horrors of 20th Century Communism as a rebuttal. Is this history relevant today?

Received wisdom holds that young Americans know no history. So here’s the history lesson: communist regimes in the 20th Century produced over 100 million deaths, numerous famines, gulags, purges, and mass arrests. The Berlin Wall and Iron Curtain turned Eastern Europe into a virtual prison to keep citizens from fleeing.


Whether such brutality was an inevitable feature of communism is debatable. The Russian and Chinese revolutions occurred by force, not peaceful means. An international coalition, including the U.S., invaded the Soviet Union after World War I and helped foment civil war. External forces arguably pushed communist nations down a violent path.

Yet are Josef Stalin’s atrocities relevant for the Sanders campaign? I do not think so, and this strikes me as a poor rebuttal.

So why bring up this history? The answer lies, I think, in F.A. Hayek’s argument about “Why the Worst Get on Top” in The Road to Serfdom. Hayek was a distinguished economist (he won a Nobel Prize) who also had significant impact outside the academy. Britain’s Margaret Thatcher was strongly influenced by Hayek’s writings, and George H.W. Bush awarded him the Presidential Medal of Freedom.

Even socialist movements starting with a democratic tradition, Hayek argued, would be pushed to extremes. He had previously argued, along with Ludwig von Mises, that socialist economic planning would seriously founder. Socialists would need almost dictatorial powers to implement their economic plans. Once possessing such powers, leaders would face a choice “between disregard of ordinary morals and failure.”

If the initial socialist leaders would not use power to achieve their goals, they would lose out to less scrupulous rivals. As Hayek put it, “The old socialist parties were inhibited by their democratic ideals; they did not possess the ruthlessness required for the performance of their chosen task.”

Furthermore, Hayek thought that socialism must inherently be nationalistic, especially in a wealthy country; otherwise, all transfers would go to the world’s poor. Indeed, Mr. Sanders intends free college, Medicare for all, and government-guaranteed employment for Americans. Group demarcation is significant: thanks to centuries of living in small groups, humans often accept that the ends justify the means when advancing our group’s interests.

History shows, however, that Hayek’s argument was not totally correct. Great Britain was basically socialist under the Labour Party between 1946 and 1979. Free elections continued though, and eventually, Lady Thatcher was elected Prime Minister. France elected socialist François Mitterrand as president, and Sweden serves as Mr. Sanders’ favorite example of socialism; neither country descended into tyranny.

Liberalism distinguished European socialism from communism. As developed in England and exported to its American colonies, liberalism viewed individuals as the source of value in society. Previously people served kings and emperors; liberalism held that governments serve the people. Russia, China, and North Korea had no liberal tradition.

America’s democratic socialists, I think, accept that government exists to serve the people. They believe that Mr. Sanders’ economic programs would better enable all Americans to thrive, not just billionaires and millionaires. I strongly disagree with their economics, but accept their commitment to individuals as the standard of value, which implies that government cannot violate citizens’ fundamental rights.

I see Hayek’s tale as cautionary, not prophetic. Conservatives and libertarians who largely accept this story are deeply suspicious of the chain of events a socialist government would set in motion. We fear that when push comes to shove, democratic socialists will sacrifice individual rights.

Anecdotes like the following do not calm our fears. Philosopher Jason Brennan writes in Why Not Capitalism?: “A prominent Marxist philosopher was once asked how many people he would be willing to kill, during the Revolution, to bring about his favored goals. He responded, without blinking, ‘10%.’”

4 months ago

Billionaires and the good society

Democratic presidential candidate Bernie Sanders contends we should not allow billionaires. His view produced interesting debate exchanges with Michael Bloomberg, who has a net worth of $53 billion. Are billionaires good for America?

A first consideration is the source of the riches. Were they earned from a successful business, or by stealing from or swindling others? Bank robbers and con artists do not benefit our economy. For those who inherited their wealth, we should consider the original source of the fortune.

The voluntary nature of purchases means that wealth accumulated through business is earned. A customer buying financial information from Mr. Bloomberg’s company or a book from Amazon, whose founder Jeff Bezos tops the Forbes Richest Americans list (net worth: $116 billion), receives value at least equal to the purchase price. You should only buy a book on Amazon for $15 if it is worth at least $15 to you.


Mr. Bezos, of course, does not keep the $15. Amazon likely bought the book from a publisher, and must also pay its employees. Only part of the $15 is profit for Amazon, and only a portion of this goes to Mr. Bezos.

Companies sell goods and services at the prices they do because they also benefit. Market transactions make both buyers and sellers better off. Billionaire entrepreneurs become rich by taking a small slice of the value created by a large volume of economic activity.

Did Mr. Bezos and Mr. Bloomberg make their billions at the expense of their workers? No, although their businesses needed the efforts of many employees to succeed. In a market economy, not even the world’s richest person can force anyone to work for them. All employment is voluntary. Every Amazon and Bloomberg Business employee chose to work for the wage or salary offered. The employees presumably found these jobs attractive relative to their alternative options.

Economics shows that workers get paid based on their productivity. Competition between businesses bids up wages to this level. An employee paid less than their contribution can be hired away by other businesses.

Consumers typically value what they buy more than the price paid. For instance, that $15 book you bought from Amazon might be worth $25 to you. The extra $10 is called consumer surplus and is our share of economic prosperity. In a sense, billionaire entrepreneurs get rich by providing us consumer surplus.

Research shows that billionaire entrepreneurs get very little of the value they create. Nobel Prize-winning economist William Nordhaus found that firms capture just over two percent of the total value of their inventions. The rest goes mostly to consumers, but also to employees and suppliers. The two percent is for the business, not just the founder. The value Amazon has created for society, then, must be in the tens of trillions of dollars.

Billionaire entrepreneurs make our modern world enormously more prosperous and have done nothing legally or morally wrong. Still, a billion dollars is more than anyone could spend responsibly in a dozen lifetimes. Couldn’t we tax their wealth, as Mr. Sanders has proposed?

A wealth tax may not have the dire consequences some predict. Money cannot really be motivating the super-rich who continue to work hard. Mr. Bloomberg was already a multimillionaire when he was crawling under desks to hook up his information boxes for clients. Perhaps their motive is intrinsic, that they simply desire business success. Or they may care about relative standing, say moving up the Forbes list.

Both of these motives suggest that reasonably high taxes may not deter the rich from working hard. Does this make a wealth tax good policy? Not necessarily. America’s billionaires might move to nations with lower taxes. And billionaires’ wealth helps fund new innovations by their companies and risky startup ventures by others, as Forbes columnist John Tamny emphasizes.

Billionaire entrepreneurs benefit America. They become super-rich by making our lives better, not by taking from us. Mr. Bloomberg may not win the Democratic presidential nomination, but he need not apologize for the wealth he has helped create.

4 months ago

Is health care a right?

The debate over government’s role in health care and “Medicare for All” frequently revolves around whether health care is a human right. The argument runs: we establish government to secure our rights, so government should not deny Americans’ right to health care.

Health care is one of several economic rights, like rights to food, shelter and education. Arguments concerning health care generally apply to other economic rights.

The U.S. Constitution and Bill of Rights do not recognize economic rights, which ran counter to our founders’ political philosophy. The American Revolution was fought to secure “negative” rights, like life, liberty and property. Negative rights can be enjoyed simply by preventing actions that violate them, like assault or theft. If you observe the non-initiation of force rule, you are unlikely to violate negative rights.


Negative rights though do not ensure that people have the things needed to achieve happiness. A person unable to eat, obtain an education, or receive medical care will not have a high-quality life. The world would be better if everyone enjoyed a minimal standard of living. Does this imply that people have economic rights? The United Nations’ Universal Declaration of Human Rights affirms this.

A problem arises with economic rights. Health care must be produced, so some people must be obligated to provide this. An unchosen obligation undermines the voluntary basis of social interaction. To avoid forcing doctors to treat patients, government must pay the bills.

Opponents of economic rights see these obligations as problematic. The U.N. Declaration says that “all human beings are born free and equal in dignity and rights.” The obligation of some to provide health care for others seems to produce unequal rights.

Another problem concerns the level of care people are entitled to. Surely emergency medical care and treatments for life-threatening diseases should be covered, though probably not fertility treatments. Yet parenthood is a huge component of happiness. Plastic surgery seems excessive, but what about persons disfigured by accidents?

Are people obligated to live to minimize the burden they impose on taxpayers? Should people face dietary restrictions and exercise requirements? Should we ban dangerous recreational activities like skiing and softball? Such restrictions compromise the pursuit of happiness.

We need not make health care a right to provide coverage. Insurance already helps with this. Few Americans could afford $1 million in medical care themselves, but insurance can cover this through voluntary contracts, not taxes. Americans today are not denied emergency medical care due to an inability to pay.

Americans’ willingness to help those in need makes charity an alternative in providing medical care. Voluntary assistance provided a safety net before the modern welfare state, as documented by University of Alabama historian David Beito. In addition to numerous charities, today individuals can make appeals on GoFundMe.

I will leave the question of whether voluntary assistance could provide health care for another time. Instead, let’s consider two troublesome aspects of voluntary assistance.

Even with a diverse array of charitable assistance, people might fail to meet eligibility criteria. If churches provided charity medicine, for instance, low-income Americans who were not religious could go uncovered. Of course, people slip through the cracks of our government safety net: not everyone eligible for Medicaid enrolls. The gaps appear to be a fixable flaw of a government safety net but an inherent feature of charity.

Proponents of government assistance feel that asking for charity is demeaning. I agree that many people find asking others for help unpleasant. A right to health care keeps people from having to beg for help.

Health care is a component of modern prosperity, which must be continually created. A poor society, like America at the time of our founding, can enforce negative rights to life and property. We can better ensure that every American has health care and other economic necessities through public policies designed to allow prosperity than by enshrining economic rights.

4 months ago

The freedom to pump gas

Illinois State Representative Camille Lilly recently sponsored a bill to restrict self-service gasoline stations. New Jersey and Oregon already ban self-service gas, although Oregon exempts rural counties. Would creating jobs for gas station attendants be good economics?

There are many tasks we can either do ourselves or pay to have done. Consider food preparation. We can buy and cook food, pay someone to prepare food in our house, purchase cooked food from a restaurant or consume a meal at a restaurant.

Two factors affect our food preparation choices. First, hiring someone allows the use of specialized knowledge. A chef has likely been to culinary school, while a cook has training and experience. Of course, we probably won’t be hiring Emeril Lagasse to cook for us, and cooking at fast-food restaurants does not involve great culinary talent. Still, we can potentially hire skill we do not possess.


Second, having someone cook changes our cost. Cooking for ourselves takes our time, which is valuable. But we must pay someone to cook for us, whether at a restaurant or a chef willing to come to our house.

To succeed in hiring someone, we will want potential workers to find our position desirable. Many people aspire to be chefs and have their own cooking show; many fewer want a career flipping burgers. How much we enjoy cooking ourselves is also relevant.

Similar factors affect the pumping of gas. Although little expertise is involved, carelessness can cause spills and fire risk. According to the National Fire Protection Association, 3,000 vehicle fires at gas stations caused an average of one death and $8 million in damage annually between 2004 and 2008. Properly trained and attentive attendants could prevent some of these fires. A 46% decline in gas station fires since 1980, however, demonstrates that self-service has not fueled a crisis.

The price of gas will have to increase to pay attendants, and even more if states pass a $15 per hour minimum wage. Stations will need attendants on duty whenever open, and attendants will often be idle. Gas stations with lots of pumps will need to hire several attendants to use all the pumps simultaneously. Delays waiting for an attendant will increase our time cost of filling up.

We already know how most consumers weigh the inconvenience of pumping gas versus the costs of attendants. Full-service gas stations are not prohibited but almost all have been driven out of business. Drivers preferred the convenience and savings of self-service, even before the advent of pay-at-the-pump technology in the 1980s.

But wouldn’t jobs for attendants boost the economy? Illinois’ Representative Lilly thinks so. Desirability matters when considering creating or bringing back a class of jobs. How many people really want to pump gas all day in the snow and cold of Illinois or the heat of Alabama?

More significantly, labor is a scarce resource. Our economy is more prosperous when we produce goods and services using less labor. Pumping gas involves substituting our unpaid labor for paid attendants. Still, the money drivers save pumping their own gas will be spent on other things, perhaps food at convenience stores. This spending then creates jobs and provides things people value more.

Americans might have changed since the 1970s when self-service conquered the market. Today, many Americans would not try changing a tire, preferring to wait on and pay for roadside assistance. Podcaster Adam Carolla has humorously decried this trend. As an economist, I try to avoid judging the choices people make.

If people no longer want to pump their own gas, entrepreneurs can open new full-service gas stations. Or they can think outside the box and offer fuel-delivery service like the Birmingham startup company FuelFox. Balancing cost and convenience challenges all of us. The freedom to pump our own gas is one part of a prosperity-enhancing balancing act.

5 months ago

Death, taxes and prosperity

The only two sure things in life, according to the saying, are death and taxes. Should businesses profit when one of their employees dies? They can, and because doing so helps them avoid taxes, it reduces our prosperity.

I first read about “Janitors Insurance” or “Dead Peasants Insurance” in Harvard Professor Michael Sandel’s What Money Can’t Buy. Professor Sandel used the case to criticize how this affected businesses’ view of workers: “Creating conditions where workers are worth more dead than alive objectifies them; it treats them as commodity futures rather than as employees whose value to the company lies in the work they do.”


Corporations have legitimate reasons to take out insurance on top executives. A good CEO has a vision and strategy for a company, which subordinates may not fully grasp. The sudden and unexpected death of a leader can cost a company. The insurance industry creates value by covering such losses.

By contrast, firms’ financial interest in most employees is more modest. Employees are certainly worth more than their salary, because they know and are good at their jobs. Hiring and training a replacement takes time and money. The stake, however, is small relative to the insurance policies companies take out, like a $250,000 policy for a convenience store clerk. And companies keep the policies after employees quit or retire, so they are not protecting against losses from separation.

Janitors (and executives) Insurance policies are not for the employees’ benefit; they are “company-owned,” meaning that the business pays the premiums and is the beneficiary. Many businesses do offer life insurance as an employee benefit. Employees and their families are the beneficiaries of these policies.

Although Professor Sandel refers in the above quote to an employee being worth more dead than alive, no one accuses businesses of hastening employees’ deaths to collect Janitors Insurance.

Our tax code incentivizes businesses to purchase Janitors Insurance. Life insurance is an investment yielding a return on the premiums paid. The insurance company invests the premiums and shares some of the returns through a more generous benefit to make life insurance more attractive to potential customers.

Significantly for our story, life insurance death benefits are generally tax-free. This allows businesses tax-free investment income.

We might want to blame corporations for trying to pay less in taxes, but this would be misplaced. Public finance economics distinguishes between tax avoidance and tax evasion. Avoidance legally reduces taxes owed, while evasion involves lawbreaking. Economists assume that individuals and businesses will engage in avoidance. Indeed, numerous ads during income tax season encourage us to avoid paying too much. We control tax evasion through legal and moral sanctions.

Efforts like Janitors Insurance to avoid taxes divert businesses’ effort away from earning profits. The time and effort managers use devising new tax dodges cannot be spent making new or improved goods and services or lowering costs, activities which make our economy more prosperous. Avoiding taxes merely makes someone else pay for government. When businesses find avoiding taxes more profitable than producing goods and services, our economy grows more slowly.

Considerable investment went into developing Janitors Insurance. Corporations lobbied states during the 1980s for laws allowing the insuring of all employees, not just executives. And explaining the legality and wisdom of Janitors Insurance to top management must have taken many meetings.

We like taxing businesses because they appear rich. Yet the question of who truly pays business taxes is very complicated. Taxes can reduce worker pay, while many working Americans own stocks through a pension or IRA.

Because of these uncertainties and the enormous cost of making tax avoidance more profitable than production, many economists support lower business taxes. The Tax Cuts and Jobs Act of 2017 indeed cut the corporate tax rate from 35% to 21%. Time will tell, but this tax cut should reduce businesses’ use of Janitors Insurance.

5 months ago

So you want to start a business


Economic freedom allows people to buy, sell, invest and use their property to pursue life goals. Many Americans aspire to exercise this freedom to start a business. Where someone wants to open this new business makes a big difference for the burden of government licenses, regulations and taxes.

The Center for the Study of Economic Liberty at Arizona State University’s Doing Business North America report sheds light on this. The study extends the World Bank’s Doing Business project. The ASU study measures things like the number of approvals necessary to open a business and restrictions on hiring or firing workers.


The study focuses on barriers facing small and medium businesses, the types of firms which entrepreneurs start and try to grow. Most of these government rules are well-intended and likely beneficial. Still, failure to get proper permissions could at least temporarily shut down a new business.

The study tracks 63 different provisions for 115 cities across the U.S., Canada and Mexico, including Birmingham from Alabama. The World Bank project includes only New York and Los Angeles in the U.S. For economists studying economic freedom, Doing Business North America explores how business regulation varies across states.

The index includes six categories: starting a business, employing workers, getting electricity, registering property, paying taxes, and resolving insolvency. Bankruptcy is as important as starting a business or hiring because many new businesses fail. Indeed, many eventually successful entrepreneurs initially fail, like Henry Ford. If entrepreneurs cannot get a fresh start, they may never put the lessons learned from failure to use.

Cities’ scores range from 0 (worst) to 100 (best). A city with the best policy on each component would score 100, while a city with the worst policy on each would score 0. A score of 60 roughly corresponds to matching 60% of the best policies.
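The scoring scheme described above can be sketched as min-max normalization averaged across components: the best observed policy on each component scores 100, the worst scores 0, and a city’s overall score is the average. The cities and component values below are made up purely for illustration; this is my reading of the column’s description, not the report’s actual methodology.

```python
# A minimal sketch of a 0-100 policy index: normalize each component so
# the best observed value maps to 100 and the worst to 0, then average.
# All data here is hypothetical.

def normalize(value: float, worst: float, best: float) -> float:
    """Map a component value onto 0 (worst observed) .. 100 (best observed)."""
    if best == worst:
        return 100.0
    return 100.0 * (value - worst) / (best - worst)

# Hypothetical component data, oriented so higher = better policy.
components = {
    "starting_a_business": {"CityA": 90, "CityB": 60, "CityC": 30},
    "paying_taxes":        {"CityA": 80, "CityB": 80, "CityC": 20},
}

scores = {}
for city in ("CityA", "CityB", "CityC"):
    parts = []
    for values in components.values():
        worst, best = min(values.values()), max(values.values())
        parts.append(normalize(values[city], worst, best))
    scores[city] = sum(parts) / len(parts)

for city, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{city}: {s:.0f}")
```

Under this scheme a city with the best policy on every component scores exactly 100, and one with the worst on every component scores 0, matching the column’s description of the endpoints.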

The U.S. and Canada are two of the world’s freest economies according to the Fraser Institute, while Mexico ranks 76th. Not surprisingly then, the 39 Mexican cities rated occupy the lowest ranks. Although the U.S. and Canada have similar national economic freedom scores, the top American cities outrank Canadian cities; Canada’s top city, Halifax, ranks 53rd.

Across America, Oklahoma City ranks first with a score of 85, or about 15% off the best policies on average. Arlington, Virginia, Sioux Falls, Boise and Atlanta round out the top five. San Francisco is America’s lowest ranked city (77th) with a score of 59.

Birmingham places 22nd with a score of nearly 80. Birmingham’s business environment is much closer to Oklahoma City’s than San Francisco’s. Its highest ranks are in the bankruptcy (tied with many cities for 1st), employment, and taxes categories, with its lowest ranks in starting a business and electricity. How do other Alabama cities compare to Birmingham? The Johnson Center is working with the Center for the Study of Economic Liberty on this.

The impact of legal and regulatory burdens likely depends on an entrepreneur’s background. Many Americans can navigate rules; we know that things like building permits and business licenses exist and how to get them. We know how to hire a lawyer or accountant if needed. Americans with lower incomes and less formal education are often unfamiliar with legal compliance. Even reasonable rules restrict their economic opportunities and possibly deprive us of their innovative ideas.

The biggest limitation in measuring economic and business freedom, I think, involves uncertainty about obtaining permission. Some permits require significant paperwork and processing time but will eventually be issued. Permits for things like liquor licenses and new construction are granted by public boards subject to citizen pressure. Political pushback can be hard to predict. The difficulty of quantifying such uncertainty about securing permission limits measuring the full burden on entrepreneurs.

Entrepreneurs create the new products, services and innovations that increase our prosperity. Thankfully, freedom to start a business and succeed or fail based on your merits still exists in much of America.

5 months ago

Pandemics and quarantines


A coronavirus outbreak in China has sparked fears of a global pandemic, as communicable diseases do not respect national borders. Governments use quarantines and isolation to limit such threats, measures which libertarians find objectionable. Property rights offer helpful guidance here.

The 2019 Novel Coronavirus appeared last year in China’s Hubei province. The new coronavirus originated in animals but is now being transmitted from person to person. At the beginning of this week, there were over 1,000 confirmed cases and 80 deaths. Researchers learn more about the virus every day, and the Centers for Disease Control and Prevention’s website provides updates.


History tells of numerous deadly pandemics. The Black Death killed an estimated 30-60% of Europe’s population between 1347 and 1351. Cholera outbreaks in 19th Century America caused the evacuation of cities. The 1918-1919 influenza outbreak killed 50 million persons worldwide, including over 600,000 Americans.

The current outbreak demonstrates our need for medical research capacity. With all the vaccines and wonder drugs now available, we might think that our health challenge is providing the available medicine to all Americans. But new diseases must be researched.

If we had no further need for research, an economic case could be made for making drug companies sell at their production cost. This would make drugs much more affordable. But the loss of profits on successful drugs would effectively end privately funded medical research.

The current outbreak also reminds us of the value of effective public health services, typically a task for government. China, for instance, is trying to restrict travel for over 50 million people. George Mason University economist Tyler Cowen recently suggested that libertarians must embrace the need for government capacity to act decisively when needed. I agree wholeheartedly; limited government should be effective. We should ask government to perform only important tasks we cannot do ourselves. We benefit from government being good at these tasks.

Quarantines and isolation seemingly protect the group at the expense of individuals, which troubles libertarians. Libertarians see individuals as morally valuable; individuals should not be sacrificed for the group. Quarantines and isolation restrict individuals’ freedom to protect the group during a pandemic.

Property rights, I think, provide perspective. Economists frequently describe property rights as giving people an incentive to use their possessions productively. Property rights also provide a formula for making decisions in an orderly, peaceful society.

Property is frequently privately owned but can be jointly owned. Property owned by a government is often public, but privately-owned spaces can also be public, like shopping malls. A space becomes “public” when opened to everyone without specific permission. A person does not trespass when entering a public place.

Our society and economy require public spaces. We could not travel as we do or produce and trade goods and services without movement through public spaces. Property owners must willingly allow access to their property; owners can always refuse entry. Although we might consider travel a fundamental freedom, it must be limited by property rights.

The quarantine power comes from owners’ freedom to condition access to their property. Owners can restrict persons suspected of having a contagious illness from entering their property. Governments, which own many public spaces on our behalf, can also restrict access.

A quarantine option helps keep public spaces open. To see why, suppose owners who opened their space for public use were allowed no restrictions on access. Very few public spaces would likely exist.

Should possibly exaggerated pandemic fears prompt travel restrictions? Travel restrictions during the 2002-2003 SARS outbreak cost Asian economies an estimated $40 billion. Unfortunately, we do not know in real time which pandemic scares will prove overblown. Furthermore, fears are real even when danger never materializes. We respect people when we respect their fears and concerns.

The openness of public spaces enables our prosperous society. Property rights help harmonize our various and sometimes divergent interests. Quarantines represent the exercise of property rights, not a sacrifice of individuals for the good of society.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.


Could an asteroid destroy our economy?


An asteroid could wipe out all life on Earth, so yes. But what if we mined and brought an asteroid’s valuable metals to Earth? NASA’s plan to send a probe to an asteroid generated some out-of-this-world economic claims.

The asteroid belt between Mars and Jupiter may be the remnants of a proto-planet that broke up long ago. NASA plans to visit 16 Psyche, a heavy metal asteroid that astronomers believe is mostly nickel and iron but may contain precious metals like gold and platinum. The reported $10 quintillion (a quintillion is a 1 followed by 18 zeros) market value for Psyche’s metals seems to have been pulled out of thin air, or the vacuum of space.

Would the metals in 16 Psyche make everyone on Earth a billionaire? Or would metals markets crash and somehow destroy the world economy? Economics suggests neither extreme.


We live in a world of scarcity, meaning that our desire for goods and services exceeds our ability to produce them. Production requires raw materials and, more importantly, know-how. Knowledge lies behind technology, from agriculture to supercomputers; discovering productive uses for nature’s bounty creates natural resources. The heavy metal asteroids have existed since before the first humans but will only become resources if we learn how to make space mining a reality.

Resource availability frequently constrains production. We cannot make metals or petroleum out of nothing. The harder we must toil to acquire resources, the greater their cost because everyone must be compensated for their hard work.

If technologically and economically feasible, space mining will increase the supply of metals and lower their prices. This will enable production of more goods at lower prices. Our standard of living will unambiguously rise.

But might a collapse of gold and precious metals prices bankrupt investors and cause a depression? Gold prices would likely tank, and investors holding gold would suffer losses. At a price of $1,500 an ounce, all of the gold ever mined is worth about $10 trillion. This is a lot of money, but the Credit Suisse Research Institute estimates total world wealth at $360 trillion. Even a 90% drop in gold prices would not impoverish investors as a group.
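The $10 trillion figure is easy to sanity-check. The sketch below assumes a total historical gold stock of roughly 190,000 metric tons (approximately the World Gold Council’s estimate, not a number from this column); the $1,500 price and $360 trillion wealth figure are the ones cited above.

```python
# Rough check of the "$10 trillion" figure for all gold ever mined.
# The 190,000-tonne stock is an outside assumption (roughly the World
# Gold Council's historical estimate); the price comes from the column.
TONNES_MINED = 190_000
TROY_OUNCES_PER_TONNE = 32_150.7   # troy ounces in one metric ton
PRICE_PER_OUNCE = 1_500            # dollars per troy ounce

total_value = TONNES_MINED * TROY_OUNCES_PER_TONNE * PRICE_PER_OUNCE
print(f"All gold ever mined: ${total_value / 1e12:.1f} trillion")      # ~ $9.2 trillion
print(f"Share of $360 trillion world wealth: {total_value / 360e12:.1%}")  # ~ 2.5%
```

Under these assumptions gold is only a few percent of world wealth, which is the column’s point: even a steep price drop cannot impoverish investors as a group.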

The minerals from 16 Psyche would make some people wealthy, particularly the owners of Psyche’s minerals. Lower prices for cars, buildings, spaceships and other goods would increase investors’ effective wealth. The world’s economy would be more productive and investors (overall) more prosperous.

Furthermore, a decline in gold prices should not surprise investors. Expectations about demand and supply in the future influence commodity prices today. Market prices would fall as space mining becomes a reality.

Resource price adjustments also explain why 16 Psyche will not make everyone a billionaire. The news stories have a sliver of truth: $10 quintillion is over $1 billion for each of the world’s 7.5 billion persons. Yet if 16 Psyche holds ten times more gold than currently exists on Earth, gold’s price will fall far below $1,500 per ounce. Put another way, no one has $10 quintillion to pay for Psyche’s resources.
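The division behind the headline number is simple to verify, using only the figures quoted above:

```python
# The per-capita arithmetic: $10 quintillion split among 7.5 billion people.
PSYCHE_VALUE = 10 * 10**18        # $10 quintillion
WORLD_POPULATION = 7_500_000_000  # 7.5 billion

per_person = PSYCHE_VALUE / WORLD_POPULATION
print(f"${per_person:,.0f} per person")   # ~ $1.3 billion each
```

The arithmetic holds, but only at today’s prices, which is exactly what an asteroid-sized new supply would overturn.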

Will space mining prove feasible? The scientific and engineering questions are well beyond my expertise. Two recently founded space mining companies, Planetary Resources and Deep Space Industries, have highly respected scientists and smart investors involved. If Google’s Larry Page and Eric Schmidt are investing in a venture, it probably has some chance of success.

Perhaps space mining’s biggest contribution is illustrating the innumerable possibilities of the future. The fear that we will run out of resources remains highly persuasive, even though objective measures like the Cato Institute’s Simon Abundance Index demonstrate otherwise. Ultimately, knowledge and discovery create the resources we use and the new inventions that improve our lives. Undiscovered discoveries, however, are inherently hard to foresee. To envision the enormous potential for future innovation, just remember asteroids and space mining.



Animal welfare and economics


Dog owners in Canberra, Australia, must now walk their companions daily or face a $2,700 fine, due to a 2019 animal welfare law recognizing dogs as sentient beings. Does requiring the humane treatment of animals restrict the property rights of humans and the functioning of economies?

I will not let rain, sleet, snow or dark of night deter me from walking my dogs. Dogs’ unbridled enthusiasm for a walk is so marvelous that I never want to let them down. I will confess, though, that I’ve violated Canberra’s new law.


Political philosophers’ theories of rights describe how humans should treat each other. Humans have the capacity for rational, deliberative action, and political rights establish the conditions for exercising our rational capacities. Although this is beyond my professional expertise, based on my understanding I would be reluctant to say that animals have rights.

Nonetheless, I think animals should be treated humanely and ethically, even though people disagree about what exactly constitutes humane treatment. And standards for humane treatment have changed over time. In the 1800s, owners could beat horses or mules for failing to do work.

Some critics dismiss animal rights when proponents do not extend rights to insects. An advocate willing to swat mosquitos rejects what critics see as the logical extension of animal rights. I think humans can hold ourselves to whatever standards of treatment we want. We can have inconsistent standards across species and decide to treat cute animals better. And we need not compromise our health and safety; we can, for instance, spray mosquitos.

The most relevant animal treatment issues today involve hunting and eating meat. My personal opinion here is irrelevant. But standards of care for animals have increased over time, so I can imagine hunting and eating meat being banned someday.

Do requirements for humane treatment compromise the property rights that provide the basis for our economy? As a free-market economist, I normally defend peoples’ economic freedom to use their property as they wish. Shouldn’t economic freedom include the freedom to organize dog fights?

Perhaps I am rationalizing, but I do not believe so. Property rights are ultimately rights to use things we own in certain ways. Ownership of animals may entail fewer rights than ownership of, say, furniture. Parents have more limited decision rights for their children than for themselves and can lose parental rights for abuse or neglect. Since standards of humane treatment can be inconsistent, we may decide that killing pigs or cattle but not dogs or horses for food is OK.

Would the banning of meat decimate agriculture? The impacts would be significant; the U.S. has over 90 million cattle, 70 million hogs and 230,000 poultry farms. The 2.3 million Americans working in agriculture will likely continue to do so, probably growing crops instead of raising animals. We have already seen a more radical transformation, however, as 80% of Americans worked in agriculture in 1800.

Banning meat would impose losses on ranchers for the poultry and livestock they own. However, meat is unlikely to be banned until many more Americans first become vegetarians. Fewer meat-eaters would reduce livestock populations and prices, reducing the losses from an eventual ban.

Animals, though, may not benefit from vegetarianism. The vast majority of America’s 70 million hogs are alive today because they are being raised for market. Most farm animals would not exist if we did not eat meat.

Is it better for an animal never to be born than born and raised to be eaten? Population ethics wrestles with a version of this question. China’s one-child policy controlled population growth, but millions of children were never born. Does a higher quality of life for those lucky enough to be born offset the lives that never were?

Humanity is arguably making moral progress: slavery has been abolished, war is becoming rarer and we insist on humane treatment of animals. Ownership, limited by norms of humane treatment, leads humans to care for animals. Evolving standards of humane treatment need never cause economic calamity.



Prosperity and inequality


The world has achieved an unprecedented level of prosperity. Economist Deirdre McCloskey has labeled this the Great Enrichment. For the first time in human history, standards of living for ordinary people – as opposed to emperors or kings – have risen above subsistence.

Historical estimates of Gross Domestic Product (GDP) per capita, economists’ preferred measure of living standards, dramatically document the Great Enrichment. Economist Angus Maddison began this project, now continued at the University of Groningen Growth and Development Centre. The dollar figures mentioned here are in 1990 dollars, adjusted for inflation, and comparable across countries.


Professor Maddison estimated world GDP per capita in 1 AD to be $445. One thousand years later, it was $436, meaning complete stagnation for a millennium. Slow progress then began, with GDP rising to $566 in 1500 and $667 in 1820 before really taking off, reaching $875 in 1870, $1,525 in 1950 and $6,049 in 2001.
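Taking only the Maddison estimates quoted above, a back-of-the-envelope calculation (my illustration, not part of the Maddison project) makes the hockey-stick shape explicit by converting the level figures into implied average annual growth rates:

```python
# Implied average annual growth rates from the Maddison GDP-per-capita
# estimates quoted above (1990 dollars): start value, end value, years.
periods = {
    "1 AD to 1000":  (445, 436, 999),
    "1000 to 1820":  (436, 667, 820),
    "1820 to 2001":  (667, 6049, 181),
}

for label, (start, end, years) in periods.items():
    rate = (end / start) ** (1 / years) - 1   # compound annual growth rate
    print(f"{label}: {rate:+.2%} per year")
```

A millennium of essentially zero growth, eight centuries at a twentieth of a percent, then over one percent per year: small-sounding differences in annual rates compound into the Great Enrichment.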

This represents an incredible improvement in the quality of billions of human lives. The World Bank defines extreme poverty as living on $2 per day or less, about $730 per year; even the 1820 world average of $667 falls below that line. Essentially, the world was poor until the middle of the 19th Century, and little progress was occurring: in most countries, a century could pass with no meaningful improvement in living standards.

The Great Enrichment began in Great Britain and the Netherlands around 1700. Britain and Holland remained the two leading world economies until the U.S. caught up in 1870 and became the world’s leading economy before World War I.

Over the past 50 years, prosperity has extended across the globe. China and India have received the most attention. Living standards have increased by factors of nine in China since 1976 and four in India since 1990. Prosperity in the world’s two most populous nations has really boosted global GDP.

Africa missed out on growth during the 20th Century. But numerous African nations are now becoming significantly richer. Since 2000, living standards have increased by 50% in Kenya, over 100% in Namibia, Sudan and Tanzania, and 600% in Angola.

The Great Enrichment provides perspective on America’s current concern with income inequality. Enormous differences in wealth certainly exist. Jeff Bezos is worth over $100 billion, while the average household is worth $97,000. Several Democratic presidential hopefuls propose ambitious plans to reduce inequality.

Redistributionist policies take the existence of wealth as given. Economist John Kenneth Galbraith argued in The Affluent Society that since we had become a prosperous nation, we could now afford to address societal ills. This reasoning has become received wisdom.

Economic history, by contrast, shows that today’s wealth is the exceptional condition. America has billionaires, and a billion dollars is more money than one could spend in several lifetimes without wasting it. Yet, even America’s poor households enjoy a standard of living that kings and emperors of the past would envy.

The Great Enrichment has made the average person wealthy for the first time. Unfortunately, prosperity has not been equally shared. Perhaps human society cannot produce wealth without inequality. Wake Forest University philosopher James Otteson offers this perspective:

What presents us with an uncomfortable dilemma is that the clear lesson from human economic history seems to be that the only way we have ever discovered to enable substantial numbers of people to rise out of poverty is a set of political-economic and cultural institutions that also engender inequality.

Many Americans believe in American exceptionalism, that our nation is somehow better than others. America helped drive the Great Enrichment and was the first nation founded on the principle of freedom. Yet some of America’s founders owned slaves. I’ll let others debate if we’re exceptional.

America’s accomplishments are due to our laws and constitution. I do not believe that America is immune from the forces shaping social interaction among humans. The American flag and the Pledge of Allegiance do not guarantee prosperity.

Just as freedom must be protected by every generation, prosperity must continue to be produced. If a quest to address income inequality compromises the conditions necessary for prosperity, we might once again find ourselves all equally poor.
