How to Leave the Euro: A Practical Roadmap for Italy

The leaders of Italy’s new populist government say they do not want to leave the euro. Except, no one believes them.

They are on record as saying the euro was a bad idea. Until they were stopped by the President of the Italian Republic, they tried to appoint Paolo Savona, a well-known euroskeptic, as finance minister. Instead, they appointed him European Affairs Minister. In a recent book, Savona called the euro a “German cage” and wrote that “we need to prepare a plan B to get out of the euro if necessary … the other alternative is to end up like Greece.” In short, it is not surprising that the new government’s commitment to the euro is in doubt.

Skeptics raise technical objections whenever the possibility that some country might leave the euro arises. Abandoning one currency and introducing a new one is just too difficult, they say. They point to the years of planning that went into launching the euro in the first place. They warn that a country changing currencies would face a host of problems, ranging from panicked runs on banks to the need to reprogram vending machines.

Ultimately, though, such technical difficulties are surmountable. If Italy decides to leave the euro, the key to success will be to learn from the experience of the many countries that have changed currencies in the past. Here is a practical roadmap, drawing on the imaginative, pragmatic devices that other countries have used to ease the introduction of a new currency.

Lesson 1: Use a Temporary Currency
Those who worry about technical barriers to changing currencies often point to the time it would take to print notes, mint coins, and put them into circulation. That time can be greatly shortened by using a temporary currency during a transition period.

Latvia’s exit from the ruble area in the early 1990s is a case in point. First, in May 1992, authorities introduced a Latvian ruble as a temporary replacement for the old Soviet ruble. Only months later, after due preparation, did they put the new permanent currency, the lats, into circulation. Distinguished by some of the most handsome coins in Europe, the lats served for more than 20 years, until Latvia switched again, joining the euro.

A temporary currency solves several problems. For one thing, it can be run off quickly and cheaply without all the elaborate counterfeiting safeguards of a modern currency. It won’t be around long enough to be worth counterfeiting.

The speed record for introducing a temporary currency seems to have been set by the Soviet occupiers of eastern Germany after World War II. When the Americans, British, and French replaced the old reichsmark with the deutschmark in 1948, the Soviets were caught flat-footed. As now-worthless Nazi-era currency flooded into the Soviet zone, occupation authorities hastily printed paper stickers, which they stuck on a limited number of reichsmark notes to serve as a temporary currency. Those were then replaced with a permanent currency, which became known as the ostmark and lasted until reunification.

To make things easy, there is no real need to issue coins in the temporary currency. For example, when Kazakhstan switched from the ruble to the tenge in the early 1990s, there was a period in which there were no coins in circulation. Their place was taken by grimy little fractional banknotes, which were a nuisance, but got the job done. Nicely minted tenge coins came later.

A temporary currency can also serve the useful purpose of absorbing public disrespect. If the new currency is expected to depreciate, as would probably happen to any currency Italy introduced under current conditions, people are going to think up some derisive name for it and treat it with contempt. For example, a temporary Belarus ruble issued in 1992 was popularly called the zaichik, or “bunny,” after a whimsical picture on the one-ruble note. The name fueled many jokes until a new, somewhat more respectable Belarus ruble (without the bunny) was issued later.

Giving the temporary currency the same name and denomination as the old one, as was done in Latvia and Belarus, simplifies accounting, signage, and other technical matters. Following that example, Italy might begin with a temporary Italian euro. A permanent new lira could come later, after the government established credible monetary management.

Lesson 2: Do Not Fear Parallel Currencies

Introducing a new currency would be harder if it had to replace the old one all at once, but there is no reason it would have to do so. Many countries have eased the transition from one currency to another by allowing parallel circulation during a transitional period.

Sometimes, the old currency can be left in circulation for a while at a fixed exchange rate. That is how the euro was first introduced, and the same practice has been followed as new members later joined the euro area. Temporary parallel circulation means that no one has to spend January 1 standing in line at the bank, and it gives technicians time to update parking lot gates so that no one is trapped in an underground garage with the wrong coin in their pocket.

Parallel currencies with a floating exchange rate are also possible. Floating-rate parallel currencies often develop spontaneously in countries experiencing hyperinflation. While the ruble or peso or whatever remains the legal tender, people start using a hard currency such as the dollar or euro alongside it. Typically, the rapidly depreciating local currency continues to be used as a medium of exchange (at least for small transactions) while the more stable foreign currency serves as a unit of account and a store of value.

In the hyperinflation case, parallel circulation often ends with the introduction of a new local currency having a fixed value relative to the unofficial parallel currency. For example, in the early 1980s, the value of Argentina’s peso was undermined by hyperinflation. As the peso became less and less dependable, people began to use U.S. dollars as a unit of account and store of value. In 1985, the Argentine government temporarily stopped hyperinflation by replacing the peso with a new currency, the austral, which it declared to have a fixed value of one dollar. When resurgent hyperinflation destroyed the austral in the early 1990s, it was ditched, in turn, in favor of a new peso, again declared to be worth one dollar. Estonia in 1992 and Bulgaria in 1997 are other examples in which parallel currency circulation preceded the introduction of a new, stable national currency.

If a country like Italy decided to exit the euro now, the situation would be different. Instead of moving from a less to a more stable currency, it would intentionally be abandoning the too rigidly stable euro in order to achieve desired inflation and depreciation. That would make it a little like running the Argentine movie in reverse. Instead of the euro gradually coming into broader circulation as the national currency became less stable, a temporary floating currency could be introduced gradually alongside the euro, to be replaced only when the economy began to stabilize.

What would motivate people to use the new currency? One way to bring it into circulation would be to introduce it first where people have no choice. For example, if Italy were to leave the euro area, it could begin by printing a new, temporary Italian euro, which at first would be used only to pay government salaries, pensions, and interest on the national debt, and would also be accepted for payment of taxes. Gradually, its use could be extended, pushed along by administrative means as needed.

Lesson 3: Default Early, but Not Often

As a further barrier to exit from the euro, skeptics often point out that devaluation would very likely force default on any obligations that could not be redenominated to the new currency. True enough. But then, if a country could service its obligations, it wouldn’t be thinking about exit and devaluation in the first place, would it? The fact is, if a country is insolvent, it is going to have to default in one way or another. The only question is whether default with a change in currency is preferable to default within the old currency.

In general, the best strategy for anyone who can’t meet financial obligations is to default early, but not often. That means stopping payments on obligations as soon as it is clear that they cannot be met in full. The time to sort out any partial payments is after default, not before. A prolonged period in which default appears increasingly certain, but has not yet occurred, only encourages bank runs and capital flight.

One form of “defaulting often” is to ask for evergreening. Evergreening occurs when creditors make additional loans to debtors they know to be insolvent, with the intent that the proceeds of the new loans be used to keep payments current on the old ones for a little while longer. The trouble is, that is rolling a snowball down the hill. The problem gets bigger and bigger the longer you put off facing up to it.

Another mistake is to try to negotiate terms of default in advance. Negotiating a new “haircut” with your lenders every few months, as Greece did in the aftermath of the global financial crisis, is another form of defaulting often. There are three problems with this strategy. First, the debtor’s bargaining power is likely to be weaker if nonpayment is only a threat, not a fact. Second, the debtor is subject to pressure to accept terms like mandatory austerity measures that reduce its actual ability to repay. Third, there may simply be no way to know what kind of partial payment is realistic until some time has passed and the dust has settled.

Not all exit problems are mere technicalities

Not all barriers to switching currencies are mere technicalities. The combination of default and devaluation will cause real pain to foreign creditors, and the defaulting country is likely to suffer as a result. Fortunately, in today’s world, creditors are unlikely to use force, as France and Belgium did when they occupied the Ruhr in 1923 in an attempt to collect defaulted German reparation payments. The invasion helped set the stage for World War II, and no one has tried the tactic since. Still, that does not mean defaulters can expect to get off unscathed.

For one thing, defaulters may end up with limited access to international credit markets, as Argentina found following its 2001 default. Default by a member of the euro area, followed by an attempt to redenominate debts into a devalued currency, would create thorny legal problems in view of the many financial and nonfinancial firms that operate across national borders. National courts of creditor countries might attach any extraterritorial assets of the defaulting government, or of its citizens, that they could get their hands on. They might even make it hard for government officials or private citizens to travel. No one really knows.

Furthermore, devaluation and exit from the euro would not solve all macroeconomic problems, or perhaps we should say, it would solve one set of problems but would raise others. If the operation were carried out by a new populist government that abandoned structural reforms and budget discipline, it could be hard to achieve lasting stability. Again, Argentina is a cautionary tale. Despite a quick return of economic growth following its default, the government found it hard to control inflation and was eventually voted out of office.

In short, there are many reasons why no euro member has yet left the currency area. The pros and cons of doing so remain open to debate. However, as the debate goes on, let’s hear more about the real issues and less about technicalities like printing banknotes and reprogramming vending machines. If Italy, or anyone else, really wanted to introduce a new currency, it could do so.

A version of this post previously appeared at Milken Institute Review

High-Risk Pools, Reinsurance, and Universal Catastrophic Coverage

The most challenging problem in health care policy is how to deal with the very tip of the cost curve — the 10 percent of the population who account for two-thirds of all personal health care spending; or among them, the 5 percent of the population who account for half of all spending; or among them, costliest of all, the 1 percent who account for a fifth of all spending.

The challenge is made harder still by the fact that insurance — in the traditional meaning of the term — is not an option. A large percentage of cases at the upper end of the curve fail to meet two standards of insurability.

One is that an insurable risk must be the result of unpredictable chance. In reality, though, many individuals suffer from chronic conditions like diabetes that make them certain to require costly care for the rest of their lives. Others have genetic markers that make them medical time bombs from the point of view of private insurers.

A second standard of insurability is that the actuarially fair premium — one high enough to cover the expected value of claims — must be affordable. However, an actuarially fair premium for many people with costly chronic conditions would exceed their entire income.

There are several partial solutions to the noninsurability of high-end health care risks. Guaranteed renewal requires insurers to continue to issue policies to those who become ill, provided there is no break in coverage. Guaranteed issue, which requires insurers to accept any applicants, regardless of pre-existing health conditions, is an even stronger step in the same direction. Community rating requires insurers to charge the same premium, based on average claims, to everyone in a general category regardless of their health status.

The Affordable Care Act uses a combination of these requirements to ensure that people can buy health insurance at a standard price regardless of pre-existing conditions. However, doing so creates problems of its own. For one thing, these requirements make the system vulnerable to adverse selection since healthy people can remain uninsured and buy into the system only when they become ill. Also, even with community rating, spreading health care costs evenly over an entire population can mean unaffordably high premiums for people with low incomes.

That brings us to the subject of this commentary — policies that aim to cut off the top end of the cost curve in order to make health care more affordable and accessible for everyone else. High-risk pools and reinsurance are two ways of doing this. After reviewing the way these approaches work, we will explain how their benefits can be realized through a policy of universal catastrophic coverage (UCC).

How high-risk pools and reinsurance work

In a high-risk pool, people whose health conditions make them uninsurable are placed in a separate category for which claims are paid by government. Prior to the ACA, according to a summary by Karen Pollitz of the Kaiser Family Foundation, 35 states had some type of high-risk pool. All of the pools, by design, paid out more in claims than they brought in as premiums. The shortfall was made up, directly or indirectly, from general tax revenue.

The hope was that if insurance companies were relieved of the obligation to cover the highest-risk patients, they would lower premiums for the patients they continued to cover. However, because the pools were often underfunded, that hope was not always realized. In many states, high premiums, high deductibles, and waiting periods discouraged enrollment. As a result, the pools covered only 1 to 10 percent of the population that had insurance in the individual market — well below the 27 percent that Pollitz estimates should theoretically have been eligible. The state-run high-risk pools have now been made redundant by the guaranteed issue and community rating requirements of the ACA.

Reinsurance takes a different approach. Instead of segregating high-risk consumers into a separate pool, reinsurance leaves everyone in the general risk pool, while a government reinsurance fund reimburses insurers for claims above an attachment point and up to a cap. For example, suppose the attachment point is $50,000 in annual claims and the cap is $1 million. An insurance company would then get a reimbursement of $40,000 for a patient with $90,000 in claims and a maximum reimbursement of $950,000 for patients with claims of $1 million or more.
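The reimbursement arithmetic above can be sketched in a few lines of Python, using the text’s illustrative $50,000 attachment point and $1 million cap (the function name is my own):

```python
def reinsurance_reimbursement(claims, attachment=50_000, cap=1_000_000):
    """Government reimbursement for one patient's annual claims.

    The reinsurance fund covers the portion of claims above the
    attachment point, but pays nothing on claims beyond the cap.
    """
    covered = min(claims, cap)           # claims above the cap are not reimbursed
    return max(covered - attachment, 0)  # only the excess over the attachment point

# A patient with $90,000 in claims triggers a $40,000 reimbursement.
print(reinsurance_reimbursement(90_000))      # 40000
# Claims at or above the $1 million cap max out at $950,000.
print(reinsurance_reimbursement(1_400_000))   # 950000
```

A patient whose claims stay below the attachment point generates no reimbursement at all; the primary insurer bears that cost in full.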

Under reinsurance, private insurers continue to cover the high-risk consumers, but the reimbursements make it possible for them to offer lower premiums to all their customers. As in the case of high-risk pools, the actual reduction in premiums would depend on the level of funding for the reinsurance program.

The traditional form of reinsurance is retrospective, in that no effort is made to identify high-risk consumers in advance. Under retrospective reinsurance, a large share of reimbursements would normally represent the claims of people with identifiable chronic conditions or genetic predispositions, but those claims would be treated no differently than those of healthy individuals who experienced high expenses because of unforeseeable accidents or illnesses.

A newer variant, prospective reinsurance, operates somewhat differently. Private insurers continue to issue policies to all applicants, regardless of health condition. However, when particular applicants are identified as being at high risk, the primary insurer “cedes” a part of the risk to the reinsurer, along with part of the premium. For example, the primary insurer might cede risks over $50,000 per year to the reinsurer in exchange for 90 percent of the premium. The reinsurer would then cover claims above the $50,000 attachment point, while the primary insurer would cover lesser claims in return for the remaining 10 percent of the premium.

Prospective reinsurance is, in effect, a hybrid. It resembles a high-risk pool in that it places consumers who are most likely to have high claims in a separate pool. Those high risks are covered by government funds, as under reinsurance. However, the whole process goes on out of sight. Because consumers do not know when a particular claim is being reimbursed by the government, prospective reinsurance schemes are sometimes called invisible high-risk pools.

The American Health Care Act of 2017, which passed the House but failed in the Senate, included a provision for prospective reinsurance, although that provision was criticized as underfunded. Despite the failure of the AHCA, several states are experimenting with the invisible risk-pool model, or are considering doing so.

Pros and cons

Reinsurance and high-risk pools in all their forms have pros and cons. On the positive side, all such schemes have the potential to handle costly chronic conditions that would otherwise be uninsurable. By taking the weight of the most expensive cases off the shoulders of private insurers, they offer the potential to hold down the premiums faced by other consumers who face only average risks. However, reinsurance and high-risk pools also have their drawbacks.

One obvious drawback is that they are expensive. Covering the costs of just the top 5 percent of health care spenders would cost something like $1.5 trillion as of 2018, or about 7 percent of GDP. If high-risk pools or reinsurance are not adequately funded, they do not lift enough of the burden from the broader individual market. In that case, premiums remain high for consumers with average risks. High premiums tempt healthy consumers to go without insurance, and that, in turn, exacerbates the problem of adverse selection, driving premiums higher still.

A second problem is that high-risk pools and prospective reinsurance require advance screening for health risks. Even if everyone is ultimately guaranteed coverage in one form or another, screening raises administrative costs. After the experience of guaranteed issue under the ACA, with no intrusive questionnaires, health histories, or physicals, a return to widespread screening for pre-existing conditions or genetic predispositions would probably encounter consumer resistance. Of the alternatives we have discussed, only retrospective reinsurance avoids the screening problem.

A third problem is that even if high-risk pools or reinsurance succeeded in lowering average premiums, many low- and middle-income consumers would be likely to find that coverage was still unaffordable. For example, if high-risk pools or reinsurance absorbed half of all personal health care expenditures, the remainder would still work out to an average close to $20,000 per year for a family of four. Families would have to pay those costs through a combination of premiums, deductibles, and copays. Health care expenses of $20,000 per year would be equivalent to some 40 percent of median household income — affordable for some, but by no means for all.

To overcome these problems, we need to lift the burden of the top tranche of health spending in a way that provides a reliable source of funding, that minimizes administrative complexities, and that takes household income into account. Fortunately, such an alternative is available — universal catastrophic coverage.

The UCC solution

The basic idea of universal catastrophic coverage is simple. UCC aims to protect all Americans against financially ruinous medical expenses while preserving the principle that those who can afford it should contribute toward the cost of their own care. For people below a specified low-income threshold, UCC would pay health care costs in full. Everyone else would get a UCC policy with a deductible scaled to their eligible income, that is, to the amount by which annual household income exceeds the low-income threshold.

For example, suppose the low-income threshold is set at $25,000 for a family of four, roughly equal to the official poverty line, and the deductible is set at 10 percent of eligible income. A family with income of $25,000 or less would then have no deductible; one with income of $75,000 would have a deductible of $5,000; and one with income of $400,000 would have a hefty deductible of $37,500. Regardless of income, once a household hit the deductible, their UCC policy would pay the rest of their medical expenses.

Don’t be surprised if this sounds a little like what we have been discussing all along. Universal catastrophic coverage belongs to the same general family of policies as high-risk pools and reinsurance. In its essence, UCC is simply retrospective reinsurance with an attachment point that is scaled to household income.

UCC would be funded using money that is already being spent on other federal health care programs, including Medicaid, ACA subsidies, and the tax expenditures that subsidize employer-sponsored insurance. For details, see my earlier post, “Could We Afford Universal Catastrophic Coverage?” As that post explains, there is a good chance that a properly designed UCC plan that included a range of cost-saving measures would actually reduce federal health care expenditures relative to their current level.

For administrative simplicity, UCC policies could be issued directly by the government, for example, as an extension of Medicare. Alternatively, the policies could be issued by private insurers, subject to coverage guidelines, as under Medicare Advantage. Those insurers would pay providers and then be reimbursed by a government reinsurance fund.

Because payments for catastrophic expenses are made retrospectively, UCC would avoid the need to screen for health risks. Everyone would be issued a UCC policy automatically. Most people would probably pay their share of noncatastrophic expenses out of pocket, possibly with the help of dedicated health savings accounts. Households with very high incomes and deductibles could, if they chose to do so, buy private supplemental insurance to cover expenses not reimbursed by UCC.

Finally, UCC would be affordable for everyone because of the way deductibles are scaled to income. People below the low-income threshold (very roughly, those now eligible for Medicaid) would get first-dollar coverage for all their health care needs. A package of basic preventive and primary care services would be exempt from deductibles for everyone, regardless of income. Middle-income households would be responsible for their own fair share of noncatastrophic expenses, but in most cases, that share would be less than the premiums, deductibles, and copays they now pay for employer-sponsored insurance or policies bought on the ACA exchanges.

It is true that, in theory, all these issues of affordability and insurability could be addressed separately. You could use a combination of guaranteed issue and community rating to spread risks among all covered households. You could use a combination of premium subsidies and cost-sharing subsidies to limit the maximum cost to families with moderate incomes. You could use individual and employer mandates to fight adverse selection and enlarge the risk pool. You could run a completely separate program, like Medicaid, for households below the low-income threshold. You could do all that — but the result would be a cumbersome kludge that drove up administrative costs and still left millions without coverage. We know that because we tried it with the ACA, and look what we got.

So why not keep things simple by providing high quality, affordable health care for all with universal catastrophic coverage?

Reposted with permission from

Is Single-Payer the Right Way Forward for Health Care?

America’s progressives are right when they say we need a health care system that guarantees everyone affordable access to essential care, asks everyone to pay their fair share of the cost, but not more, and makes health care transparent, efficient, and consumer-friendly. But are they also right about the best way to get there?

They ask, why don’t we just adopt a single-payer health care system like every other rich democratic country has? Why can’t the government just pay everyone’s medical bills and be done with it?

These are understandable questions, but they oversimplify. If we look closely at the world’s top-rated health care systems – those in countries like the UK, Australia, and the Netherlands – we find that they are not true single-payer systems. Compared with proposals like Bernie Sanders’ Medicare for All, other countries’ health care systems are much more decentralized, and stop well short of paying for all care for everyone.

To get a health care system that is universal, affordable, fair, and efficient, the United States needs to learn from other countries’ experience and adapt it to specific American circumstances. Universal catastrophic coverage offers a more plausible model than an idealized single-payer system that exists nowhere else.

For a full discussion, check out the slideshow of my July 5th presentation to the Cracker Barrel Society of Northport, Michigan.

Work Requirements Are the Newest Front in the War on the War on Poverty

On July 12, President Trump’s Council of Economic Advisers released a report titled “Expanding Work Requirements in Non-Cash Welfare Programs” in response to an April 2018 executive order on reducing poverty in America.

The CEA’s basic argument is simple: The war on poverty has been won — properly measured, the poverty rate is just 3 percent, a historic low. However, victory has left an ever-increasing number of able-bodied working-age adults dependent on in-kind welfare, especially on Medicaid, SNAP (formerly food stamps), and housing assistance. This problem can best be addressed by expanding work requirements for non-cash welfare programs. Work requirements will encourage self-sufficiency, strengthen the economy, and ultimately benefit the welfare recipients themselves.

The CEA is right, at least in part, in its critique of current in-kind public assistance programs. However, the picture it draws is misleading in a number of ways, and the case it makes for work requirements is unconvincing. There are better ways to address the weaknesses of the current welfare system. Here are some questions that need to be addressed before undertaking a wholesale expansion of work requirements.

Is the war on poverty really over?

Let’s begin with the CEA’s controversial assertion that the war on poverty is over, as implied by the following chart from the report.

The blue line in the chart shows the official poverty rate, while the red line shows a consumption-based poverty measure devised by Bruce D. Meyer of the University of Chicago and James X. Sullivan of Notre Dame. By using consumption reported from consumer expenditure surveys, consumption-based poverty measures help overcome the problem of under-reported income, and arguably give a better picture of the wellbeing of the poor. (See here and here for samples of these authors’ many publications on the subject.) As we see, the official rate has changed little since the late 1960s. In the chart, the Meyer-Sullivan consumption poverty rate is calibrated to make the two series equal in 1980, which is the earliest date for which consistent data are available. From that point, consumption-based poverty appears to have fallen to about 3 percent today.

Can such a low poverty rate — less than a quarter of the official measure — be in any way credible? The answer is both “yes” and “no.”

On the “yes” side, we should note that the Meyer-Sullivan measure is based on academically respectable research published in credible journals. The authors’ critique of the official poverty rate is largely mainstream: First, the official rate uses an after-tax cash measure of income that omits benefits from some of the largest antipoverty programs, including the Earned Income Tax Credit (EITC), SNAP, housing assistance, and Medicaid. Second, it is adjusted for changes in purchasing power using the standard consumer price index, which is widely thought to overstate the rate of inflation. Third, its data sources undercount some resources that may be more accurately captured by consumption data.

That does not mean, however, that 3 percent should be treated as the definitive poverty rate, as it is by the CEA report. The Meyer-Sullivan approach, too, is open to criticism on grounds of methodology and data sources. (See here and here for examples.) It should properly be seen as just one of many alternative measures of poverty, each of which highlights some aspects of being poor and downplays others. In fact, Meyer and Sullivan themselves offer variants of their consumption poverty measure that show rates much higher than 3 percent, depending on whether they use an absolute or relative definition of poverty, and depending on the year used to link their series to the official one.

The most important point, however, is not whether the Meyer and Sullivan measure of poverty is too low, but rather, why it is so much lower than the official rate. In large part, the reason is that in-kind welfare programs, especially the expansion of Medicaid and SNAP, really have made substantial improvements to the living standards of the poor. But those gains are fragile. They could easily be reversed if ill-advised reformers withdrew the support offered by existing in-kind programs without putting something better in their place.

Who are the nondisabled working-age adult in-kind welfare recipients, and why do they work so little?

The CEA report draws a sharp distinction between welfare recipients whom “society expects to work,” and those not expected to work. It refers to the former as “nondisabled working-age adults,” whom it defines as people aged 18 to 64 who have not qualified for Supplemental Security Income, Social Security Disability Benefits, or Veterans Disability Compensation. In a key chart, it notes that this group has far lower employment rates than nondisabled, working-age adults in the general population.

As of 2013, nondisabled working-age adults accounted for a majority of recipients of all three forms of in-kind welfare. Of those, 60 percent of Medicaid and SNAP beneficiaries and 52 percent of housing beneficiaries worked fewer than 20 hours per week. Those rates of nonwork are considerably higher than the 28 percent of nondisabled working-age adults among the nonwelfare population who worked fewer than 20 hours per week.

Why do the nondisabled, working-age welfare recipients work so little? The CEA report attributes their low employment rates to the work disincentives of in-kind welfare programs, which systematically reduce benefits as earned income rises. When benefit reductions are combined with income and payroll taxes, the CEA report estimates the effective marginal tax rates facing in-kind welfare recipients to range from 20 to 90 percent.
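The arithmetic behind those effective rates is simple stacking: each additional dollar of earnings loses value to the benefit phase-out and to ordinary taxes at the same time. A minimal sketch, using hypothetical phase-out and tax rates rather than figures from the CEA report:

```python
def effective_marginal_tax_rate(benefit_phaseout, payroll_tax, income_tax):
    """Share of one additional dollar of earnings lost to benefit
    reductions plus payroll and income taxes."""
    return benefit_phaseout + payroll_tax + income_tax

# Hypothetical recipient: a benefit that phases out at 24 cents per extra
# dollar earned, the 7.65% employee payroll tax, and a 10% income-tax bracket.
emtr = effective_marginal_tax_rate(0.24, 0.0765, 0.10)
print(f"{emtr:.2%} of each extra dollar earned is lost")
```

Recipients enrolled in several programs at once face stacked phase-outs, which is how combined effective rates can approach the top of the CEA’s 20-to-90-percent range.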

Like most economists who have studied the matter, I agree that antipoverty programs are rife with disincentives to work. (See here and here, for example.) However, high effective marginal tax rates are only part of the story. A balanced analysis must also take into account other causes of weak labor-market attachment among welfare recipients.

A study of nonworking Medicaid recipients by the Kaiser Family Foundation provides some perspective. Like the CEA report, the KFF study excluded Medicaid recipients who had qualified for SSI or SSDI disability payments. It found that just 39 percent of officially nondisabled Medicaid recipients did not work at all. Of those who did not work, 31 percent reported caretaking duties as the reason, 15 percent said they did not work because they were in school, 36 percent cited poor health or disability, and 18 percent gave other reasons.

Each of these categories raises issues not adequately discussed by the CEA report. The CEA does discuss child care as a factor in low work rates, but not caregiving duties toward other family members, such as disabled spouses or elderly parents. It altogether ignores study as a reason for nonemployment. But that is not all.

The CEA report pays no attention to the large number of in-kind welfare recipients who do not qualify for SSI or SSDI benefits, but still report illness or disability as their reasons for not working. That is a major red flag. It is simply not true that enrollment in official disability programs is an accurate indicator of “disability” in any sense that is relevant to labor-market behavior. According to other data supplied by KFF, a quarter of nonworking Medicaid adults without SSI have mobility or physical limitations such as difficulty going up or down stairs (24 percent), walking 100 yards (25 percent), sitting or standing for extended periods (27 percent), or stooping, kneeling or bending (24 percent). Many live with daily, activity-limiting pain. The official SSI and SSDI programs are not tailored to such partial disabilities, nor to illness or injuries that are temporarily disabling, but from which full recovery is likely.

Furthermore, even for people who do develop permanent disabilities, getting accepted into disability programs can take months or years. Enrolling in these programs often requires expensive legal assistance to navigate multiple rounds of rejection and appeal. Some who are truly disabled are discouraged from applying at all. At any time, then, there is a pool of fully disabled persons who have not yet qualified for SSI or SSDI, but who are unlikely candidates for employment regardless of effective tax rates, work requirements, or other policy details.

Finally, consider the people who give other, unspecified reasons for not working. Many of them have one or more other characteristics that are not technically “disabilities,” but that make it difficult for them to find and hold jobs. Substance abuse and criminal records are two of the most important. Other people have borderline mental issues such as depression, anxiety, and mild forms of autism that may fall short of diagnosed mental illness but still make them hard to employ.

In short, it is not reasonable to compare “nondisabled working-age adults” who are on welfare with those who are not. Some of the difference in their employment does reflect the disincentives of high benefit-reduction rates, but not all. Part also reflects a natural sorting according to employability. Better policy would help, but the fact remains that we are dealing with a population that is always going to be at risk of being the last to be hired and the first to be fired.

Are work requirements really effective?

According to the CEA, the available evidence shows work requirements to be effective. Specifically, it says that “the most relevant historical corollary — welfare reform in the 1990s — provides evidence that applying work requirements to existing welfare programs can increase employment and reduce dependency.”

Yet, that is not what the data actually say. The gold standard for evaluating the 1990s reform of cash welfare is a set of 11 controlled experiments known as the National Evaluation of Welfare-to-Work Strategies (NEWWS). The experiments each lasted five years and were conducted in various cities around the country. Each experiment compared a group of people whose benefits were conditioned on work requirements with a control group who continued to receive welfare as usual. Here are some of the key points from a detailed discussion of those experiments in the Milken Institute Review earlier this year:

  • First, any employment gains were modest. In the most successful experiment (Portland, Oregon), 85.8 percent of those in the group subject to work requirements worked during at least one of the 20 quarters of the study period — only slightly more than the 81.7 percent for the control group. In five of the 11 experiments, the differences in employment were not statistically significant, and in one case, employment was actually lower after imposition of work requirements. Note that even these modest gains are based on a far lower standard for “employment” (some kind of job in at least one calendar quarter over a five-year period) than the standard used in the CEA report (20 hours of work every week).
  • Second, where the imposition of work requirements did result in work, most of the gains accrued to taxpayers rather than participants. Ten of the NEWWS experiments tracked combined wage and benefit income over the five-year period. In six of them, the increase in wages earned was less than the decrease in benefits. Averaged across all ten experiments, participants subject to work requirements experienced about a 1 percent reduction in total income. In the light of those data, it is a stretch to say that work requirements moved large numbers of people to meaningful self-sufficiency.
  • Third, and most importantly, the NEWWS experiments produced positive results only where work requirements were backed by intensive administrative support. Running a successful welfare-to-work program requires adequate funding and well-trained staff. Case workers and other administrators must do more than simply monitor eligibility and compliance. In the successful Portland experiment, case workers interacted one-on-one with participants to cajole them into jobs or training programs, or to coerce them to make greater efforts by threatening withdrawal of benefits. Experiments in cities like Oklahoma City and Detroit, where staffing and administrative funding were lower, reported no statistically significant increase in employment.

The CEA report makes only the most off-hand references to the need for administrative support, even though without it, there is little hope of moving large numbers of people from welfare to self-sufficiency. But perhaps the very neglect of this topic gives us a hint as to the real agenda of those pushing for the expansion of work requirements.

What is the real agenda?

If the objective is true self-sufficiency, work requirements are unlikely to be effective. However, if the objective, instead, is simply to cut in-kind welfare rolls, work requirements hold out greater prospects for success.

Think of it this way. As currently configured, the message to in-kind welfare beneficiaries is, “We encourage you to work, but if you do work, we will take your benefits away.” Not surprisingly, that tends to discourage employment and encourage people to stay on the programs. Adding work requirements changes the message so that it becomes, “If you work, we will take your benefits away, but we will also take them away if you don’t work.”

You don’t need a Ph.D. in economics to understand that this change in the message is likely to reduce the number of people receiving benefits. Furthermore, if the objective is simply to cut the number of beneficiaries, it becomes irrelevant whether those leaving Medicaid, SNAP, or housing assistance actually become employed or simply disappear into the streets. In either case the amount of required administrative support is greatly reduced.

The struggle between these alternatives — programs with low administrative budgets designed to thin out the welfare rolls or administratively more costly programs that actually encourage self-sufficiency — is being played out right now as many states seek federal waivers to impose work requirements for Medicaid.

Some states have looked at the issue and blinked. In the words of still another KFF report,

Some states have decided to not implement the waiver authority that they have received due to administrative costs. For example, Arkansas did not implement its health savings accounts after considering a number of factors, including the administrative expense of the accounts and the size of the monthly contributions members would make. Indiana is seeking to amend its waiver that originally set premiums at 2 percent of income and wants to change to a tiered structure instead, citing administrative complexity and costs. Kentucky amended its waiver application seeking to move from a tiered hour work requirement (depending on length of program enrollment) to a flat hourly requirement, also citing administrative concerns. Unlike TANF agencies or workforce development agencies, state Medicaid agencies are generally not currently equipped to develop, provide, and administer work support programs.

On the other hand, here is what happens when states prioritize disenrollment over self-sufficiency, as described in a policy brief from the Center for Law and Social Policy, a Washington-based nonpartisan, nonprofit organization:

We know that for every additional piece of paperwork that is required, fewer people are able to secure or retain coverage. A work requirement compels people to submit documentation of their hours worked (sometimes from multiple jobs) on a regular basis. Failing to submit paperwork — even when they are working and meeting the work requirements — will cause people to lose their Medicaid coverage.

Work requirements do not reflect the realities of today’s low-wage jobs. For example, seasonal workers may have a period of time each year when they are not working enough hours to meet a work requirement and, as a result, will churn on and off the program during that time of year. Or, some may have a reduction in their work hours at the last minute and therefore not meet the minimum numbers of hours needed to retain Medicaid. Many low-wage jobs are subject to last-minute scheduling, meaning that workers do not have advance notice of how many hours they will be able to work. This not only jeopardizes their health coverage if Medicaid has a work requirement but also makes it challenging to hold a second job. If you are constantly at the whim of random scheduling at your primary job, you will never know when you will be available to work at a second job. This will lead to greater “churn” in Medicaid as people who become disenrolled reapply and enroll when they meet the work requirements.

Fortunately, there are better ways to encourage self-sufficiency when it is realistically possible. Instead of saying, “We will take your benefits away if you work, and also if you do not work,” the message should be, “We will provide unconditional help with your minimum needs, we will offer the support you need to find a job, and we will let you keep what you earn.”

The Earned Income Tax Credit is the one existing policy that follows that approach. For the lowest earners, it actually increases benefits, rather than cutting them, as earnings rise. The CEA report cautiously endorses the EITC and suggests extending it to single workers. Other observers have proposed more far-reaching reforms that would extend coverage, make benefits monthly rather than annual, and ease benefit reduction rates for people just above the poverty level (see here, for example).

But the EITC is only one among many proposals that would reduce the work disincentives inherent in today’s in-kind welfare programs. Some proposals focus on consolidating programs to simplify administration and reduce the high benefit-reduction rates that affect people who qualify for multiple programs. Others seek simplicity and flexibility by converting in-kind benefits to cash. Milton Friedman’s idea of a negative income tax is a proposal with good conservative roots that combines cash benefits with lower benefit-reduction rates. A universal basic income is a newer idea with a progressive tilt that goes even further in the same direction. Either could be a win-win for welfare recipients and for hardworking taxpayers. No more Catch-22 documentation requirements, no more armies of bureaucrats to sort people into categories and monitor their every move, no more shame-the-victim rhetoric to justify refusing to help the truly needy.

Previously posted on

Is the War on Poverty Really Over?

The Council of Economic Advisers recently released a report that began with the startling statement that the War on Poverty is over and has ended in victory. Properly measured, says the CEA, the poverty rate has fallen to just 3 percent.

Can such a low poverty rate, less than a quarter of the official measure (12.7 percent for 2017), be in any way credible? The answer turns out to be both “yes” and “no.”
There are, in fact, many different measures of the poverty rate. In addition to the official measure, the Census Bureau also publishes a modernized version called the Supplemental Poverty Measure, estimated at 13.97 percent for 2016. The Organization for Economic Cooperation and Development, a club of 36 middle- and high-income democratic countries, defines poverty as earning less than half of a country’s median income. By that definition, the U.S. poverty rate is 16.8 percent, the third highest in the OECD. Only Israel and Turkey have higher poverty rates.

The official U.S. poverty measure was devised in the late 1950s. Its creators began by estimating the cost of a minimum healthy diet. They observed that poor families, on average, spent about a third of their income on food, so they estimated that an income equal to three times the cost of the minimum diet would be adequate to provide food plus other necessities. Income was defined as money income before taxes. That definition has remained unchanged over time except for adjustments for inflation.
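That threshold logic can be sketched in a couple of lines (the diet cost below is purely hypothetical, chosen for illustration):

```python
def poverty_threshold(minimum_diet_cost, food_share=1/3):
    """Original poverty-line logic: if food takes about one third of a poor
    family's budget, an adequate income is the minimum diet cost divided by
    that share -- i.e., three times the cost of the minimum diet."""
    return minimum_diet_cost / food_share

# Hypothetical annual diet cost, for illustration only:
print(round(poverty_threshold(1000)))  # a $1,000 diet implies a $3,000 threshold
```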
Over the years, the official poverty level has been subject to many criticisms. Here are some of the most important:

1. An accurate measure of poverty should use income after taxes, not before, in order to reflect both the burden of taxes paid and the benefits of tax credits received.
2. An accurate measure of income should include the value of in-kind poverty programs, such as housing aid and SNAP (formerly food stamps).
3. An accurate measure should take into account the way in which savings and ownership of real assets (such as cars and houses) affect the living standards of people with low current incomes.
4. The data sources used to estimate the official rate undercount certain kinds of income.
5. The standard consumer price index used to adjust the official poverty measure is an inaccurate measure of changes in the cost of living.
6. More broadly, poverty should not be measured according to an absolute standard, such as the cost of purchasing a fixed basket of goods and services, but rather by a relative standard, such as half the median family income.

The 3 percent poverty rate cited by the CEA takes several of these points into account. It is based on research by Bruce D. Meyer of the University of Chicago and James X. Sullivan of Notre Dame. That research, which has been published in many well-regarded journals, focuses on the first five criticisms in the list. Their most recently released results are shown in the following chart.

The top line in the figure shows the official poverty measure. After adjustment for changes in the standard consumer price index for urban consumers (CPI-U), the official rate has been essentially flat, with ups and downs over the business cycle, for half a century.

The middle line in the chart recalculates the official poverty measure using a different version of the consumer price index, known as CPI-U-RS, but makes no other changes. CPI-U-RS is a research tool that attempts to reconstruct historical values of the consumer price index as they would look if the current methodology used by the Bureau of Labor Statistics had been used in earlier years.

Two methodological changes account for most of the difference between CPI-U and CPI-U-RS. Over time, BLS has improved the way it measures the impact of changes in the quality of goods like automobiles and consumer electronics. It has also changed the way it adjusts for changes over time in the content of the basket of goods purchased by the typical consumer. Taking these two changes into account, the standard CPI-U is estimated to have overstated increases in the cost of living by about 0.5 to 0.8 percentage points per year. That does not sound like much, but, as the chart shows, when those changes are applied year after year, they add up to a big difference in the estimated cost of living.
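A back-of-the-envelope calculation shows why such a small annual bias matters; the fifty-year horizon below is an assumption chosen to match the rough span of the official series, not a figure from BLS:

```python
# Compound a 0.5-0.8 percentage-point annual overstatement of inflation
# over fifty years to see the cumulative drift in an indexed poverty line.
years = 50
for annual_bias in (0.005, 0.008):
    cumulative = (1 + annual_bias) ** years - 1
    print(f"{annual_bias:.1%}/year for {years} years -> "
          f"{cumulative:.0%} cumulative overstatement")
```

Even at the low end of the estimated bias, the indexed threshold drifts more than a quarter above where the true cost of living would put it, which is why the choice of deflator moves the measured poverty trend so much.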
The lowest line in the chart shows Meyer and Sullivan’s own measure of poverty. It, too, uses the CPI-U-RS as a deflator, but it goes much farther than that. To address the first three of the above-listed criticisms of the official rate, Meyer and Sullivan estimate poverty using data on consumption rather than income. That approach captures consumer goods and services purchased with tax credits, such as the Earned Income Tax Credit, as well as goods and services obtained using SNAP debit cards or housing vouchers. It also captures consumption financed by drawing down savings, along with the implicit consumption services from homes and cars that a family owns. All of those sources raise a family’s consumption but not its pretax income.

The switch from an income-based to a consumption-based poverty measure not only changes the poverty rate, but also who gets classified as poor. It does that by eliminating some people who have low incomes, but who would not normally be considered poor in any meaningful sense. For example, a consumption-based measure of poverty is less likely to count law or medical students who maintain an above-poverty standard of living by borrowing against the high earnings they expect in their careers. It is also less likely to include retired couples whose current income is low, but who live comfortably in houses they paid for during their working years, further supplementing their standard of living by drawing down savings or borrowing against home equity. As Meyer and Sullivan point out, those who are classified as poor by a consumption measure are more likely to be those who are truly desperate.

Changing the basis for measuring poverty would have most of these effects even if the data for both income and consumption were completely accurate. Meyer and Sullivan make the further contention that income data are more likely to undercount resources available to the poor than consumption data are. If so, that would make the gap between the official poverty measure and their own measure that much larger. It is worth noting, however, that not all researchers agree. Among the skeptics are Kathryn Edin and H. Luke Shaefer, authors of $2 a Day: Living on Almost Nothing in America. They summarize their doubts about the quality of the consumption data used by Meyer and Sullivan in a short blog post on their website.

Informative as it is, though, it would be wrong to treat the 3 percent rate as the definitive estimate of poverty, as the CEA report appears to do. It is not just that other methods of measurement come up with higher numbers, as we noted earlier. In addition, Meyer and Sullivan themselves make it clear that the poverty rate obtained using the consumption approach depends, among other things, on the year used to link their consumption measure to the official measure. They also note that the rate of consumption poverty, and its rate of decrease over time, depend on whether one uses an absolute or relative definition of poverty. Three percent should properly be viewed as the lower bound of a range of estimates that can be obtained using the consumption approach.

It is also important to point out that the reported decrease in consumption poverty does not mean that the War on Poverty is “over” in the sense that all but 3 percent of those who were previously poor have achieved self-sufficiency. On the contrary. The steady decline of consumption poverty that Meyer and Sullivan observe is, in large part, a result of the expansion of antipoverty programs, especially the Earned Income Tax Credit, SNAP, and Medicaid, the benefits of which are not picked up by the official measure. Other things being equal, cutting back on those programs would send a consumption-based measure of poverty back up again, despite the strong economy.

In short, the War on Poverty goes on. It is heartening to know that past efforts to aid the poor have not been entirely in vain, as one might pessimistically conclude from the persistently flat trend of the official poverty measure. However, many battles will yet have to be fought before America’s prosperity is shared by all of its citizens.

Reposted with permission from Medium