Wealthy investors to win bigly with Republicans’ proposed tax plan

http://blogs.berkeley.edu/2017/11/09/wealthy-investors-to-win-bigly-with-republicans-proposed-tax-plan/

By Gabriel Zucman and Emmanuel Saez

This blog is cross-posted from the Berkeley Opportunity Lab and the Washington Center for Equitable Growth.

The tax plan released by Republicans in Congress and praised by President Trump is a remarkable document in many ways, but most notably in that it achieves just the opposite of its stated goal. Presented as a tax cut for workers and job-creating entrepreneurs, it is instead a giant cut for capitalists and inherited wealth. It is a bill that rewards the past, not the future.

First, the proposed legislation cuts the top rate on profits recorded by so-called pass-through businesses from 39.6 percent to 25 percent, but with a trick that neatly summarizes the philosophy of the bill. The reduced rate applies only to passive business owners, not to active entrepreneurs.

Payoff for wealthy investors but not workers

Investors who own shares in lucrative firms for which they do not work will pay 25 percent on the profits flowing to their bank accounts. But entrepreneurs who work to earn income from start-ups in which they are actively involved will pay the higher rate of 39.6 percent. Wealthy investors win bigly. More jobs are not created. Workers get nothing.

The proposed plan contains complicated rules intended to prevent active business owners from angling to pay the lower 25 percent rate by pretending to be passive owners.

If these rules work as intended, passive owners will be the sole beneficiaries of the bill.

But if clever tax accountants abuse the new rules, or lobbyists in Washington succeed in getting the lower tax rate enacted for all owners of pass-through businesses, we will see an even larger tax cut for the top 1 percent of income earners, and a federal budget deficit that balloons even more.

Heirs and heiresses of the wealthy

Second, the Republican plan reduces and then eliminates the estate tax. The beneficiaries of this measure will be the heirs and heiresses of the wealthy who die with more than $5.5 million in net wealth—not exactly active entrepreneurs at this stage of their lives.

Conveniently, the provision would allow the Trump family to avoid more than $1 billion in federal taxes (if they have not already organized their affairs to dodge the estate tax by creating family trusts). Inheritors, who by definition have not earned their wealth, will be able to keep their full inheritance free of any federal tax.

Third, the proposed bill cuts corporate income taxes by $846.5 billion, primarily by reducing the corporate tax rate from 35 percent to 20 percent. Whatever one believes about the long-run effects of cutting corporate taxes, it is clear that in the short- and medium-run the cut overwhelmingly benefits shareholders, who do not need to do any work to reap their profits.

So here is what the Republican tax plan boils down to. A retired passive business owner in Florida gets a huge tax cut, with his marginal income tax rate falling from 39.6 percent to 25 percent. His children will inherit a bigger estate and will not have to pay any tax on it.

In contrast, the successful start-up owner who is actively growing his business in Silicon Valley sees his marginal tax rate increase from 47.6 percent to 52.9 percent (when taking California taxes into account), because of the repeal of the deductibility of state income taxes. Of course, some Silicon Valley start-uppers will one day become Florida retirees, but if Congress wants to help entrepreneurs, it seems more logical to cut their taxes while they're young, rather than the taxes of their future old selves.
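For readers who want to check the arithmetic, here is a minimal sketch of how those two figures can be reproduced. It assumes the 39.6 percent top federal rate and California's 13.3 percent top state rate, with state income tax deductible against federal income before the change and not after; these rate assumptions are illustrative and consistent with the figures quoted above rather than spelled out in the bill text.

```python
# Illustrative check of the 47.6% -> 52.9% marginal-rate figures quoted above.
# Assumed rates: 39.6% top federal rate, 13.3% top California rate.
federal_rate = 0.396
california_rate = 0.133

# With state income tax deductible against federal taxable income:
rate_with_salt_deduction = federal_rate + california_rate * (1 - federal_rate)

# With the deduction repealed, the two rates simply add up:
rate_without_salt_deduction = federal_rate + california_rate

print(f"{rate_with_salt_deduction:.1%}")     # ~47.6%
print(f"{rate_without_salt_deduction:.1%}")  # 52.9%
```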

Republicans will noisily claim that cutting taxes on wealthy owners will boost economic growth and end up benefitting workers down the income ladder. The idea is that if government taxes the rich less, they will save more, grow U.S. capital stock and investment, and make workers more productive. The evolution of growth and inequality over the past three decades makes such a claim ludicrous.

No chance of trickling down to the rest of us

Since 1980, taxes paid by the wealthy have fallen dramatically as the top marginal income tax rate fell from 70 percent to 39.6 percent. Income at the top of the distribution has boomed, but gains for the rest of the population have been paltry: average national income per adult has grown by only 1.4 percent per year—a poor performance by both historical and international standards.

As a result, the share of national income going to the top 1 percent has doubled from 10 percent to more than 20 percent while income accrued by the bottom 50 percent has been almost halved, from 20 percent to 12.5 percent. There has been no growth at all in the average pre-tax income of the bottom half of the population over the past 40 years—during which trickle-down enthusiasts promised just the opposite. Now they’re doing it again. Will we listen?

Delving deeper into the tax data, we find that the surge in top incomes since 1980 was first driven by the working rich, who captured an increasing share of wages, but since 2000 virtually all the gains made by the top 1 percent of income earners have gone to the owners of passive capital. The proposed bill more than doubles down on favoring this tiny fraction of the population.

The evidence shows that their gains have no chance to trickle down to the rest of us.

 

Focus on the source of most satisfaction, not consumption

http://blogs.berkeley.edu/2018/03/27/focus-on-the-source-of-most-satisfaction-not-consumption/

(Clair Brown wrote this blog with Simon Greenhill, a senior economics major at UC Berkeley who is writing a thesis on global poverty and the refugee crises. The commentary first appeared in Psychology Today on March 26.)

Buying stuff can make you happy for a short time. But you will revert to needing another happiness boost by buying even more stuff. We can, however, replace the boom and bust of a consumption-driven search for satisfaction with lives that are more fulfilling and economically sustainable. Psychologists have, for example, found that altruism creates happiness and produces a positive feedback loop leading to more altruism. In addition, neuroscientists have found that helping others engenders brain activity leading to happy feelings.

With a more holistic view of the world, decisions that make moral sense are also sound economically.

Sociologist Rachel Sherman’s recent finding—that wealthy liberals are often uneasy about their riches, hiding price tags from their hired help and guarding their bank account balances more carefully than the details of their sex lives—can be extended far beyond penthouse apartments and second homes in the Hamptons.

The complex relationship between wealth and social stigma is on daily display here at UC Berkeley. It’s rarely more evident than among privileged undergraduates navigating an environment far more diverse than their hometowns and learning to play the social games of adulthood. Twenty-year-olds in ragged jeans and Goodwill sweaters buy five-dollar coffees twice a day. A young woman who spilled sparkling water on her laptop had a new MacBook Pro in time to submit her homework the next day. One student posted an Instagram picture of her new Mercedes one day and, hardly a week later, a screenshot of her bank account balance, in the red, with a caption about how broke she was.

Our collective tendency to compare ourselves to those wealthier than us, forgetting those with less, isn't just a quirk of human character or a testament to our selective social blindness. Thorstein Veblen, the economist who coined the terms “conspicuous consumption” and “invidious comparisons,” first pointed out how individuals use luxury goods to show off their social status. As early as 1899, Veblen observed that people were living on treadmills of wealth accumulation, competing incessantly with others but rarely increasing their own well-being.

Our valuation of consumption rests on comparing ourselves to one another, and these invidious comparisons lead people from different socioeconomic backgrounds to ascribe wildly different values to the same material objects. The way we measure our social standing has far-reaching consequences, driving much of our personal life satisfaction and determining our collective impact on the environment.

Brown’s research on U.S. standards of living between 1918 and 1988 found that as family income grew over time, families tended to emulate the spending patterns of richer families, spending a larger share of their income on luxury or positional goods. As Americans’ incomes rose, they fulfilled their basic needs and then spent more and more money to showcase their wealth to others.

With rising incomes comes frivolous spending, which itself drives ever more needless consumption, all so we can try to maintain our relative standing and life satisfaction. Sherman’s research reinforces something we’ve known since the turn of the 20th century: luxury goods don’t add to personal well-being, and can even make people feel less happy. Feelings of social discontent and anxiety rise with growing inequality and keep people fighting to maintain their social position, leaving them unsatisfied with their new, fancier lifestyles.

Today, the story of invidious comparison and ever-increasing consumption is also an environmental one. Weeks before COP 21, a paper by economists Thomas Piketty and Lucas Chancel reported that the world’s wealthiest are responsible for the lion’s share of greenhouse gas emissions by individuals. Topping the list of the world’s high emitters are America’s 1 percenters, who account for over 300 tons of greenhouse gas emissions per person. That’s fifteen times more emissions than the average American and fifty times more than the average person worldwide, according to figures from the World Resources Institute.

Yet most Americans, not just the rich, need to dramatically reduce consumption to meet the goal set by the Paris Climate Accord: 2.1 tCO2e (tons of CO2-equivalent) per person per year by 2050. The United States’ current 16.5 tCO2 per person (2014 data) means that the U.S. must reduce carbon emissions by nearly 90%. This kind of reduction is a mind-boggling challenge, yet a combination of personal lifestyle changes and activist government can create a modern economy where people live more meaningful and less materialistic lives. Lavish consumption will finally be seen as the folly it is. When we consume to keep up with our neighbors, we aren’t just failing to fulfill our own desires, we’re expending our limited emissions budget without improving our lives.
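As a rough check of that “nearly 90%” figure, here is the back-of-the-envelope arithmetic using the numbers quoted above (a sketch, not a formal emissions calculation):

```python
# Back-of-the-envelope check of the "nearly 90%" reduction claim.
us_per_capita_emissions = 16.5   # tons per person per year (2014 figure quoted above)
paris_target_per_capita = 2.1    # tons per person per year by 2050

required_cut = 1 - paris_target_per_capita / us_per_capita_emissions
print(f"Required reduction: {required_cut:.0%}")  # about 87%, i.e. nearly 90%
```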

In her most recent work, Brown explores how we can restructure our economy with policies that reduce inequality and carbon emissions and help people live more meaningful lives. When we ask students and friends what is important to them, we tend to get answers about relationships, helping others, and using their talents to help the world. Contrary to what many economic models assume, no one says they want to consume more.

When people focus on what brings them the greatest satisfaction, and not what guarantees them an ever-increasing income, they are unwittingly practicing what we call Buddhist Economics. Brown has dedicated her recent research to understanding how individual well-being and global sustainability can be integrated into an economic framework.

When satisfaction and happiness seem to elude you, think about what you truly care about. Spend your time and money on activities that you think are meaningful and lead to a worthwhile life. Stop worrying about how to become even richer or about which luxury goods to buy. Focus on how fortunate you are with your income, job, family and friends. Privileged students like those we see around UC Berkeley should be grateful for what they have, and move beyond conspicuous consumption to find long-lasting happiness. Realize that as a Middle American, you are one of the richest people on the planet—and that if we leave our conflicted relationship with wealth behind, we can find ourselves immeasurably richer.

CEOs take to the pulpit on gun control

http://blogs.berkeley.edu/2018/03/28/ceos-take-to-the-pulpit-on-gun-control/

You know their names: Emma Gonzalez (age 18); David Hogg (age 18); Naomi Wadler (age 11); Yolanda Renee King (age 9). These young people, and many more, stand at podiums to eloquently, outspokenly and loudly demand tighter gun control legislation from our political leaders following the Marjory Stoneman Douglas High School shooting that left 17 dead. They are organizing protests on the White House lawn and in cities across the U.S. and around the world. They grace the cover of TIME magazine. They have become powerful influencers of social media content around this issue writ large. Indeed, Emma Gonzalez has more Twitter followers than the National Rifle Association. They established the madly trending hashtag #NeverAgain.

You also recognize these names: BlackRock, Dick’s Sporting Goods, Delta Airlines, United Airlines, Citigroup, First National Bank of Omaha, MetLife, Symantec. CEOs from these companies are joining the calls for tighter gun control laws in the wake of the Florida school shooting. They are telling the young leaders, “We have heard you and we will use our voice and capitalistic power towards the same end.”

These firms are hardly representative of the Fortune 500 list; the vast majority of Fortune 500 companies remain silent. But Citigroup, 30th on the list, may have a far-reaching impact with its statement that it will no longer work with any retailer that does not enforce stricter gun control policies, such as no longer selling bump stocks or high-capacity magazines. Citi is not telling its customers how to use their cards; it is making a statement about the types of businesses with which it will partner. On the other side of the debate, Wells Fargo, No. 24 on the list and headquartered locally, says that gun control policies are not up to the bank, but should be left to the political and legislative process.

Risks vs. rewards

Corporate leaders – along with their financial bottom lines – face distinct risks when they take a stand. They experience retribution via customer boycotts and negative social media campaigns that can damage their reputation. Delta Airlines lost tax advantages when Georgia lawmakers punished the company, some investors are threatening divestiture, and the conservative press is lambasting them.

But these companies are also experiencing positive returns that so far seem to be outpacing the negatives. Large-scale social media campaigns herald their action and use of voice, customers say they will switch to their products and services, and large institutional investors like CalSTRS, which control significant access to capital, applaud them.

If economist Milton Friedman were alive, he likely would reprimand the CEOs for using their pulpit and power. He would surely espouse his famous belief that “the business of business is business” and that the CEO’s only role is to maximize shareholder returns, not focus on social or environmental issues.

Is this a new role for corporate leaders?  Is this the right role?  In the face of risks of customer boycotts and backlash, why do they speak out in the first place? We have seen a recent trend of corporate leaders taking a stand on issues like transgender bathrooms, gay marriage, the veracity of and need to address climate change, diversity, and gender equity.  They stepped down from President Trump’s business advisory council, triggering its demise.   Why are they choosing to do so?

CEOs as human beings

First of all, we seem to forget that CEOs are human beings; human beings who are affected by this tragedy of 17 people being killed while simply getting an education, on top of other school shootings and office shootings and church shootings. These human beings are daughters and sons and mothers and fathers, as well as, and even before, being CEOs. And these CEOs are leading other human beings who are their employees.

There is a solid business reason, and a related return, to listening to, responding to, and retaining their employees, not to mention attracting new employees who have choices of where they work. Ignoring and losing employees costs money. We spend a large chunk of our days, and hence lives, at our respective places of work. A significant number of Millennials – people who were recently teenagers like those standing at the podiums now – are current and future employees. In fact, there are more Millennials in the workplace now than any other generation. This generation is demanding that its leaders and employers take a stand on issues and causes that they care about and believe in.

Aside from employee demand, peers of CEOs are speaking out. These peers lead competitors of their companies. Companies exist to compete, and win by competing. Corporate social responsibility has become a competitive advantage for companies in attracting capital, customers, employees, and press – all of which leads to brand value. There are costs to silence. Staying silent still relays a message, because you cannot NOT communicate.

Today we live in one of the most divisive times under one of our most divisive and wildly tweeting presidents. A growing segment of our citizens senses an increasing leadership vacuum. Government leaders are being fired, or are leaving of their own accord, at a record pace. Our government is changing direction on environmental protection, human rights, gender equity, open borders, pacts with the WTO on fair trade, and abortion rights.

Social and political stability means economic stability, and a mercenary business leader knows that for her or him to be successful, our country needs economic stability. Capitalism, most if not all of it, abhors instability and volatility. If other institutions don’t act to better the social communities in which we live – and the killing of 17 innocent people at a school in NO way is bettering our society – we look for someone to take the lead. The business sector is currently the focus of that gaze.

Setting expectations

In a way, corporate leaders have been given a hall pass to draft behind social, economic and political leadership, almost as free-riding citizens. So again, is this a new role for CEOs? Yes. Is this a short-term role for corporate leaders? I don’t think so. That ship has sailed and is gaining rapid momentum. Once one speaks out on an issue, it becomes difficult to stay silent as demand increases on both sides of that issue.

Expectations have been set. There are self-serving and mercenary reasons for corporate leaders to speak up, but I stress that we tend to forget that companies are led by, and made up of, caring and feeling human beings. If a 9-year-old can stand up and shout #NeverAgain, then of course a 50-year-old CEO must also feel compelled to. She or he is, after all, a human being with a beating heart.

 

Rapid innovations in agrifood supply chains

http://blogs.berkeley.edu/2018/06/19/rapid-innovations-in-agrifood-supply-chains/

We hosted our third Agrifood Supply Chain Conference on April 18 and 19 together with Solidaridad and other wonderful sponsors. The conference took place at the Energy Biosciences Institute (EBI) building in Berkeley, which houses cutting-edge institutions – the EBI and Innovative Genomics Initiative – that create new technologies affecting supply chains around the globe. The key premise of the event was that a high rate of innovation is driving supply chains to evolve, creating new products or new ways of producing existing products in ways that are economical and meet environmental and social objectives.

The conference emphasized some of the tensions and contrasts within agricultural supply chains, and how policies can resolve or exacerbate these tensions. For example, the contrast between the supply chains for two important crops, cocoa and blueberries, was quite apparent. Cocoa originated in the Americas and production practices were established by Franciscan monks; this system has remained mostly in place to date. The crop is grown mostly in West Africa, where yields are low and the trees are tall and require significant harvesting effort. Modern inputs are rarely used, and there are concerns about labor practices and environmental ramifications. Yet there are small, specialized producers who are producing high-value, refined varieties. Researchers have discovered new varieties that can improve productivity in the cocoa sector, but adoption of these varieties is limited due to credit constraints faced by farmers, as well as a lack of investment in extension services and constraints imposed by governments. Some of these constraints may be motivated by the concern that an increase in supply may lead to a drastic reduction in the price of cocoa, which is low already. One possibility that would allow improvement in productivity is to invest in nurseries that introduce higher-yield cocoa varieties and expand outreach to improve production methods, while simultaneously converting some of the land to other crops, such as oil palm. This would make it possible to maintain cocoa production levels, generate new sources of income, and address price-stabilization concerns.

While cocoa is an established crop grown by traditional smallholders in developing countries, blueberries have only recently emerged as a commercially significant crop. Demand for blueberries increased partly due to studies showing that they contain strong antioxidants. At the same time, supply increased as research efforts allowed for a uniform, high-quality product. Harvesting is labor-intensive, and as minimum wages rise and constraints on immigration grow, the industry is seeking out methods of automated harvesting. Blueberries for processing are already harvested mechanically in some cases. But there is hope that with increased precision, harvesting of blueberries for the fresh market will also be automated. In addition, there are continued efforts to increase the efficiency of production and availability to consumers, in part by designing smaller plants that can be grown in vertical farming systems.

Both the emergence of the blueberry sector and the desire to produce high-quality and sustainable cocoa reflect the agrifood sector’s emphasis on addressing consumer demands. The emergence of the organic sector is a prime example of this trend. Whole Foods has been a major promoter of organic, and its success has led other companies, like Costco and Walmart, to invest in building supply chains for organic products. The organic industry emphasizes that organic is “clean” and “natural” even though there is no significant scientific evidence of the superiority of organic products from a health perspective. Regardless, many consumers prefer organic, which leads to a price premium for these products. This also reflects a societal tension between science and ideology that may affect attitudes to agricultural biotechnology as well as climate change. Some of the people most concerned about climate change also oppose the use of biotechnology in agriculture, but modern biotechnology can be an effective tool to help address the potentially negative effects of climate change on agricultural production. For example, genetic tools can be used to modify crop varieties to withstand changes in climatic conditions, such as droughts, floods, etc. With the introduction of gene editing tools, such as CRISPR, the capabilities of biotechnology are being enhanced even further. The likelihood of their adoption will increase if the regulation of gene editing in agriculture balances benefits and risks. It’s clear that the major beneficiaries of many of these technologies are developing countries, which suffer from food deficiencies and are more vulnerable to the impacts of climate change. The major challenge is to develop the capacity to create crop varieties and systems that will be appropriate for various locations and be adopted when needed, given the impending consequences of climate change.

One of the major features of modern supply chains is product differentiation. It is becoming evident that food markets are bifurcated between foods that target the affluent and foods that target the rest of the population. Very often, people who can afford the price premium tend to purchase and consume organic-certified products. For some, it is because of presumed health benefits, while for others it is due to taste. Others cite more environmentally friendly practices (restricted pesticide use), or animal welfare considerations. We are now seeing the emergence of restaurant chains, including fast food chains, that emphasize organic products. From the farmer’s perspective, this is desirable as it increases farm income. From a global perspective, it may be problematic because the supply of organic products is limited and may be taxing on the environment. Furthermore, the misleading demonization of non-organic food products may lead consumers to misallocate resources, spend extra money while gaining minimal benefit, and actually harm the environment.

A growing aspect of the food system is access to information and transparency about the supply chain. More affluent consumers are generally more interested in, and more willing to pay for, familiarity with food ingredients, and even food production methods and working conditions. The cost of providing this information is declining with information technology, and there is a growing reliance on certification systems. However, developing metrics for certification systems that adequately measure positive change, such as poverty reduction, improved market access for farmers, and reduced environmental impact of farming processes, is a huge challenge in itself. This is a work in progress – the track record of certification systems to date is mixed. Some indeed may reduce deforestation and eliminate forced labor, but others may be costly to farmers and won’t necessarily lead to meaningful change. Certifiers themselves need to be scrutinized in terms of their impact and their cost. Much of the certification is conducted by NGOs, but these are temporary solutions that need to be scaled up through sound regulatory systems that are enforced by national governments and integrated into a global system.

Some useful insights on certification were shared by keynote speaker Nico Roozen of Solidaridad, our conference partner this year. Solidaridad is a global not-for-profit organization working for over 45 years in the area of sustainable agrifood supply chains. Solidaridad was, in fact, the first creator of fair trade labeling. In the 1980s, Nico witnessed first-hand the social unrest and brutal massacres of the civil wars in Central America. He realized that violence can be reduced through economic empowerment to improve the lives of poor communities striving for justice and equality. He learned from farmers that what they truly wanted was a better price for their coffee, not more aid money.

These experiences inspired him to create the first fair trade label (Max Havelaar) for sustainably produced coffee in 1988, and thereafter for bananas in 1996. Nico encountered resistance from both supermarkets and activists (who were against working with business) in establishing the Max Havelaar certification program. Solidaridad continues to partner with businesses and national and regional governments in its work. Despite its long history with certification, Solidaridad agrees that it’s simply not possible to certify farmers out of poverty, or to stop deforestation by certifying relatively small market segments. It recognizes the limitations posed by third-party certifications and emphasizes the need for innovation to overcome their shortcomings and eventually replace them with well-functioning regulatory frameworks.

The growing demand for organic food also seems to reflect a sense of dissatisfaction among consumers with the existing agrifood system. However, the food system is evolving. The changes in the food system are outcomes of relentless innovation: new knowledge and development and commercialization efforts that result in new food products and new methods of production and consumption, and that provide a growing capacity to deal with heterogeneity.

One of the biggest problems in the agrifood system is food waste and spoilage. A significant portion of food produced in the tropics is wasted due to high moisture during harvest and storage. The moisture encourages the growth of molds that produce mycotoxins, including aflatoxin, which is a source of childhood stunting, liver cancer and other medical conditions. A solution to this problem is the concept of the “dry chain,” where equipment and procedures are designed to dry produce after harvesting and preserve it in a manner that protects it. While the technical components of such a system are readily available and applied in many parts of the US and Europe, the main challenge is to implement similar solutions in developing countries. This entails developing the production of affordable equipment, establishing mechanisms for finance, and providing promotion and education that generate demand and result in appropriate use of the new technologies.

While there is a lot to be done, the world has witnessed immense ‘quiet revolutions’ over the past fifty years that have improved the quality, convenience, and diversity of food throughout the world, and especially in developing countries, through the introduction of enhanced value chains. We have been witnessing a process through which many technologies introduced in the US and Europe in the 1950s and 60s (e.g. refrigeration, improved storage, processed foods, supermarkets) were transferred to Latin America and Asia in the 1980s and 90s, and to many parts of Africa and South Asia in the past 20 years. The diffusion of these technologies is still only partial, but it is moving very quickly, and has had a significant impact on farms and agribusiness supply chains. The well-being of many farmers has drastically improved, while many others have lost ground, at least in relative terms.

Some of the drudgery and waste associated with food preparation is being reduced by processing, in the form of prepackaged salads and pre-cut meats. Consumers can enjoy the process of cooking and save time with meal kit services, which deliver directly to their doors. Precision agriculture, embodied in technologies like drip irrigation and new applications of information technology and robotics, allows farm inputs to be applied variably across locations and over time and improves harvesting.

All these changes are associated with the development of new, creative agrifood supply chains. Many of these changes rely on local resources, yet almost all of them affect interdependent global supply networks. These systems can be threatened by protectionist policies that erect barriers to the movement of goods or knowledge in order to protect local interests. Climate change is another threat; failure to mitigate it and to adapt production systems and logistical facilities to changing conditions may endanger food security and safety globally. Understanding and improving agrifood supply chains and policies are works in progress, and we will continue to engage through this workshop, which aims to provide education and exchange of knowledge in the coming years.

What does Daylight Saving Time really save?

http://blogs.berkeley.edu/2018/07/02/what-does-daylight-saving-time-really-save/

DST can coordinate societal shifts to better use of daylight…but at a cost.

You would think most states have more pressing issues to confront these days, but legislation on the measurement of time is one of the perennial favorites in our nation’s legislatures. There is always someone passionate about the horrible costs or enormous benefits of Daylight Saving Time (DST) (or, closely related, the “correct” time zone for their state). In the last few years, more than a dozen bills have been introduced to change a state’s adherence to DST.

This year, California has gotten into the game (and not for the first time). Assembly Bill 807, signed by Governor Brown last week, is a first step in a multi-part process that could eventually undo the state’s 1949 adoption of Daylight Saving Time. Advocates argue that it would avoid the semi-annual clock adjustments that disrupt our sleep and schedules, and that it would reduce energy use.

These proposals trigger the retelling of the probably-not-apocryphal story of the gardener who remarked that DST is wonderful because the extra sunlight makes tomatoes grow faster. Let us now pause to chuckle smugly, because we all know that DST does not change the amount of light any location receives over a 24-hour period.

Many of us, however, seem to think that the timing of human behavior is as immutable as the rotation of the earth. Op-eds, office discussions, and even news reports on the subject are filled with assertions that changing how we designate time necessarily gives us more or less time for some outdoor leisure activity, or means that we will engage in some task when it is light or dark.

It’s possible that some folks making these arguments should not have been chuckling two paragraphs ago, because they actually do think that the designation of time changes the total amount of light. But more likely, they think that humans would not adapt at all to changing what is a completely arbitrary numbering system for the hours of the day, that they would robotically still perform each activity at the same clock time. [Aside: I suppose that is an antiquated use of the term “robotically” from the days when we thought that robots, unlike humans, would have little or no ability to adapt to a changing environment. How quaint.]

Benjamin Franklin argued that DST would save energy (you knew I would get around to an energy angle), and many policymakers today use that argument to support the practice. But two excellent studies (here and here) have found essentially no impact on energy consumption. There are many reasons, but surely among them is that humans can adjust their schedules to a renaming of the hours.

But wait, you say, I can’t simply adjust my schedule, because it depends on the schedules of dozens or hundreds of other people — my family, my coworkers, the operator of my neighborhood coffee shop, whoever controls the start time at my kids’ schools, and so on — and we would all have to coordinate on the readjustment.

That’s right!! The numerical designation of hours is completely arbitrary, but it is still crucial for coordinating activities. That’s why there are real benefits to adjusting those designations as the amount of daylight changes over the seasons.

In December, when we get about 9½ hours of daylight in Berkeley, we need those hours to accomplish everything our busy lives demand, so most of us (reluctantly) start our days before the 7:21 AM sunrise. But in June, when we get 14½ hours of daylight, maintaining the same clock schedule would mean waking up hours after the 4:47 AM sunrise. We’d like to shift our schedule to start the day earlier in the summer, but not if stores are closed, our work hours are unchanged and our favorite morning radio/TV shows haven’t started yet.

DST coordinates a shift of all activities to start earlier on summer days when there is lots of sunlight, and later on winter days when we’d rather not leave for work — or have children leave for school — in darkness.

DST is certainly not costless. The shifts between DST and Standard time are jarring for many people. Research suggests there may be a rise in heart attacks, auto accidents and other health risks on the day after the changes, particularly the March spring forward, when we “lose an hour.” But sticking to a single time would also have adverse effects, effects we can’t measure very well today, because most of the country has been on DST since at least the 1960s.

Permanent DST would likely lead to more pedestrian accidents on winter mornings, as more adults and children venture out in darkness, with the sun rising as late as 8:21 AM in Berkeley. Permanent Standard time would likely reduce sleep hours on summer mornings as daylight pours in before 5 AM, with adverse effects on health, as well as leaving less evening time for outdoor exercise and leisure.
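Those clock times follow directly from the Berkeley sunrise times quoted above; here is a small sketch of the arithmetic (the solstice dates are illustrative, and permanent DST is modeled as simply shifting every clock reading one hour later than standard time):

```python
from datetime import datetime, timedelta

# Berkeley sunrise times quoted above, expressed in standard (non-DST) clock time.
winter_sunrise_std = datetime(2018, 12, 21, 7, 21)
summer_sunrise_std = datetime(2018, 6, 21, 4, 47)

# Permanent DST moves every clock reading one hour later than standard time,
# so the winter sun would not rise until well after 8 AM on the clock.
print((winter_sunrise_std + timedelta(hours=1)).strftime("%I:%M %p"))  # 08:21 AM

# Permanent standard time keeps the summer sunrise at its standard-time reading,
# with daylight pouring in well before 5 AM on the clock.
print(summer_sunrise_std.strftime("%I:%M %p"))  # 04:47 AM
```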

Plus, there would be increased costs due to coordination failures. More workplaces, restaurants, stores and schools would establish their own idiosyncratic summer and winter hours in order to adapt to the seasonal changes in daylight. (Many hardware stores and restaurants already do this, even with the current DST shifts.) This would create a multitude of problems for the workers, students and customers who also have to deal with other institutions that would not change their hours or would change them differently.

And that’s even when the changes are communicated well to the relevant people. Inevitably, there would also be an increase in workers, students and customers arriving to find closed doors, because they forgot to check operating-hour changes.

DST’s energy impact is likely minuscule, but there are weightier arguments for and against it. I suspect that most people prefer Standard time in the winter and DST in the summer, but hate the transition between them. Still, sticking to one or the other year-round would lead to inconvenient timing of many activities or, in many cases, would increase uncoordinated seasonal schedule changes, reducing the fundamental value of standardized time. Are the disruptive twice-yearly transitions worth it to maintain better coordination? That’s the debate we need to have about daylight saving time.

I tweet energy news stories/research/blogs most days @BorensteinS

Original post on Energy Institute at Haas blog

Enjoying music, beer, environment and history in Ireland

http://blogs.berkeley.edu/2018/09/07/enjoying-music-beer-environment-and-history-in-ireland/

I have always wanted to visit Ireland but had never found a reason to go. So, I was excited when Dr. Mary Ryan, from the Agriculture and Food Authority in Ireland, invited me to give a talk at the European Association of Agricultural Economists’ meeting in Galway in western Ireland. I had a fascinating week that included an introduction to Galway and western Ireland, the conference and two days in Dublin.

Galway

I expected Galway to be fun, but the place exceeded my expectations. I stayed in the Jury Hotel, which is at the start of Quay Street, the main pedestrian street of Galway. The street is a tourist’s dream: neat shops of Irish wool goods, elegant restaurants of all cuisines except Mexican (maybe we should open a Gordo’s). I was surprised by the quality of the food: great mixtures of seafood and local meats and greens, and of course an extensive selection of drinks served in exotic bars.

But for me the main charm is the music, much of it played in the street. It’s a New Orleans-like atmosphere, with wonderful bands and an interesting combination of instruments (one group had a saxophone, clarinet, banjo, drums and accordion) creating wonderful sounds. From “Irish Klezmer” to rock. Of course, there are many bluegrass-type acts, reminding me that it all started here. The bars compete with their decor, offerings and shows. Róisín Dubh gets the best performers and has been the go-to music venue in Galway for years. It has multiple gigs, from classy Celtic Music to noisy rock and stand up – and of course good beers and friendly crowds.

It seems that tourism is the main industry of western Ireland. The vistas are beautiful, with green fields and huge rocks. But the land is not very fertile; it is “rock farming.” Hardly any crops are grown, and while the green fields are beautiful, apparently the yield is not very high. Livestock graze on the patches of grass between the rocks, and the main products are sheep, beef and milk.

Western Ireland used to send waves of emigrants to America and other countries, but things have taken a turn and now the economy is more diverse, with tourism, the knowledge sector around the university and some agriculture. Ireland as a whole seems to be doing well economically. It recovered from the financial debacle of 2009-2012, and now you can see new buildings, an excellent train service, and roads that are in good shape.

The conference

The conference aimed to investigate how policies and extension activities can make agriculture more sustainable, in terms of environment, economics and the community. An interesting discussion on the evolution of agricultural policies in Europe suggested that for most of the second half of the 20th century, the main emphasis was on protecting farm prices. But this strategy backfired because it led to increased supply, which required even more price support. Towards the turn of the millennium, the emphasis shifted to “decoupled” policies that aimed to protect farmers’ income without affecting supply. That led to revised policies that reduced the volatility of incomes and supported activities that enhanced environmental quality. Now, EU policies tend to emphasize decentralization, giving individual members more flexibility to develop policies that address their specific situations — aiming to protect farmers’ income, environmental quality, and rural well-being in the different regions.

Professor David Pannell from the University of Western Australia spoke about the challenge of designing environmental policies — in particular, payment for ecological services (PES). Such programs pay farmers to adopt practices that improve environmental quality. They may include payments to stop using chemicals, to adopt tillage practices that reduce greenhouse gas emissions, and so on. Studies have found that tens of millions of dollars were spent on PES programs without significant impact because of poor design. The challenge is to reward environmental behavior that wouldn’t happen otherwise. It is crucial to quantify the outcomes at the micro level and to pay for the activities that maximize the environmental benefit obtained with a given amount of money. The selection of farmers can be done using a reverse auction: farmers ask for a payment for activities that improve environmental quality, and the agency chooses the activities with the highest ratios of environmental benefit per dollar.
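For concreteness, here is a minimal sketch of the reverse-auction logic described above. The bid data and budget are hypothetical, and the sketch only illustrates the selection rule (rank bids by environmental benefit per dollar and fund them until the budget is exhausted), not the design of any specific PES program.

```python
# Illustrative reverse auction: fund the bids with the highest
# environmental benefit per dollar until the budget runs out.
def select_bids(bids, budget):
    """bids: list of (farmer, requested_payment, estimated_benefit)."""
    ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    funded, spent = [], 0.0
    for farmer, payment, benefit in ranked:
        if spent + payment <= budget:
            funded.append(farmer)
            spent += payment
    return funded, spent

# Hypothetical bids: (farmer, payment asked, environmental benefit score)
bids = [("A", 10_000, 50), ("B", 4_000, 30), ("C", 8_000, 20)]
print(select_bids(bids, budget=15_000))  # (['B', 'A'], 14000.0) — best benefit per dollar funded first
```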

In my talk, I spoke about the changing role of extension in pursuing sustainable development. Agricultural modernization resulted mostly from the adoption of new technologies and practices by farmers. In the past, many of the innovations were outcomes of public research and the extension system, which includes specialists at universities and farm advisers in the field. They have adapted the technologies to farmers’ needs and have provided the knowledge that has led to the adoption of innovations.

Over time the role of the private sector has increased, both in developing new technologies and in introducing them to farmers through private dealerships. The agri-food system is constantly evolving. As farmers become more educated, they rely on specialized consultants to improve management of water and pests, and large buyers of agricultural output have private extension services guiding farmers and overseeing farmers’ production activities. In this new reality, the public extension agent is less important as a source of information to individual growers. Instead, public extension services have emphasized training the trainers.

Public extension agents are becoming wholesalers of knowledge, training the private consultants and professionals who are the retailers of knowledge and work with farmers. For a sample of US farmers, we estimated that 40 percent of their information is derived from the public sector and the rest from the private sector. But if we take into account that the private knowledge suppliers rely on public sources as well, the share of public knowledge rises to about 70 percent.
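To see how the 40 percent figure can rise to roughly 70 percent, here is the arithmetic under one illustrative assumption — that about half of the knowledge private suppliers pass along originates in public research (the talk reports the end result, not this intermediate figure):

```python
# Illustrative accounting of public vs. private knowledge shares.
direct_public_share = 0.40            # information farmers get directly from public sources
private_share = 1 - direct_public_share
public_origin_within_private = 0.50   # assumed: half of private advice is rooted in public research

total_public_share = direct_public_share + private_share * public_origin_within_private
print(f"{total_public_share:.0%}")    # 70%
```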

Professor Brendan Dunford presented a fascinating case study of the Burren Farming for Conservation Programme (BFCP), which he directs and which is introducing sustainable agricultural practices to the Burren region of western Ireland, not far from Galway. This is a region with a distinctive limestone landscape and is a refuge for many unique plants and animals. The farming mostly consists of grazing of grasslands by livestock. The BFCP has been supported by EU funding.

The program’s research effort identified sustainable practices for various conditions through interaction between researchers and the community. Public extension agents train private consultants, who guide the farmers in adopting more sustainable practices. A key element to induce adoption is PES. The program introduced new scrub control and water quality management activities, improved feeding systems and habitat restoration methods. The PES is a results-based payment system that relies on scoring various aspects of farmers’ performance in the field. The program enhanced agricultural productivity and profitability and improved environmental outcomes. The practices require collective action and interaction, and the program serves to strengthen the community. The program is part of a regional development effort to develop more viable and sustainable agriculture, combined with ecotourism, to strengthen the overall value to the community.

 

Dublin

It’s a train ride from Galway to Dublin. The train was fast and modern, and every time I take a train in Europe I realize that there are ways to make America greater. When I arrived in Dublin I took a Hop On Hop Off Dublin Bus Tour, and on this tour I learned a lot about Irish history of the last 200 years. The peak of Irish industrial might in the past was the Guinness brewery, which covers a large plot of land and has one of the tallest buildings in town. The Guinness family played an important role in Ireland when it was poor; the family financed public goods, including beautiful public parks.

But now Dublin seems like a city that recently had a facelift, with many modern buildings, some of them European headquarters of American companies. These companies were attracted by Ireland’s membership in the EU and its low tax rates. I visited the wonderful Little Museum of Dublin and the fascinating Glasnevin Cemetery to augment my education on Ireland’s history. This history has several themes that also appear in the histories of other countries, such as Israel and India. The terrible famine of 1845-1852, which killed more than a million people, was to a large extent a result of the cruelty and indifference of a foreign power, and it intensified the desire for independence. The 1916 Irish Rebellion against the British failed, and its leaders were executed, but the fight for independence continued under the leadership of Michael Collins. Unfortunately, a terrible civil war erupted in 1922, during which Michael Collins was killed. But in the end, the cause of independence won and Ireland became a democratic state.

Now Dublin is a wonderful city, with great parks, many pubs, singing and bands, and a lot of pride in the great Irish writers. I am looking forward to coming back with my wife! This was another trip that combined teaching, learning and exploring.

A North American road to the middle class

http://blogs.berkeley.edu/2018/10/22/a-north-american-road-to-the-middle-class/

In Buenos Aires, a man in Nikes walks in front of “NAFTA” graffiti. (Photo by Woody Wood.)

Co-authored by Representative Sander Levin (D-Michigan) 

Now that Canada has joined a revised North American Free Trade Agreement (NAFTA), renamed the US-Mexico-Canada Agreement (USMCA), we must not lose sight of the central problem that any new accord must address: the outsourcing of U.S. industrial jobs to Mexico’s system of suppressed wages. There have been efforts by some to dismiss or downgrade this issue and by others to focus on less central concerns relating to trade with Mexico. Any new agreement that fails to directly and forcefully address this issue of labor rights will only lock in the status quo for many more years to come.

For proof, you need look no further than San Luis Potosí, an emerging hub of industrial production in central Mexico. Eight hundred workers there make tires at a state-of-the-art Goodyear plant. But here’s where the promise for prosperity takes a detour around most Mexicans. These workers have a compliant union and a so-called “protection agreement.” They earn about $1.50 an hour for a 9-hour shift with anemic benefits, hardly a route to the middle class.

On April 24, they walked off the job because of dangerous conditions and a promised raise that wound up being only 50 cents a day. That’s right, 50 cents a day! Fifty-seven leaders were promptly fired. One of us (Rep. Levin) met with fired leaders last month in San Luis Potosí and heard their disturbing grievances.

Down the road, 1,500 workers at a Continental Tire plant have an all-too-rare independent democratic union. They earn four times the Goodyear wage—$6 an hour—for an 8-hour shift, with far more generous benefits.

Mexican workers today can’t make a free choice between these two alternatives. They risk being fired and blacklisted or far worse. The overwhelming majority of the tens of thousands of labor agreements in Mexico are “protection agreements”, which are signed by an organization controlled by the ruling party of the government and which workers have never seen, signed or voted on. The result isn’t simply low wages, but an entrenched industrial policy of suppressed wages.

Let’s not forget the flip side of suppressed wages is low purchasing power, which not only harms workers and their families, but throttles economic growth. Moreover, in a highly integrated economy, suppressed wages in San Luis Potosí push down on wages in Akron, Indianapolis and Long Beach, and provide a magnetic attraction for new investment.

An abandoned factory in Indianapolis. (Photo by Chris Ley)

NAFTA was supposed to change all this when it went into effect in 1994, but instead it supercharged the problem. Trade has soared since then, but labor rights promises evaporated before the ink on the agreement was dry. Instead, NAFTA locked in a dysfunctional labor system for the next quarter century that’s led to an $80 billion trade deficit with Mexico in the auto sector.

Mexican workers have produced more and earned less under NAFTA. Manufacturing productivity rose by 60 percent between 1994 and 2011—an impressive achievement—while real wages dropped 20 percent and continue to slide. This was not necessary to compete in this key sector with China, but rather to lure industry from the U.S. to Mexico.

Mexicans overwhelmingly elected a reform-minded government this July that offers the promise of restoring rights for Mexican workers, thereby helping to protect conditions for workers in the U.S. and Canada. The new president, Andrés Manuel López Obrador, doesn’t take office until Dec. 1, but a new Mexican Senate, which his party dominates, has already been seated. On Sept. 20, the new Mexican Senate unanimously ratified ILO Convention 98 on the “Right to Organize and Collective Bargaining”, which the International Trade Union Confederation (ITUC) has hailed as a “major victory for Mexican Workers.”

Although this move is a positive sign, much remains to be done. Mexico passed a constitutional amendment last year outlining important new rights for workers but the critical implementing legislation went backwards in the previous Senate. New legislation has yet to be drafted in the new Senate and what will happen once this takes place is unknown. While intentions are clearly good, absent clear benchmarks and effective enforcement, large elements of the status quo once again could be locked in for decades, especially given the buzz saw of opposition to real change from entrenched interests.

It is therefore imperative that any new NAFTA agreement provide clearly for the prompt termination of the tens of thousands of protection contracts now in place in Mexico starting with the critical auto sector, ensure that all workers can have real representation at the bargaining table, and provide a transparent, enforceable process for carrying out these vital objectives.

The new agreement needs to lay the basis for a growing continental middle class with independent unions vital for vibrant democratic societies across North America. History has shown that an important way to protect U.S. workers is to protect Mexican workers and the other way around. We need a North American road to the middle class, not expanded exit ramps.

This article originally appeared as an op-ed in The Hill on September 28, 2018.

How big a problem is the zero lower bound on interest rates?

https://www.brookings.edu/blog/ben-bernanke/2017/04/12/how-big-a-problem-is-the-zero-lower-bound-on-interest-rates/

If inflation is too low or unemployment too high, the Fed normally responds by pushing down short-term interest rates to boost spending. However, the scope for rate cuts is  limited by the fact that interest rates cannot fall (much) below zero, as people always have the option of holding cash, which pays zero interest, rather than negative-yielding assets. [1] When short-term interest rates reach zero, further monetary easing becomes difficult and may require unconventional monetary policy, such as large-scale asset purchases (quantitative easing).

Before 2008, most economists viewed this zero lower bound (ZLB) on short-term interest rates as unlikely to be relevant very often and thus not a serious constraint on monetary policy. (Japan had been dealing with the ZLB for several decades but was seen as a special case.) However, in 2008 the Fed responded to the worsening economic crisis by cutting its policy rate nearly to zero, where it remained until late 2015. Although the Fed was able to further ease monetary policy after 2008 through unconventional methods, the ZLB constraint greatly complicated the Fed’s task.

How big a problem is the ZLB likely to be in the future? A paper at the recent Brookings Papers on Economic Activity conference, by Federal Reserve Board economists Michael Kiley and John Roberts—of which I was a formal discussant—attempted to answer this question by simulating econometric models of the U.S. economy, including the model that serves as the basis for most Fed forecasting and policy analysis. Kiley and Roberts (KR) concluded that, under some assumptions about the economic environment and the conduct of monetary policy, short-term interest rates could be at or very close to zero (that is, the ZLB could be binding) as much as 30-40 percent of the time—a much higher proportion than found in most earlier studies. If correct, their result reinforces the need for fresh thinking about how to maintain the effectiveness of monetary policy in the future, a point recently emphasized by San Francisco Fed president John Williams and others (and with which, I should emphasize, I very much agree).

In this post I discuss the KR result but also point out a puzzle. If in the future the ZLB will often prevent the Fed from providing sufficient stimulus, then, on average, inflation should be expected to fall short of the Fed’s 2 percent target—a point shown clearly by KR’s simulations. The puzzle is that neither market participants nor professional forecasters appear to expect such an inflation shortfall. Why not? There are various possibilities, but it could be that markets and forecasters simply have confidence that the Fed will develop policy approaches to overcome the ZLB problem. It will be up to the Fed to prove worthy of that confidence.

The frequency and severity of ZLB episodes

As I’ve noted, KR’s research suggests that periods during which the short-term interest rate is at or close to zero may be frequent in the future. They also find that these episodes would typically last several years on average and (because monetary policy is hobbled during such periods) result in poor economic performance. Two key assumptions underlie these conclusions.

First is the presumption that the current, historically low level of interest rates will persist, even when the economy is once again operating at normal levels and monetary policy has returned to a more-neutral setting. As another paper at the Brookings conference examined in some detail, real (inflation-adjusted) interest rates have been declining for decades, for reasons including slower economic growth; an excess of global savings relative to attractive investment opportunities; an increased demand for safe, liquid assets; and other factors largely out of the control of monetary policy. If the normal real interest rate is currently about 1 percent—a reasonable guess—and if inflation is expected on average to be close to the Fed’s target of 2 percent, then the nominal interest rate will be around 3 percent when the economy is at full employment with price stability. Naturally, if interest rates are typically about 3 percent, then the Fed has much less room to cut than when rates are 6 percent or more, as they were during much of the 1990s, for example. Indeed, the KR simulations show that the expected frequency of ZLB episodes rises quite sharply when normal interest rates fall from 5 or 6 percent to 3 percent.

The second factor determining the frequency and severity of ZLB episodes in the KR simulations is the Fed’s choice of monetary policies. This important point is worth repeating: The frequency and severity of ZLB episodes are not given, but depend on how the Fed manages monetary policy. In particular, KR’s baseline results assume that the Fed follows one of two simple policy rules: one estimated from the Fed’s past behavior, and the second determined by a standard Taylor rule, which relates the Fed’s short-term interest rate target to the deviation of inflation from the Fed’s 2 percent target and to how far the economy is from full employment. Using the Fed’s principal forecasting model, KR find that in the future the U.S. economy will be at the ZLB 32 percent of the time under the estimated monetary policy rule, and 38 percent of the time under the Taylor-rule policy. Because of the frequent encounters with the ZLB, the simulated economic outcomes are not very good: Under either policy rule, on average inflation is about 1.2 percent (well below the Fed’s 2 percent target) and output is more than 1 percent below its potential.
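
To make the mechanics concrete, here is a minimal sketch of the kind of exercise involved: simulate a toy economy under a standard Taylor (1993) rule subject to a zero lower bound and count how often the policy rate is pinned at zero. This is not KR’s model; the dynamics, parameter values, and shock process below are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch: a toy economy under a Taylor-type rule with a zero lower
# bound, used to count the share of periods spent at the ZLB. All dynamics
# and parameter values are illustrative assumptions, not KR's models.

rng = np.random.default_rng(0)

r_star = 1.0     # assumed neutral real interest rate (percent)
pi_star = 2.0    # inflation target (percent)
T = 200_000      # number of simulated periods

gap, pi = 0.0, pi_star   # output gap (percent of potential), inflation (percent)
zlb_periods = 0

for _ in range(T):
    # Standard Taylor rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
    # truncated at zero to impose the lower bound.
    i = max(r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap, 0.0)
    zlb_periods += (i == 0.0)

    # Crude demand and Phillips-curve dynamics: a real rate above neutral
    # depresses the output gap; the gap pushes on inflation, which stays
    # loosely anchored to the 2 percent target.
    gap = 0.8 * gap - 0.5 * (i - pi - r_star) + rng.normal(scale=1.0)
    pi = 0.9 * (pi + 0.1 * gap) + 0.1 * pi_star + rng.normal(scale=0.2)

print(f"Share of periods at the ZLB: {zlb_periods / T:.1%}")
```

Lowering the assumed neutral rate `r_star` in this sketch raises the simulated ZLB share, which is the qualitative point of the KR exercise, although the particular percentages depend entirely on the toy assumptions.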

What do markets and professional forecasters think?

Are these results plausible? A specific prediction of the KR analysis, that in the future frequent contact with the ZLB will keep inflation well below the Fed’s 2 percent target, can be compared to the expectations of market participants and of professional forecasters.

These comparisons do not generally support KR’s worst-case scenarios. For example, measures of inflation expectations based on comparing the returns on inflation-adjusted and ordinary Treasury securities suggest that market participants see inflation remaining close to the Fed’s 2 percent target in the long run.[2] The prices of derivatives that depend on long-run inflation outcomes also imply that market expectations of inflation are close to 2 percent. To illustrate the latter point, Figure 1 shows inflation expectations as derived from zero-coupon inflation swaps. (See here for an explanation of these instruments and a discussion of their properties.)

[Figure 1: Long-horizon inflation expectations implied by zero-coupon inflation swaps]

Figure 1 suggests that market participants expect inflation to average about 2-1/4 percent over long horizons, up to thirty years. These expectations relate to inflation as measured by the consumer price index, which tends to be a bit higher than inflation measured by the index for personal consumption expenditures, the inflation rate targeted by the Fed. So Figure 1 seems quite consistent with a market expectation of 2 percent for the Fed’s targeted inflation rate over very long horizons.
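
For readers who want the arithmetic behind that reading of Figure 1, here is a small sketch; the swap quote and the CPI-PCE wedge used below are illustrative assumptions, not the actual market quotes plotted in the figure.

```python
# Rough arithmetic for reading a zero-coupon inflation swap quote.
# The quote and the CPI-PCE wedge below are assumed values for illustration.

swap_rate = 0.0225      # fixed rate on a 30-year zero-coupon CPI inflation swap
horizon_years = 30
cpi_pce_wedge = 0.0025  # assumed typical gap between CPI and PCE inflation

# In a zero-coupon inflation swap, the fixed leg pays (1 + swap_rate)**horizon - 1
# at maturity and the floating leg pays realized cumulative CPI inflation, so the
# quoted fixed rate is (approximately, ignoring risk premia) the expected average
# annual CPI inflation rate over the horizon.
expected_cpi = swap_rate
implied_pce = expected_cpi - cpi_pce_wedge
cumulative_cpi_rise = (1 + swap_rate) ** horizon_years - 1

print(f"Implied average CPI inflation: {expected_cpi:.2%} per year")
print(f"Implied average PCE inflation: {implied_pce:.2%} per year")
print(f"Cumulative CPI rise priced over {horizon_years} years: {cumulative_cpi_rise:.0%}")
```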

Professional forecasters also see long-run inflation close to the Fed’s target. For example, the Survey of Professional Forecasters projects that the inflation rate targeted by the Fed will average 2.00 percent over the period 2016-2025, precisely equal to target. Similarly, primary dealers surveyed by the Federal Reserve Bank of New York see the inflation rate targeted by the Fed equaling 2.00 percent in the “longer run.” The same group also sees CPI inflation close to 2-1/4 percent over the next five years and during the five years after that, consistent with the inflation swaps data (Figure 1) and with the Fed’s preferred inflation measure remaining close to 2 percent. Interestingly, these respondents do not see the ZLB as irrelevant to policy; at the median, they see a 20 percent chance that the United States will be back at the ZLB by 2019.

Why have inflation expectations held up?

That longer-term inflation expectations appear relatively well-anchored at 2 percent seems inconsistent with the prediction that interest rates will be at the ZLB as much as 30 to 40 percent of the time in the future, preventing the Fed from reaching its inflation target during those times.[3]

How to resolve this contradiction? I don’t think there’s anything wrong with how KR conducted their analyses. Remember, though, their conclusion assumes that the Fed will continue to manage monetary policy using pre-crisis approaches, essentially ignoring the challenges of the zero lower bound. That’s unrealistic. Indeed, following the crisis the Fed addressed the ZLB constraint with a number of alternative strategies, including large-scale asset purchases (quantitative easing) and forward guidance to markets about the future path of interest rates. These policy innovations did not fully overcome the ZLB problem. Nevertheless, they may help explain why the unemployment rate and other measures of cyclical slack fell about as quickly in the recent recovery as in earlier postwar recoveries—a finding of another paper at the Brookings conference, by Fernald, Hall, Stock, and Watson—and also why core PCE inflation fell by less than expected given the severity of the recession.

Looking forward, it appears that market participants and professional forecasters believe that the Fed, perhaps in conjunction with fiscal policymakers, will “do what it takes” to mitigate the adverse effects of future encounters with the ZLB. That confidence is encouraging, but it should not be taken as license for policymakers to rest on their laurels. To the contrary, Fed and fiscal policymakers should think carefully about how best to adapt their frameworks and policy tools to reduce the frequency and severity of future ZLB episodes. In tomorrow’s post I’ll discuss some possible approaches.


Temporary price-level targeting: An alternative framework for monetary policy

https://www.brookings.edu/blog/ben-bernanke/2017/10/12/temporary-price-level-targeting-an-alternative-framework-for-monetary-policy/

Low nominal interest rates, low inflation, and slow economic growth pose challenges to central bankers. In particular, with estimates of the long-run equilibrium level of the real interest rate quite low, the next recession may occur at a time when the Fed has little room to cut short-term rates. As I have written previously and recent research has explored, problems associated with the zero lower bound (ZLB) on interest rates could be severe and enduring. While the Fed has other useful policies in its toolkit such as quantitative easing and forward guidance, I am not confident that the current monetary toolbox would prove sufficient to address a sharp downturn. I am therefore sympathetic to the view of San Francisco Fed President John Williams and others that we should be thinking now about adjusting the framework in which monetary policy is conducted, to provide more policy “space” in the future. In a paper presented at the Peterson Institute for International Economics, I propose an option for an alternative monetary framework that I call a temporary price-level target—temporary, because it would apply only at times when short-term interest rates are at or very near zero.

To explain my proposal, I’ll begin by briefly discussing two other ideas for changing the monetary framework:  raising the Fed’s inflation target above the current 2 percent level, and instituting a price-level target that would operate at all times.  (See my paper for more details.)

A Higher Inflation Target

One way to increase the scope for monetary policy is to retain the Fed’s current focus on hitting a targeted value of inflation, but to raise the target to, say, 3 or 4 percent.  If credible, this change should lead to a corresponding increase in the average level of nominal interest rates, which in turn would give the Fed more space to cut rates in a downturn. This approach has the advantage of being straightforward and relatively easy to communicate and explain, and it would allow the Fed to stay within its established inflation-targeting framework.  However, the approach also has a number of notable shortcomings (as I have discussed here and here).

One obvious problem is that a permanent increase in inflation would be highly unpopular with the public.  The unpopularity of inflation may be due to reasons that economists find unpersuasive, such as the tendency of people to focus on inflation’s effects on the prices of things they buy but not on the things they sell, including their own labor.  But there are also real (if hard to quantify) problems associated with higher inflation, such as the greater difficulty of long-term economic planning or of interpreting price signals in markets.  In any case, it’s not a coincidence that the promotion of price stability is a key part of the mandate of the Fed and most other central banks. A higher inflation target would therefore invite a political backlash, perhaps even a legal challenge.

A more subtle but equally important point is that we know, from the insightful theoretical work of Paul Krugman, Michael Woodford and Gauti Eggertsson, and others, that raising the inflation target is an inefficient approach to dealing with the ZLB. Under the theoretically optimal approach, inflation should rise temporarily following a severe downturn in which monetary policy is constrained by the ZLB.  The reason for the temporary increase is that, in the optimal framework, policymakers promise to hold rates “lower for longer” when the ZLB is binding, in order to make up for the fact that the ZLB is preventing current short-term rates from falling as far as would be ideal.  The promise of “lower for longer,” if credible, should ease financial conditions before and during the ZLB period, reducing the adverse effects on output and employment but subsequently resulting in a temporary increase in inflation. As Woodford has pointed out (pp. 64-73), raising the inflation target is a suboptimal response to the ZLB problem in that it forces society to bear the costs of higher inflation at all times, instead of only transitorily after periods at the ZLB. Moreover, a once-and-for-all increase in the inflation target does not take into account that, under the theoretically optimal policy, the vigor of the policy response (and thus the magnitude of the temporary increase in inflation) should be calibrated to the duration of the ZLB episode and the severity of the economic downturn.

Price-level Targeting

An alternative monetary framework, discussed favorably by President Williams and by a number of others (see here and here), is price-level targeting.  A price-level-targeting central bank tries to keep the level of prices on a steady growth path, rising by (say) 2 percent per year; in other words, a price-level targeter tries to keep the very-long-run average inflation rate at 2 percent.

The principal difference between price-level targeting and conventional inflation targeting is the treatment of “bygones.”  An inflation-targeter can “look through” a temporary change in the inflation rate so long as inflation returns to target after a time.  By ignoring past misses of the target, an inflation targeter lets “bygones be bygones.”  A price-level targeter, by contrast, commits to reversing temporary deviations of inflation from target, by following a temporary surge in inflation with a period of inflation below target; and an episode of low inflation with a period of inflation above target.  Both inflation targeters and price-level targeters can be “flexible,” in that they can take output and employment considerations into account in determining the speed at which they return to the inflation or price-level target.  Throughout this post I am considering only “flexible” variants of policy frameworks. These variants are both closer to the optimal strategies derived in economic models and most consistent with the Fed’s dual mandate, which instructs it to pursue maximum employment as well as price stability.

A price-level target has at least two advantages over raising the inflation target.  The first is that price-level targeting is consistent with low average inflation (say, 2 percent) over time and thus with the price stability mandate. The second advantage is that price-level targeting has the desirable “lower for longer” or “make-up” feature of the theoretically optimal monetary policy.  Under price-level targeting, there is automatic compensation by policymakers for periods in which the ZLB prevents monetary policy from providing adequate stimulus. Specifically, periods in which inflation is below target (as is likely to happen when interest rates are stuck at the ZLB) must be followed by periods in which the central bank shoots for inflation above target, with the overshoot depending (as it optimally should) on the severity of the episode and the cumulative shortfall in monetary easing. If the public understands and expects the central bank to follow the “lower-for-longer” rate-setting strategy, then the expectation of easier policy and more-rapid growth in the future should mitigate declines in output and inflation during the period in which the ZLB is binding, and indeed reduce the frequency with which the ZLB binds at all.
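
A small numerical sketch may help fix ideas about the make-up feature; the inflation path and make-up window below are hypothetical numbers chosen for illustration, not a policy prescription.

```python
# Illustrative "make-up" arithmetic under price-level targeting.
# All numbers are hypothetical.

target = 0.02                       # 2 percent price-level growth path
realized = [0.010, 0.012, 0.015]    # three years of below-target inflation at the ZLB
makeup_years = 3                    # years allowed for returning to the path

# Where the price level should be on the 2 percent path vs. where it actually is.
path_level = (1 + target) ** len(realized)
actual_level = 1.0
for pi in realized:
    actual_level *= 1 + pi
shortfall = path_level / actual_level - 1

# Inflation needed over the make-up window so the price level rejoins the path.
# (An inflation targeter would instead let bygones be bygones and aim for 2 percent.)
required = ((path_level * (1 + target) ** makeup_years) / actual_level) ** (1 / makeup_years) - 1

print(f"Price-level shortfall after the ZLB episode: {shortfall:.1%}")
print(f"Inflation needed for the next {makeup_years} years to rejoin the path: {required:.2%} per year")
```

In this made-up example the central bank must deliver inflation modestly above 2 percent for a few years, with the size of the overshoot depending on how large the shortfall was and how quickly the bank chooses to close it.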

For these reasons, adopting a price-level target seems preferable to raising the inflation target. However, this strategy is not without its own drawbacks.  For one, it would amount to a significant change in the Fed’s policy framework and reaction function, and it is hard to judge how difficult it would be to get the public and markets to understand the new approach. In particular, switching from the inflation concept to the price-level concept might require considerable education and explanation by policymakers. Another drawback is that the “bygones are not bygones” aspect of this approach is a two-edged sword.  Under price-level targeting, the central bank cannot “look through” supply shocks that temporarily drive up inflation, but must commit to tightening to reverse the effects of the shock on the price level.[1] Given that such a process could be painful and have adverse effects on employment and output, the Fed’s commitment to this policy might not be fully credible.

Temporary Price-Level Targeting

Is there a compromise approach? One possibility is to apply a price-level target and the associated “lower-for-longer” principle only to periods around ZLB episodes, retaining the inflation-targeting framework and the current 2 percent target at other times.  As with the ordinary price-level target, this approach would implement the lower-for-longer or “make-up” strategy at the ZLB, which—if understood and anticipated by the public—should serve to make encounters with the ZLB shorter, less severe, and less frequent.  In this respect, a temporary price-level target would be similar to an ordinary price-level target, which applies at all times.  However, a temporary price-level target has two potential advantages.

First, a temporary price-level target would not require a major shift away from the existing policy framework:  When interest rates are away from the ZLB, the current inflation-targeting framework would remain in place.  And at the ZLB, what I am calling here temporary price-level targeting could be explained and communicated as part of an overall inflation-targeting strategy, as it amounts to targeting the average inflation rate over the period in which the ZLB is binding.  Thus, communication could remain entirely in terms of inflation goals, a concept with which the public and market participants are already familiar.

Second, a temporary price-level target, unlike an ordinary price-level target, would not require the Fed to tighten policy to reverse shocks that temporarily drive up inflation when rates are away from the ZLB.  Instead, following the inflation-targeter’s approach, the Fed would simply guide inflation back to target over time.  Moreover, because the Fed would be targeting 2 percent inflation in both ZLB and non-ZLB periods, inflation over long periods should average around 2 percent.

To be more concrete on how the temporary price-level target would be communicated, suppose that, at some moment when the economy is away from the ZLB, the Fed were to make an announcement something like the following:

  1. The Federal Open Market Committee (FOMC) has determined that it will retain its symmetric inflation target of 2 percent. The FOMC will also continue to pursue its balanced approach to price stability and maximum employment.  In particular, the speed at which the FOMC aims to return inflation to target will depend on the state of the labor market and the outlook for the economy.
  2. However, the FOMC recognizes that, at times, the zero lower bound on the federal funds rate may prevent it from reaching its inflation and employment goals, even with the use of unconventional monetary tools. The Committee therefore agrees that, in future situations in which the funds rate is at or near zero, a necessary condition for raising the funds rate will be that average inflation since the date at which the federal funds rate first hit zero be at least 2 percent.  Beyond this necessary condition, in deciding whether to raise the funds rate from zero, the Committee will consider the outlook for the labor market and whether the return of inflation to target appears sustainable.

The charts below serve to illustrate this policy as might have been applied to the most recent ZLB episode if, hypothetically, temporary price-level targeting had been in effect. To be clear, nothing in this blog post or my paper should be taken as a commentary on current Fed policy.  I am considering instead a counterfactual world in which the announcement above had been made, and internalized by markets, prior to when the short-term rate hit zero in 2008.

Figure 1 shows the behavior of (core PCE) inflation since 2008 Q4, the quarter in which the federal funds rate effectively reached zero and thus marked the beginning of the ZLB episode. Since 2008, inflation has been below the 2 percent inflation target most of the time.

[Figure 1: Core PCE inflation since 2008 Q4]

The effect of this persistent undershoot of inflation relative to the 2 percent target has been a persistent undershoot of the overall level of prices, relative to trend. Figure 2 shows recent values of the (core PCE) price level relative to a 2 percent trend starting in 2008 Q4. As the figure shows, the price level is lower than it would have been had inflation been at the Fed’s 2 percent inflation target over the entire period.

[Figure 2: Core PCE price level relative to a 2 percent trend beginning in 2008 Q4]

If a temporary price-level target had been in place, the Fed would have sought to “make up” for this cumulative shortfall in inflation. The necessary condition outlined in paragraph (2) of the framework, that average inflation over the ZLB period be at least 2 percent, is equivalent to the price level (light blue line) returning to its trend (dark blue line).  A period of inflation exceeding 2 percent would be necessary to satisfy that criterion, thereby compensating for the previous shortfall in inflation during the ZLB period (i.e. the slope of the light blue line would need to increase in order to converge with the dark blue line).  The result would be a lower-for-longer rates policy, which would be communicated and internalized by markets in advance.  The easier financial conditions that would have resulted could have hastened the desired outcomes of economic recovery and the return of inflation to target.  Notably, this framework would obviate the need for (and be superior to) the use of ad hoc forward guidance about rate policy.
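
To make the necessary condition concrete, here is a minimal sketch of how it could be checked; the inflation paths are made up for illustration, and the function is a hypothetical helper, not part of any actual Fed procedure.

```python
# Hypothetical check of the liftoff condition in paragraph (2): average
# inflation since the funds rate hit zero must be at least 2 percent, which
# is equivalent to the (core) price level being back on its 2 percent trend.

TARGET = 0.02  # annual inflation target

def liftoff_condition_met(annual_core_inflation):
    """annual_core_inflation: core PCE inflation in each year since the funds
    rate first hit zero (made-up data). Returns True when cumulative inflation
    over the episode has caught up with the 2 percent trend."""
    price_level = 1.0
    for pi in annual_core_inflation:
        price_level *= 1 + pi
    trend_level = (1 + TARGET) ** len(annual_core_inflation)
    return price_level >= trend_level

# Four years of 1.3 percent inflation, then two years at 3 percent: the
# earlier shortfall has not yet been made up, so the condition fails.
print(liftoff_condition_met([0.013] * 4 + [0.030] * 2))  # False

# With four years at 3 percent, the price level is back on its trend.
print(liftoff_condition_met([0.013] * 4 + [0.030] * 4))  # True
```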

Importantly, under my proposal and as suggested by the mock FOMC statement above, meeting the average-inflation criterion is a necessary but not sufficient condition to raise rates from the ZLB.  First, monetary policymakers would want to be sure that the average inflation condition is being met on a sustainable basis and not as the result of a transitory shock or measurement error. Expressing the condition in terms of core rather than headline inflation, as in the figures above, would help on that score. Second, consistent with the concept of “flexible” targeting, policymakers would also want to factor in real economic conditions such as employment and output in deciding whether it was time to raise rates.

In sum, a temporary price-level target, invoked only during ZLB episodes, appears to have many of the benefits of ordinary price-level targeting. It would preserve the commitment to price stability.  Importantly, it would create the expectation among market participants that ZLB episodes will lead to “lower-for-longer” or “make-up” rate policies, which would ease financial conditions and help mitigate the frequency and severity of such episodes.  Unlike an ordinary price-level target, however, the temporary variant could be folded into existing inflation-targeting regimes in a straightforward way, minimizing the need to change longstanding policy frameworks and communications practices.  In particular, central bank communication could remain focused on inflation goals. Finally, in contrast to an ordinary price-level target, the proposed approach would allow policymakers to continue to “look through” temporary inflation shocks that occur when rates are away from the ZLB.


[1] This problem would be mitigated but not eliminated if the price-level target were defined in terms of core inflation, excluding volatile food and energy prices.

The housing bubble, the credit crunch, and the Great Recession: A reply to Paul Krugman

https://www.brookings.edu/blog/ben-bernanke/2018/09/21/the-housing-bubble-the-credit-crunch-and-the-great-recession-reply-to-paul-krugman/

Why was the Great Recession so deep? Certainly, the collapse of the housing bubble was the key precipitating event; falling house prices depressed consumer wealth and spending while leading to sharp reductions in residential construction. However, as I argue in a new paper and blog post, the most damaging aspect of the unwinding bubble was that it ultimately touched off a broad-based financial panic, including runs on wholesale funding and indiscriminate fire sales of even non-mortgage credit. The panic in turn choked off credit supply, pushing the economy into a much more severe decline than otherwise would have occurred. My evidence for this claim is that indicators of panic, including the sharp increases in funding costs for financial institutions and the spiking yields on securitized non-mortgage assets, are strikingly better predictors of the timing and depth of the recession than are housing-related variables such as house prices, market pricing of subprime mortgages, or mortgage delinquency rates.

In a recent post, Paul Krugman gave his take on the causes of the Great Recession. His inclination, contrary to my findings, is to emphasize the effects of the housing bust on aggregate demand rather than the financial panic as the source of the downturn. In a follow-up response to my paper, Krugman asks for evidence on the transmission mechanism. Specifically, if the financial disruption was the major cause of the recession, how were its effects reflected in the major components of GDP, such as consumption and investment? In this post I’ll offer a few thoughts on Paul’s questions.

I’ll start with some observations on the transmission mechanism. Certainly, a reduction in credit supply will affect normally credit-sensitive components of spending, like capital investment, as Krugman notes. But a broad-based and violent financial panic, like the one that gripped the country a decade ago, will also affect the behavior of even firms and households not currently seeking new loans. For example, in a panic, any firm that relies on credit to finance its ongoing operations (such as major corporations that rely on commercial paper) or that might need credit in the near future will face strong incentives to conserve cash and increase precautionary savings. For many firms, the fastest way to cut costs is to lay off workers, rather than to hoard labor and build inventories in the face of slowing demand, as they might normally do. That appears to be what happened: Job losses, which averaged 120,000 per month from the beginning of the recession in December 2007 through August 2008, accelerated to 670,000 per month from September 2008 through March 2009, the period of most intense panic. The unemployment rate, which—despite the fact that house prices had been falling for more than two years — was still around 6 percent in September 2008, shot up almost 4 percentage points over the next year. These are not small effects. Workers, in turn, having been laid off or knowing that they might be, and expecting a lack of access to credit, would likewise have had every incentive to reduce spending and to try to build up cash buffers. Indeed, research has found significant increases in precautionary savings during the financial crisis for both households and firms. In Krugman’s preferred IS-LM terminology, the panic induced a large downward shift in the IS curve.

Although isolating the effects of the credit shock on individual spending components is difficult, it’s nevertheless interesting to follow Krugman and examine how key components of GDP behaved during the recession. The chart below shows real residential investment and real GDP (all data below are quarterly, at annualized growth rates) for the period 2006-2009. As Krugman points out, there were large declines in residential investment in 2006-2007, prior to the major disruptions in financial markets. That’s consistent with his “housing bust” theory of the recession. However, note two points.  First, despite the decline in residential investment in 2006-07, real GDP growth remained positive until the first quarter of 2008 and declined only very slightly over the first three quarters of that year, giving little hint of what was to come. However, after the crisis intensified in August/September 2008, GDP fell at annual rates of 8.4 percent in the fourth quarter of 2008 and 4.4 percent in the first quarter of 2009. That precipitous decline ended and began to reverse only as the panic was controlled in the spring of 2009.

Second, the pattern of residential investment was itself evidently affected by the panic, accelerating its pace of decline to a remarkable -34 percent at an annual rate in the fourth quarter of 2008 and -33 percent in the first quarter of 2009, before stabilizing in the second half of 2009 as the panic subsided. That the panic would affect the pace of homebuilding makes intuitive sense, given the reliance on credit of both construction companies and homebuyers. Indeed, my research finds that housing-related indicators like house prices and subprime mortgage valuations predict housing starts reasonably well through 2007, but that after that, indicators of financial panic, including the yields on non-mortgage credit, are actually better predictors of housing activity. In short, absent the panic, the pace and extent of the decline in the housing sector might itself not have been as severe.

[Figure: Housing and output, 2006-2009 (quarterly annualized growth of real residential investment and real GDP)]
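
The growth rates quoted above and in the charts are quarterly changes expressed at annual rates; for readers reproducing the figures, here is a quick sketch of that conversion, using hypothetical index values rather than the actual GDP data.

```python
# Converting a one-quarter change into the annualized growth rates quoted in
# the text (compound the quarterly change over four quarters). The index
# values below are hypothetical.

def annualized_growth(level_prev, level_curr):
    """Quarterly level change expressed as an annualized percent growth rate."""
    return ((level_curr / level_prev) ** 4 - 1) * 100

# A roughly 2.2 percent one-quarter decline corresponds to an annualized rate
# of about -8.5 percent, in the ballpark of the 2008 Q4 figure cited above.
print(round(annualized_growth(100.0, 97.8), 1))  # -8.5
```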

The next chart shows the growth of nonresidential fixed business investment, whose behavior Krugman also cites in favor of the housing bust view. But here again, the timing is key to the interpretation. Unlike residential investment, which began contracting early in 2006, business investment did not start to decline until well after the bursting of the housing bubble. From the start of 2006 through the third quarter of 2007, as house prices fell, nonresidential fixed investment growth averaged almost 8 percent, in line with or even above pre-crisis norms. From the beginning of the recession in the fourth quarter of 2007 to the third quarter of 2008, average investment growth was slow but positive. However, from the fourth quarter of 2008, when the panic became intense, through the end of the recession in mid-2009, the rate of business investment growth fell precipitously, to an average annualized rate of -20 percent. Essentially all the decline of business investment took place during the period of most intense panic.

[Figure: Growth of real nonresidential fixed business investment]

The next two charts show the growth of (1) real personal consumption expenditures for durable goods and (2) the components of the US trade balance. As with business investment, the worst declines in these series took place during the period of extreme panic.  In particular, consumer durables spending remained healthy throughout 2006 and 2007, despite declining house prices and home construction. However, in the fourth quarter of 2008, durables spending declined at a 26 percent annual rate, recovering in early 2009 as the panic ended. Likewise, over the fourth quarter of 2008 and the first quarter of 2009, real exports and real imports both fell at average annualized rates of close to 24 percent, as global trade contracted sharply.

Because both exports and imports fell, the net contribution of trade to U.S. aggregate demand was modest. The behavior of the components of trade shown in the figure is nevertheless interesting for this discussion. Trade is particularly credit-sensitive, because importers and exporters rely on trade finance and because a significant portion of trade is in durables, a credit-sensitive category. The collapse of trade in late 2008 and early 2009 is therefore a reasonably good signal of disruptions in credit supply. Likewise, improvements in trade in 2009 likely reflected policies that ended the panic. Expanding on the international theme, note also that the global financial crisis can explain, in a way that the U.S. housing bubble cannot, the depth and synchronization of the worldwide recession of 2008-2009. (See for example, recent analysis by the Bank of England.)

[Figure: Growth of real personal consumption expenditures, durable goods]

[Figure: Growth of real exports and real imports (components of the U.S. trade balance)]

To be clear, none of this disputes that the housing bubble and its unwinding was an essential cause of the recession. Besides their direct effects on demand, the problems in housing and mortgage markets provided the spark that ignited the panic; and the slow recovery from the initial downturn likely was due in part to deleveraging by households and firms exposed to the housing sector.[1] Indeed, my own past research argues that factors related to balance sheet deleveraging and the so-called financial accelerator can have important effects on the pace of economic growth. I do claim, though, that if the financial system had been strong enough to absorb the collapse of the housing bubble without falling into panic, the Great Recession would have been significantly less great. By the same token, if the panic had not been contained by a forceful government response, the economic costs would have been much greater.

One more piece of evidence on this point comes from contemporaneous macroeconomic forecasts. Forecasts made in 2008, by both government agencies and private forecasters, typically incorporated severe declines in house prices and construction among their assumptions but still did not anticipate the severity of the downturn. For example, as discussed in a recent paper by Don Kohn and Brian Sack, the Fed staff’s August 2008 Greenbook report included economic forecasts under a “severe financial stress scenario.” Among the assumptions of this conditional forecast was that house prices would decline by an additional 10 percent relative to baseline forecasts (which had already incorporated significant declines). As a result, the assumed declines in house prices in this projection were close to those that actually would occur. However, even with these assumptions, Fed economists predicted that the unemployment rate would peak at only 6.7 percent, compared to its actual peak of around 10 percent in the fall of 2009. This conditional forecast would have taken full account of a sharp expected decline in housing construction and the wealth effects of falling house prices. The fact that forecasts still badly underestimated the rise in unemployment and the depth of the downturn suggests that some other factor—the financial panic, in my view—played an important role in the contraction.

The failure of conventional economic models to forecast the effects of the financial panic relates to another point made by Krugman in a more recent post, in which he argues that the experience of the crisis and the Great Recession validates traditional macroeconomics. On many counts—such as the prediction that the Fed’s monetary policies would not be inflationary—I did and still do agree with him. However, as I discuss in my paper, current macro models still do not adequately account for the effects of credit-market conditions or financial instability on real activity. It’s an area where much more work is needed.

*Sage Belz and Michael Ng contributed to this post.