Month: November 2017

Austerity and household debt: a macro link?

For some time now I’ve been arguing that austerity has not only real effects but also financial consequences.

When the government runs a deficit, it produces a flow supply of safe assets: government bonds. If the desired saving of the private sector exceeds the level of capital investment, the private sector will absorb these assets without the government’s spending generating inflationary pressure.

This was the situation in the aftermath of the 2008 crisis. Attempted deleveraging led to increased household saving, reduced spending and lower aggregate demand. Had the government not run a deficit of the size it did, the recession would have been more severe and prolonged.

When the coalition came to power in 2010 and austerity was introduced, the flow supply of safe assets began to contract. What happens if those who want to accumulate financial assets — wealthy households for the most part — are not willing to reduce their saving rate? If there is an unchanged flow demand for financial assets at the same time as the government reduces the supply, what is the result?

Broadly speaking, there are two possible outcomes. One is lower demand and output: a recession. The other is that, if growth is to be maintained, some other group must issue a growing volume of financial liabilities to offset the reduction in supply by the government.

In the UK, since 2010, this group has been households — mostly households on lower incomes. As the government cut spending, incomes fell and public services were rolled back. Unsurprisingly, many households fell back on borrowing to make ends meet.

The graph below shows the relationship between the government deficit and the annual increase in gross household debt (both series are four quarter rolling sums deflated to 2015 prices).

[Figure: government deficit and annual increase in gross household debt, four-quarter rolling sums, 2015 prices]

From 2010 onwards, steady reduction in the government deficit was accompanied by a steady increase in the rate of accumulation of household debt. The ratio is surprisingly steady: every £2bn of deficit reduction has been accompanied by an additional £1bn per annum increase in the accumulation of household debt.
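To make the construction of these series concrete, here is a minimal sketch of the calculation, assuming a hypothetical quarterly file and column names rather than the actual ONS series identifiers:

```python
# Illustrative sketch only. The file name and column names below are assumptions,
# not the ONS identifiers actually used for the chart.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("uk_quarterly.csv", parse_dates=["quarter"], index_col="quarter")
# assumed columns: 'gov_deficit' (nominal £bn, net borrowing), 'hh_debt_flow'
# (nominal £bn, net new lending to households), 'deflator' (2015 = 100)

# four-quarter rolling sums, deflated to 2015 prices
real = df[["gov_deficit", "hh_debt_flow"]].div(df["deflator"] / 100, axis=0)
rolling = real.rolling(4).sum().dropna()

# post-2010 relationship: a slope of roughly -0.5 on the deficit would correspond
# to the £2bn-of-consolidation-to-£1bn-of-extra-household-borrowing ratio above
post2010 = rolling.loc["2010":]
fit = sm.OLS(post2010["hh_debt_flow"], sm.add_constant(post2010["gov_deficit"])).fit()
print(fit.params)
```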

Note that this is the rate at which gross household debt is accumulated — not the “net financial balance” of the household sector. The latter is highlighted in discussions of “sectoral balances”, and in particular the accounting requirement that a reduction in the government deficit be accompanied by either an increase in the deficit of the private sector or a reduction in the deficit with the foreign sector.
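For reference, the accounting requirement can be written in standard national-accounts notation, with S private saving, I private investment, G − T the government deficit and X − M the current account balance:

```latex
% Sectoral balances identity, from Y = C + I + G + X - M and Y = C + S + T:
(S - I) \;=\; (G - T) \;+\; (X - M)
```

A fall in (G − T) must be matched, as a matter of accounting, by some combination of a lower private net saving balance (S − I) and a higher current account balance (X − M); how that adjustment happens, through borrowing, saving behaviour or falling income, is the economic question.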

Critics of the sectoral balances argument make the point that the net financial balance of the household sector is not the relevant indicator. Most household borrowing takes place within the household sector, mediated by the financial system: savers hold bank deposits and pension fund claims, while other households borrow from the banks. The gross indebtedness of the household sector can therefore rise or fall without any change in the net position. Critics therefore dismiss the sectoral balances argument as incoherent, reflecting a failure to understand basic national accounting. This view has been articulated by Chris Giles and Andrew Lilico, among others.

For the UK, at least, this criticism appears misplaced. The chart below plots four measures of the household sector financial position along with the government deficit. The indicators for the household sector are the net financial balance, gross household debt as a share of both GDP and household disposable income, and the household saving ratio. The correlation between the series is evident.

[Figure: government deficit plotted alongside four household-sector indicators: net financial balance, gross debt to GDP, gross debt to disposable income, and the saving ratio]

The relationship between the government deficit and the change in gross household debt is surprisingly stable. The figure below plots the series for the full period for which data are available from the ONS: from 1987 until 2017. With the exception of the period 2001-2008, when there is a clear structural break, the relationship is persistent.

[Figure: government deficit and annual change in gross household debt, 1987-2017]

Why should this be the case? One needs to be careful with apparently stable relationships between macroeconomic variables — they have a habit of breaking down. One reason for caution is that the composition of household debt has changed over the period shown: in the pre-2008 period most of the increase was mortgage borrowing, while post-crisis, consumer debt in the form of credit cards, car loans and so on has played an increasing role. Nonetheless, a hypothesis can be advanced:

If one group of households saves a relatively constant share of income — and this represents the majority of total saving in the household sector — then variation in the supply of assets issued by the public sector must be matched either by variation in output and employment or by variation in the issuance of financial liabilities by other sectors. If monetary policy is used to maintain steady inflation and therefore relatively stable output and employment, changes in the cost of borrowing may induce other (non-saver) households to adjust their consumption decisions in a way that stabilises output.

Put another way, if the contribution of government deficit spending to total demand varies and saving among some households is relatively inelastic, avoiding recessions requires another sector (or sub-sector) to go into deficit in order that total demand be maintained.
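One way to state this is to split the private balance in the identity above into an inelastic “saver” component and a residual “borrower” component; the subscripts below are illustrative labels, not official sectoral categories:

```latex
% Splitting private net saving into "saver" and "borrower" household groups (illustrative):
(S_{s} - I_{s}) \;+\; (S_{b} - I_{b}) \;=\; (G - T) \;+\; (X - M)
```

If (S_s − I_s) and the external balance are roughly fixed shares of income, then a lower government deficit must show up either as a lower (more negative) borrower balance, i.e. faster accumulation of gross household debt, or as a fall in income that reduces saving: a recession.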

This hypothesis fits with the observation that the household saving ratio falls as the rate of gross debt accumulation increases. Paradoxically, the problem is not too little household saving but too much, given the volume of investment. If inelastic savers were willing to reduce their saving and increase consumption in response to lower government spending, then recession could be avoided without an increase in household debt. A better solution would be an increase in the business investment of the private sector: it is the difference between saving and investment that matters.

There is a clear structural break in the relationship between the deficit and household debt, starting around 2001. This is likely the result of the global credit boom which gathered pace after Alan Greenspan cut the target federal funds rate from 6.5% in 2000 to 1% in 2003. During this period, the financial position of the corporate sector shifted from deficit to surplus, matched by large rises in the accumulation of household debt. With the outbreak of crisis in 2008, the previous relationship appears to re-emerge.

Careful econometric work is required to disentangle the drivers of rising household debt. But relationships between macroeconomic variables with this degree of stability are unusual. Something interesting is going on here.

EDIT: 22 November

Toby Nangle left a comment suggesting that it would be good to show the data on borrowing by different income levels. It’s a good point, and raises a complex issue about the distribution of lending and borrowing within the household sector. This is something that J. W. Mason and others have been discussing. I need another post to fully explain my thinking on this, but for now, I’ll include the following graph:

[Figure: household saving by income quintile, 2008, 2012 and 2013 (experimental ONS data)]

This is calculated using an experimental new dataset compiled by the ONS, which uses micro data sources to produce disaggregated macro datasets. Data are currently available for only three years — 2008, 2012 and 2013 — but I understand that the ONS are working on a more complete dataset.
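As an illustration of the kind of calculation involved, the sketch below computes a weighted saving ratio by income quintile from household-level data; the file and column names are hypothetical, not the ONS’s actual variable names:

```python
# Illustrative sketch only: saving ratio by income quintile from hypothetical
# household-level survey data. Column names are assumptions.
import pandas as pd

hh = pd.read_csv("household_micro.csv")
# assumed columns: 'disposable_income', 'consumption', 'weight' (survey grossing weight)

hh["quintile"] = pd.qcut(hh["disposable_income"], 5, labels=[1, 2, 3, 4, 5])

def saving_ratio(g):
    income = (g["disposable_income"] * g["weight"]).sum()
    spending = (g["consumption"] * g["weight"]).sum()
    return 100 * (income - spending) / income  # saving as a % of disposable income

print(hh.groupby("quintile").apply(saving_ratio))
```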

What this shows is that in 2008, at the end of the 2000s credit boom, only the top two income quintiles were saving: the bottom 60% of the population was dissaving. In 2012 and 2013, the household saving ratio and financial balance had increased substantially and this shows up in the disaggregated figures as positive saving for all but the bottom quintile.

I suspect that as the saving ratio and net financial balance have subsequently declined, and gross debt has increased, the distributional pattern is reverting to what it looked like in 2008: saving at the top of the income distribution and dissaving in the lower quintiles.


Dilettantes Shouldn’t Get Excited

A new paper on DSGE modelling has caused a bit of a stir. It’s not so much the content of the paper — a thorough but unremarkable survey of the DSGE literature and a response to recent criticism — as the tone that has caught attention. The paper begins:

“People who don’t like dynamic stochastic general equilibrium (DSGE) models are dilettantes. By this we mean they aren’t serious about policy analysis… Dilettantes who only point to the existence of competing forces at work – and informally judge their relative importance via implicit thought experiments – can never give serious policy advice.”

The authors, Lawrence Christiano, Martin Eichenbaum and Mathias Trabandt, make a number of claims, most eye-catchingly: “the only place that we can do experiments is in dynamic stochastic general equilibrium (DSGE) models.” They then list a number of policy questions that are probably best answered using a combination of time series econometrics and careful thinking. After their survey of the literature, the authors conclude — without recourse to evidence — “… DSGE models will remain central to how macroeconomists think about aggregate phenomena and policy. There is simply no credible alternative to policy analysis in a world of competing economic forces.”

The authors seem to have been exercised in particular by recent comments from Joseph Stiglitz, who wrote:

“I believe that most of the core constituents of the DSGE model are flawed—sufficiently badly flawed that they do not provide even a good starting point for constructing a good macroeconomic model. These include (a) the theory of consumption; (b) the theory of expectations—rational expectations and common knowledge; (c) the theory of investment; (d) the use of the representative agent model (and the simple extensions to incorporate heterogeneity that so far have found favor in the literature): distribution matters; (e) the theory of financial markets and money; (f) aggregation—excessive aggregation hides much that is of first order macroeconomic significance; (g) shocks—the sources of perturbation to the economy and (h) the theory of adjustment to shocks—including hypotheses about the speed of and mechanism for adjustment to equilibrium or about out of equilibrium behavior.”

Stiglitz is not the only dilettante in town. He’s not even the only Nobel prize-winning dilettante — Robert Solow has been making these points for decades now. The Nobels are not alone. Brad DeLong takes a similar view, writing that “DSGE macro has … proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis”. (You should also read his response to the new paper, and some of the comments on his blog).

Back in 2010, John Muellbauer wrote that “While DSGE models are useful research tools for developing analytical insights, the highly simplified assumptions needed to obtain tractable general equilibrium solutions often undermine their usefulness. As we have seen, the data violate key assumptions made in these models, and the match to institutional realities, at both micro and macro levels, is often very poor.”

This is how a well-mannered economist politely points out that something is very wrong.

The abstract from Paul Romer’s recent paper on DSGE macro summarises the attitude of Christiano et al.:

“For more than three decades, macroeconomics has gone backwards… Macroeconomic theorists dismiss mere facts … Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. [This] hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.”

What is the “scientific” argument for DSGE? It goes something like this. In the 1970s, macroeconomics mostly consisted of a set of relationships which were assumed to be stable enough to inform policy. The attitude taken to underlying microeconomic behaviour was, broadly, “we don’t have an exact model which tells us how this combination of microeconomic behaviours produces the aggregate relationship, but we think it is both plausible and stable enough to be useful”.

When the relationships that had previously appeared stable broke down at the end of the 1970s — as macroeconomic relationships have a habit of doing — this opened the door for the Freshwater economists to declare all such theorising to be invalid and instead insist that all macro models be built on the basis of Walrasian general equilibrium. Only then, they argued, could we be sure that the macro relationships were truly structural and therefore invariant to changes in government policy.

There was also a convenient side-effect for the Chicago School libertarians: state-of-the-art Walrasian general equilibrium had reached the point where the best that could be managed was to build very simple models in which all markets, including the labour market, cleared continuously — basically a very crude “economics 101” model with an extra dimension called “time”, and a bit of dice-rolling thrown in for good measure. The result — the so-called “Real Business Cycle model” — is something like a game of Dungeons and Dragons with the winner decided in advance and the rules replaced by an undergrad micro textbook. The associated policy recommendations were ideologically agreeable to the Freshwater economists.

Economics was declared a science and the problems of involuntary unemployment, business cycles and financial instability were solved at the stroke of a pen. There were a few awkward details: working out what would happen if there were lots of different individuals in the system was a bit tricky — so it was easier just to assume one big person. This did away with much of the actual microeconomic “foundations” and just replaced one sort of assumed macro relationship with another — but this didn’t seem to bother anyone unduly. There were also some rather inconvenient mathematical results about the properties of aggregate production functions that nobody likes to talk about. But aside from these minor details it was all very scientific. A great discovery had been made: business cycles were driven by the unexplained residual from an internally inconsistent aggregate production function. A new consensus emerged — aside from sniping from Robert Solow and a few heterodox cranks — that this was the only way to do scientific macroeconomics.

If you wanted to get away from the Econ 101 conclusions and argue, for example, that monetary policy could have some short-run effects, you now had no choice other than to start with the new model and add “frictions” or “imperfections” — anything else was dilettantism. The best-known of these epicycle-like modifications is the “Calvo Fairy” — the assumption that not all prices adjust instantly following a policy change. This allowed those less devoted to extreme free-market politics to derive old favourites such as the expectations-augmented Phillips curve in this strange new world.
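For readers unfamiliar with the jargon, the Calvo assumption is that in any period a firm gets to reset its price only with some fixed probability 1 − θ, independent of how long its price has been in place, so the expected duration of a price spell is:

```latex
% Calvo pricing: a firm resets its price each period only with probability (1 - \theta).
% The number of periods a price survives is geometric, so its expected duration is:
\mathbb{E}[\text{duration}] \;=\; \sum_{k=1}^{\infty} k \,(1 - \theta)\, \theta^{\,k-1} \;=\; \frac{1}{1 - \theta}
```

With θ = 0.75 per quarter, for example, prices change on average once a year; aggregating the resulting staggered pricing decisions is what delivers the Phillips-curve-type relationship mentioned above.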

Simon Wren-Lewis describes this hard reset of the discipline as follows: “Freshwater created a revolution and won, and were in a position to declare Year Zero: only things done properly (i.e consistently microfounded) are true macro. That was good for a new generation, who could rediscover past knowledge but because they (re)did it ‘properly’ avoid any acknowledgement of what had come before.” The implication is that all pre-DSGE macro is invalid and, from Year Zero onwards, anyone doing macro without DSGE is not doing it “properly”.

This is where the story gets really odd. If, for instance, the Freshwater people had said “there are some problems with your models not fitting the data, and by the way, we’ve managed to add a time dimension to Walrasian general equilibrium, cool huh?” things might have turned out OK. The Freshwater people could have amused themselves playing optimising Dungeons and Dragons while everyone else tried to work out why the Phillips curve had broken down.

Instead, somehow, the Freshwater economists managed to create Year Zero: everyone now has to play by their rules. For the next 30 years or so, instead of investigating how economies actually functioned, macroeconomists worked out how to get the new model to reproduce the few results that were already well known and had some degree of stability — basically the Phillips Curve. What they didn’t do was produce any new understanding of how economies worked, or develop models with any out of sample predictive power.

On what basis do Christiano et al. then argue that DSGE is the only game in town for making macro policy and, more bizarrely, the only place where we can do “experiments”? One can certainly do experiments with a DSGE model — but you are experimenting on a DSGE model, not the economy. And it’s fairly well established by now that the economy doesn’t behave much like any benchmark DSGE model.

What Christiano et al. are attempting to do is reimpose the Year Zero rules: anyone doing macro without DSGE is not doing it “properly”. But on what basis is DSGE macro “done properly”? What is the empirical evidence?

There are two places to look for empirical validation — the micro data and the macro data. Why look at micro data for validation of a macro model? The answer is that Year Zero imposed the requirement that all macro models be deduced — one logical step after another — from microeconomic assumptions. As Lucas, the leading revolutionary, put it: “If these developments succeed, the term ‘macroeconomic’ will simply disappear from use and the modifier ‘micro’ will become superfluous. We will simply speak, as did Smith, Ricardo, Marshall and Walras, of economic theory.”

Is the microeconomic theory correct? The answer is “we don’t know”. It is a set of assumptions about how individuals and firms behave which is all but impossible to either validate or falsify.

The use of the deductive method in economics originated with Ricardo’s Principles of Political Economy in 1817 and is summarised by Nassau Senior in 1836:

“The economist’s premises consist of a very few general propositions, the result of observation, or consciousness, and scarcely requiring proof … which every man, as soon as he hears them, admits as familiar to his thoughts … [H]is inferences are nearly as general, and, if he has reasoned correctly, as certain, as his premises”

Nearly two hundred years later, Simon Wren-Lewis’ description of the method of DSGE macro is remarkably similar:

“Microeconomics is built up in a deductive manner from a small number of basic axioms of human behaviour. How these axioms are validated is controversial, as are the implications when they are rejected. Many economists act as if they are self evident.”

What of the macroeconomic results — perhaps we shouldn’t worry whether the microfoundations are correct if the macro models fit the data?

The Freshwater version of the model concluded that government policy has no effect and that fluctuations are driven by an unexplained residual. The more moderate Saltwater version, with added Calvo fairy, allowed a rediscovery of Milton Friedman’s main results: an expectations-augmented Phillips Curve and short-run demand effects from monetary policy. The model has two basic equations, aggregate demand (the IS relationship) and aggregate supply (the Phillips curve), along with a policy response rule.
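In its simplest textbook form (a stylised sketch, not the specific model estimated in any particular paper) the system consists of a dynamic IS curve, a New Keynesian Phillips curve and a policy rule:

```latex
% Stylised three-equation New Keynesian model (textbook form):
\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - \mathbb{E}_t \pi_{t+1} - r^{n}_t\bigr) && \text{(IS curve: aggregate demand)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa x_t && \text{(Phillips curve: aggregate supply)} \\
i_t   &= r^{n}_t + \phi_{\pi}\pi_t + \phi_{x} x_t && \text{(policy response rule)}
\end{aligned}
```

Here x_t is the output gap, π_t inflation, i_t the policy rate and r^n_t the natural rate of interest; the discussion below concerns the first two equations.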

The first, the aggregate demand relationship, is based on an underlying assumption about how households behave in response to changes in the rate of interest. Unfortunately, not only does the equation not fit the data, the sign of the main coefficient appears to be wrong. This is likely because, rather than trying to understand the emergent properties of many interacting agents, modellers took the short-cut of assuming that the one big person representing the economy would simply replicate the behaviour of a single textbook-rational individual — much like assuming that the behaviour of an ant colony would be the same as that of one big textbook ant. It is hard to argue that this has advanced knowledge beyond what could be gleaned from a straightforward Keynesian or Modigliani consumption function. What if, instead, we’d spent 30 years looking at the data and trying to work out how people actually make consumption and investment decisions?

What of the other relationship, the Phillips Curve? The Financial Times has recently published a series of articles on the growing, and awkward, realisation that the Phillips Curve relationship appears to have once again broken down. This was the theme of a recent all-star conference at the Peterson Institute. Gavyn Davies summarises the problem: “Without the Phillips Curve, the whole complicated paraphernalia that underpins central bank policy suddenly looks very shaky. For this reason, the Phillips Curve will not be abandoned lightly by policy makers.”

The “complicated paraphernalia” Davies refers to are the two basic equations just described. More complex versions of the model do exist, which purport to capture further stylised macro relationships beyond the standard pair. This is done, however, by adding extra degrees of freedom — justified as essentially arbitrary “frictions” — and then overfitting the model to the data. The result is that the models are pretty good at “predicting” the data they are trained on, and hopeless at anything else.

30 years of DSGE research have produced exactly one empirically plausible result — the expectations-augmented Phillips Curve. It was already well known. There is an ironic twist here: the breakdown of the Phillips Curve in the 1970s gave the Freshwater economists their breakthrough. The breakdown of the Phillips Curve now — in the other direction — leaves DSGE with precisely zero verifiable achievements.

Christiano et al.’s paper is welcome in one respect. It confirms what macroeconomists at the top of the discipline think about those lower down the academic pecking order — particularly those who take a critical view. They have made public what many of us long suspected was said behind closed doors.

The best response I can think of once again comes from Simon Wren-Lewis, who seems to have seen Christiano et al. coming:

“That some macroeconomists (I call them microfoundations purists) can argue that you should model and give policy advice based not on what you see but on what you can microfound represents something that I cannot imagine any philosopher of science taking seriously (after they had stopped laughing).”