The future of money – UWE student takes for the Bristol Festival of Economics (alongside mine)

Last month, I participated in an excellent panel on the Future of Money at the Bristol Festival of Economics. In preparation for the event, UWE undergraduate students taking my course on Economic Theory and Policy worked together to produce two-sided briefs on what they thought to be the most interesting questions for the future of money, which were distributed in advance of the panel. These briefs provided a great background to our conversation, exploring questions of digital money, endogenous money (and its heretics) and shadow money.

Given that we are economists with a certain respect for the power of (fair) competition, we held a contest for the best brief. The quality was excellent, so I chose three of the seven to be distributed (see Money Brief 1, Money Brief 2 and Money Brief 3). Given the size of the audience, we could easily have distributed the rest as well (see Money Brief 4, Money Brief 5, Money at a Glance 6 and Money Brief 7).



My opening remarks focused on shadow money. Read them below.

Modern controversies about money typically focus on two topics – the power of banks to create money and the threats to this power posed by crypto-currencies. We suspect banks of wielding too much political power, having convinced states to enter a social contract that makes bank deposits the ultimate money of the financial system. The Bank of England recently lent weight to this suspicion in a widely discussed paper confirming what heterodox economists – Steve Keen being a famous example – have been saying for a long time.

There is something of a paradox here. The regulations that central banks have introduced since the crisis have not sought to limit banks’ power to create money. Rather, the new rules introduced by the Basel Committee, and by the newly created Financial Stability Board, push banks to issue more traditional bank deposits and less of a new type of money that I will call shadow money.

What is this shadow money? It is money created by banks and other financial institutions through the mysterious universe of shadow banking. If we accept the argument that a society’s money reflects the way in which the credit system is organised, then I think the future is shadow money.

Shadow money is, like all credit money, an IOU. Bank money is an IOU through which the bank promises to pay you a pound of cash for each pound in your bank deposit. You trust that the bank will convert the deposit into cash at par if you wish. The difference, however, is that the IOU in shadow money relies not on trust but on collateral. When a bank issues shadow money, it issues an IOU backed by tradable securities such as government bonds, corporate bonds, or other securities issued in shadow banking, like the famous CDOs.

Let me give you an example. You and I keep some of our wealth in a bank deposit because we trust the bank, or the deposit guarantee behind it, and because it is convenient for our daily payment routines. This is not the case for a pension fund, an insurance company or other institutional investors and their asset managers. For them, traditional bank money is not an attractive option: the deposit guarantee is too small for what they consider ‘pocket money’. So the bank says: ‘look, I will issue you an IOU that gives you the same kind of safety a bank deposit gives a small depositor. To create that safety, I will give you government bond collateral. I still get the interest payments on that bond, but I will allow you to become its legal owner so you can sell it if I go bankrupt.’ Notice how this clever legal arrangement behind shadow money is also advantageous for the bank – it can now fund that government bond with an IOU held by the pension fund.
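The mechanics of this example can be sketched with stylised balance-sheet entries. All names, figures and the 2% haircut are hypothetical, chosen only to illustrate the structure of a collateralised IOU:

```python
# Stylised balance sheets for the repo-like transaction in the example above.
# All numbers are hypothetical illustrations.

bond_value = 100.0   # market value of the government bond used as collateral
haircut = 0.02       # assumed discount demanded by the cash lender
cash_lent = bond_value * (1 - haircut)

# The bank funds its bond holding by issuing a collateralised IOU (shadow money).
bank = {
    "assets": {"government bond": bond_value},
    "liabilities": {"collateralised IOU to pension fund": cash_lent},
}

# The pension fund swaps bank deposits for a safe, collateralised claim;
# it becomes the legal owner of the bond while the bank keeps the coupons.
pension_fund = {
    "assets": {"collateralised IOU (backed by bond)": cash_lent},
    "liabilities": {},
}

# The pension fund's claim is over-collateralised by the haircut:
collateral_buffer = bond_value - cash_lent
print(f"cash lent: {cash_lent:.1f}, collateral buffer: {collateral_buffer:.1f}")
```

The buffer is what substitutes for trust: if the bank defaults, the pension fund can sell the bond and recover its cash.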

The issuer of that bond – the government in our case – also benefits. If banks and shadow banks hold an IOU that allows them to borrow from institutional investors, this creates more demand and more liquidity for government bonds. Liquidity is the magic word for governments wanting low and stable funding costs to run fiscal policies (at least until we get an MMT-inspired government). The seductive appeal of liquidity applies to securities markets and their issuers more broadly – what we have here is a clever system for organising credit creation via capital markets. And it’s a big system: the crypto-currency universe is worth roughly USD 200bn, while shadow dollars, shadow euros and shadow yuan together amount to USD 20 trillion. That is, 100 times more (remember I wrote this before the Bitcoin frenzy).

This shadow money sounds really safe, you may be thinking. Why would regulators seek to limit its creation? The politics of shadow money is both exceedingly intricate and fundamental to modern financial markets. Shadow money comes with two words that keep regulators awake at night: leverage and interconnectedness. Going back to my example, the bank is often an intermediary between a pension fund that wants a safe IOU and a hedge fund that wants to borrow to buy more securities. The hedge fund issues shadow dollars to the bank, and the bank issues shadow dollars to the pension fund. In this way, the collateral has changed hands twice: it belongs to the hedge fund, but sits with the pension fund in case of default. They are all interconnected, and dependent on the hedge fund’s leverage decisions. If something goes wrong with the hedge fund, everyone else stands to suffer. We get runs on shadow money.

Indeed, if you look closely at how the global financial crisis unfolded, it started as a run on shadow dollars triggered by the collapse of Lehman Brothers – the familiar Gorton and Metrick story that proved influential in shaping how regulators think about regulating global (shadow) banking. The run then travelled to shadow euros, where it evolved quietly but powerfully to engulf what we now call ‘periphery countries’ under the impotent eyes of the ECB, forced by its mandate to use the wrong cure (looking at you, LTROs) and make the crisis worse. This crisis is not a simple story of naive investors, fiscally irresponsible governments and European politics unable to credibly enforce rules stopping those governments. It was a crisis of shadow euros, despite ECB protestations. It may soon resurface in China, which is liberalising the production of shadow money in a bid to attract foreign investors and further RMB internationalisation (paper coming soon).

The future of shadow money is uncertain. One thing we know is that it takes a lot of room for manoeuvre for central banks to expand their crisis frameworks in order to stabilise shadow money. It is not a coincidence that the only central bank to have done so formally – the Bank of England – is led by Mark Carney, who is also head of the Financial Stability Board. The Bank of England has now formally assumed the role of market-maker of last resort for systemic collateral markets (very different from lender of last resort), the only way to stabilise shadow money short of prohibiting it altogether (something the European Commission nearly – and accidentally – proposed when it planned to slap an FTT on shadow euros). The FSB and Basel III rules constrain shadow money – and so the Trump administration is quietly making plans to free securities markets from the shackles of international regulation. To reduce the Minsky-type vulnerabilities, significantly magnified in this new world, we need a social contract around shadow money. It won’t be a panacea, but it will make life a bit easier. This is not a mere question of better plumbing – it goes to the heart of ongoing discussions about the welfare state, inequality and our capacity to collectively provision for an uncertain future through the state, rather than through markets.




Austerity and household debt: a macro link?

For some time now I’ve been arguing that austerity has not only real effects but also financial implications.

When the government runs a deficit, it produces a flow supply of safe assets: government bonds. If the desired saving of the private sector exceeds the level of capital investment, the private sector will absorb these assets without government spending inducing inflationary tendencies.
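The accounting behind this claim is the standard sectoral balances identity (written here in textbook national-accounts notation, which the post itself does not spell out):

```latex
% Private net saving equals the government deficit plus the current account surplus
(S - I) \;=\; (G - T) \;+\; (X - M)
```

If desired private net saving $(S - I)$ is positive and the external balance $(X - M)$ is roughly fixed, a government deficit $(G - T) > 0$ supplies exactly the safe assets needed to absorb that saving.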

This was the situation in the aftermath of the 2008 crisis. Attempted deleveraging led to increased household saving, reduced spending and lower aggregate demand. Had the government not run a deficit of the size it did, the recession would have been more severe and prolonged.

When the coalition came to power in 2010 and austerity was introduced, the flow supply of safe assets began to contract. What happens if those who want to accumulate financial assets — wealthy households for the most part — are not willing to reduce their saving rate? If there is an unchanged flow demand for financial assets at the same time as the government reduces the supply, what is the result?

Broadly speaking, there are two possible outcomes. One is lower demand and output: a recession. If growth is to be maintained, the only alternative is for some other group to issue a growing volume of financial liabilities, offsetting the reduction in supply by the government.

In the UK, since 2010, this group has been households — mostly households on lower incomes. As the government cut spending, incomes fell and public services were rolled back. Unsurprisingly, many households fell back on borrowing to make ends meet.

The graph below shows the relationship between the government deficit and the annual increase in gross household debt (both series are four quarter rolling sums deflated to 2015 prices).


From 2010 onwards, steady reduction in the government deficit was accompanied by a steady increase in the rate of accumulation of household debt. The ratio is surprisingly steady: every £2bn of deficit reduction has been accompanied by an additional £1bn per annum increase in the accumulation of household debt.
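The stated ratio can be written as a rough rule of thumb. This is my formalisation of the figure quoted in the text, not an estimated equation:

```latex
% Change in the annual rate of household debt accumulation
% against the change in the government deficit (both in 2015 prices)
\Delta \dot{D}_{h} \;\approx\; -\tfrac{1}{2}\,\Delta(G - T)
```

Here $\dot{D}_{h}$ is the annual increase in gross household debt and $(G - T)$ the government deficit, so a £2bn fall in the deficit corresponds to a £1bn per annum rise in debt accumulation.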

Note that this is the rate at which gross household debt is accumulated — not the “net financial balance” of the household sector. The latter is highlighted in discussions of “sectoral balances”, and in particular the accounting requirement that a reduction in the government deficit be accompanied by either an increase in the deficit of the private sector or a reduction in the deficit with the foreign sector.

Critics of the sectoral balances argument make the point that the net financial balance of the household sector is not the relevant indicator. Most household borrowing takes place within the household sector, mediated by the financial system. Savers hold bank deposits and pension fund claims, while other households borrow from the banks. The gross indebtedness of the household sector can therefore either increase or decrease without any change in the net position. Critics therefore see the sectoral balances argument as incoherent, displaying a failure to understand basic national accounting. This view has been articulated by Chris Giles and Andrew Lilico, among others.
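The critics’ accounting point can be illustrated with a toy example (all numbers hypothetical): intermediated lending within the household sector raises gross household debt while leaving the sector’s net financial balance unchanged.

```python
# Toy household sector: saver households hold deposits,
# borrower households owe bank debt. Numbers are hypothetical.

savers_deposits = 100.0   # financial assets held by saver households
borrowers_debt = 40.0     # debt owed by borrower households

def net_balance(assets: float, debt: float) -> float:
    """Net financial position of the household sector as a whole."""
    return assets - debt

before = net_balance(savers_deposits, borrowers_debt)

# Banks intermediate a further 20 from savers to borrowers:
# savers gain 20 of deposits, borrowers owe 20 more.
savers_deposits += 20.0
borrowers_debt += 20.0

after = net_balance(savers_deposits, borrowers_debt)

print(f"gross debt rose by 20, net balance changed by {after - before:.0f}")
```

Both gross debt and gross assets rise by the same amount, so the sector’s net position is untouched: exactly why the critics say the net balance cannot constrain gross borrowing.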

For the UK, at least, this criticism appears misplaced. The chart below plots four measures of the household sector financial position along with the government deficit. The indicators for the household sector are the net financial balance, gross household debt as a share of both GDP and household disposable income, and the household saving ratio. The correlation between the series is evident.


The relationship between the government deficit and the change in gross household debt is surprisingly stable. The figure below plots the series for the full period for which data are available from the ONS: from 1987 until 2017. With the exception of the period 2001-2008, where there is a clear structural break, the relationship is persistent.


Why should this be the case? One needs to be careful with apparently stable relationships between macroeconomic variables — they have a habit of breaking down. One reason for caution is that the composition of household debt has changed over the period shown: in the pre-2008 period most of the increase was mortgage borrowing, while post-crisis, consumer debt in the form of credit cards, car loans and so on has played an increasing role. Nonetheless, a hypothesis can be advanced:

If one group of households saves a relatively constant share of income — and this represents the majority of total saving in the household sector — then variance in the supply of assets issued by the public sector must be matched either by variations in output and employment or by variance in the issuance of financial liabilities by other sectors. If monetary policy is used to maintain steady inflation and therefore relatively stable output and employment, changes in the cost of borrowing may induce other (non-saver) households to adjust their consumption decisions in a way that stabilises output.

Put another way, if the contribution of government deficit spending to total demand varies and saving among some households is relatively inelastic, avoiding recessions requires another sector (or sub-sector) to go into deficit in order that total demand be maintained.

This hypothesis fits with the observation that the household saving ratio falls as the rate of gross debt accumulation increases. Paradoxically, the problem is not too little household saving but too much, given the volume of investment. If inelastic savers were willing to reduce their saving and increase consumption in response to lower government spending, then recession could be avoided without an increase in household debt. A better solution would be an increase in the business investment of the private sector: it is the difference between saving and investment that matters.

There is a clear structural break in the relationship between the deficit and household debt, starting around 2001. This is likely the result of the global credit boom which gathered pace after Alan Greenspan cut the target federal funds rate from 6.5% in 2000 to 1% by 2003. During this period, the financial position of the corporate sector shifted from deficit to surplus, matched by large rises in the accumulation of household debt. With the outbreak of crisis in 2008, the previous relationship appears to re-emerge.

Careful econometric work is required to try to disentangle the drivers of rising household debt. But relationships between macroeconomic variables with this degree of stability are unusual. Something interesting is going on here.

EDIT: 22 November

Toby Nangle left a comment suggesting that it would be good to show the data on borrowing by different income levels. It’s a good point, and raises a complex issue about the distribution of lending and borrowing within the household sector. This is something that J. W. Mason and others have been discussing. I need another post to fully explain my thinking on this, but for now, I’ll include the following graph:


This is calculated using an experimental new dataset compiled by the ONS which uses micro data sources to try to produce disaggregated macro datasets. Data are currently available for only three years — 2008, 2012 and 2013 — but I understand that the ONS is working on a more complete dataset.

What this shows is that in 2008, at the end of the 2000s credit boom, only the top two income quintiles were saving: the bottom 60% of the population was dissaving. In 2012 and 2013, the household saving ratio and financial balance had increased substantially and this shows up in the disaggregated figures as positive saving for all but the bottom quintile.

I suspect that as the saving ratio and net financial balance have subsequently declined, and gross debt has increased, the distributional pattern is reverting to what it looked like in 2008: saving at the top of the income distribution and dissaving in the lower quintiles.

Dilettantes Shouldn’t Get Excited

A new paper on DSGE modelling has caused a bit of a stir. It’s not so much the content of the paper — a thorough but unremarkable survey of the DSGE literature and a response to recent criticism — as the tone that has caught attention. The paper begins:

“People who don’t like dynamic stochastic general equilibrium (DSGE) models are dilettantes. By this we mean they aren’t serious about policy analysis… Dilettantes who only point to the existence of competing forces at work – and informally judge their relative importance via implicit thought experiments – can never give serious policy advice.”

The authors, Lawrence Christiano, Martin Eichenbaum and Mathias Trabandt, make a number of claims, most eye-catchingly: “the only place that we can do experiments is in dynamic stochastic general equilibrium (DSGE) models.” They then list a number of policy questions that are probably best answered using a combination of time series econometrics and careful thinking. After their survey of the literature, the authors conclude — without recourse to evidence — “… DSGE models will remain central to how macroeconomists think about aggregate phenomena and policy. There is simply no credible alternative to policy analysis in a world of competing economic forces.”

The authors seem to have been exercised in particular by recent comments from Joseph Stiglitz, who wrote:

“I believe that most of the core constituents of the DSGE model are flawed—sufficiently badly flawed that they do not provide even a good starting point for constructing a good macroeconomic model. These include (a) the theory of consumption; (b) the theory of expectations—rational expectations and common knowledge; (c) the theory of investment; (d) the use of the representative agent model (and the simple extensions to incorporate heterogeneity that so far have found favor in the literature): distribution matters; (e) the theory of financial markets and money; (f) aggregation—excessive aggregation hides much that is of first order macroeconomic significance; (g) shocks—the sources of perturbation to the economy and (h) the theory of adjustment to shocks—including hypotheses about the speed of and mechanism for adjustment to equilibrium or about out of equilibrium behavior.”

Stiglitz is not the only dilettante in town. He’s not even the only Nobel prize-winning dilettante — Robert Solow has been making these points for decades now. The Nobels are not alone. Brad DeLong takes a similar view, writing that “DSGE macro has … proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis”. (You should also read his response to the new paper, and some of the comments on his blog).

Back in 2010, John Muellbauer wrote that “While DSGE models are useful research tools for developing analytical insights, the highly simplified assumptions needed to obtain tractable general equilibrium solutions often undermine their usefulness. As we have seen, the data violate key assumptions made in these models, and the match to institutional realities, at both micro and macro levels, is often very poor.”

This is how a well-mannered economist politely points out that something is very wrong.

The abstract of Paul Romer’s recent paper on DSGE macro summarises the attitude of Christiano et al.:

“For more than three decades, macroeconomics has gone backwards… Macroeconomic theorists dismiss mere facts … Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. [This] hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.”

What is the “scientific” argument for DSGE? It goes something like this. In the 1970s, macroeconomics mostly consisted of a set of relationships which were assumed to be stable enough to inform policy. The attitude taken to underlying microeconomic behaviour was, broadly, “we don’t have an exact model which tells us how this combination of microeconomic behaviours produces the aggregate relationship, but we think this is both plausible and stable enough to be useful”.

When the relationships that had previously appeared stable broke down at the end of the 1970s — as macroeconomic relationships have a habit of doing — this opened the door for the Freshwater economists to declare all such theorising to be invalid and instead insist that all macro models be built on the basis of Walrasian general equilibrium. Only then, they argued, could we be sure that the macro relationships were truly structural and therefore invariant to changes in government policy.

There was also a convenient side-effect for the Chicago School libertarians: state-of-the-art Walrasian general equilibrium had reached the point where the best that could be managed was to build very simple models in which all markets, including the labour market, cleared continuously — basically a very crude “economics 101” model with an extra dimension called “time”, and a bit of dice-rolling thrown in for good measure. The result — the so-called “Real Business Cycle model” — is something like a game of Dungeons and Dragons with the winner decided in advance and the rules replaced by an undergrad micro textbook. The associated policy recommendations were ideologically agreeable to the Freshwater economists.

Economics was declared a science and the problems of involuntary unemployment, business cycles and financial instability were solved at the stroke of a pen. There were a few awkward details: working out what would happen if there were lots of different individuals in the system was a bit tricky — so it was easier just to assume one big person. This did away with much of the actual microeconomic “foundations” and just replaced one sort of assumed macro relationship with another — but this didn’t seem to bother anyone unduly. There were also some rather inconvenient mathematical results about the properties of aggregate production functions that nobody likes to talk about. But aside from these minor details it was all very scientific. A great discovery had been made: business cycles were driven by the unexplained residual from an internally inconsistent aggregate production function. A new consensus emerged — aside from sniping from Robert Solow and a few heterodox cranks — that this was the only way to do scientific macroeconomics.

If you wanted to get away from the Econ 101 conclusions and argue, for example, that monetary policy could have some short-run effects, you now had no choice other than to start with the new model and add “frictions” or “imperfections” — anything else was dilettantism. The best-known of these epicycle-like modifications is the “Calvo Fairy” — the assumption that not all prices adjust instantly following a policy change. This allowed those less devoted to extreme free-market politics to derive old favourites such as the expectations-augmented Phillips curve in this strange new world.

Simon Wren-Lewis describes this hard reset of the discipline as follows: “Freshwater created a revolution and won, and were in a position to declare Year Zero: only things done properly (i.e consistently microfounded) are true macro. That was good for a new generation, who could rediscover past knowledge but because they (re)did it ‘properly’ avoid any acknowledgement of what had come before.” The implication is that all pre-DSGE macro is invalid and, from Year Zero onwards, anyone doing macro without DSGE is not doing it “properly”.

This is where the story gets really odd. If, for instance, the Freshwater people had said “there are some problems with your models not fitting the data, and by the way, we’ve managed to add a time dimension to Walrasian general equilibrium, cool huh?” things might have turned out OK. The Freshwater people could have amused themselves playing optimising Dungeons and Dragons while everyone else tried to work out why the Phillips curve had broken down.

Instead, somehow, the Freshwater economists managed to create Year Zero: everyone now has to play by their rules. For the next 30 years or so, instead of investigating how economies actually functioned, macroeconomists worked out how to get the new model to reproduce the few results that were already well known and had some degree of stability — basically the Phillips Curve. What they didn’t do was produce any new understanding of how economies worked, or develop models with any out of sample predictive power.

On what basis do Christiano et al. then argue that DSGE is the only game in town for making macro policy and, more bizarrely, the only place where we can do “experiments”? One can certainly do experiments with a DSGE model — but you are experimenting on a DSGE model, not the economy. And it’s fairly well established by now that the economy doesn’t behave much like any benchmark DSGE model.

What Christiano et al. are attempting to do is reimpose the Year Zero rules: anyone doing macro without DSGE is not doing it “properly”. But on what basis is DSGE macro “done properly”? What is the empirical evidence?

There are two places to look for empirical validation — the micro data and the macro data. Why look at micro data for validation of a macro model? The answer is that Year Zero imposed the requirement that all macro models be deduced — one logical step after another — from microeconomic assumptions. As Lucas, the leading revolutionary, put it: “If these developments succeed, the term ‘macroeconomic’ will simply disappear from use and the modifier ‘micro’ will become superfluous. We will simply speak, as did Smith, Ricardo, Marshall and Walras, of economic theory”.

Is the microeconomic theory correct? The answer is “we don’t know”. It is a set of assumptions about how individuals and firms behave which is all but impossible to either validate or falsify.

The use of the deductive method in economics originated with Ricardo’s Principles of Political Economy in 1817 and is summarised by Nassau Senior in 1836:

“The economist’s premises consist of a very few general propositions, the result of observation, or consciousness, and scarcely requiring proof … which every man, as soon as he hears them, admits as familiar to his thoughts … [H]is inferences are nearly as general, and, if he has reasoned correctly, as certain, as his premises”

Nearly two hundred years later, Simon Wren-Lewis’ description of the method of DSGE macro is remarkably similar:

“Microeconomics is built up in a deductive manner from a small number of basic axioms of human behaviour. How these axioms are validated is controversial, as are the implications when they are rejected. Many economists act as if they are self evident.”

What of the macroeconomic results — perhaps we shouldn’t worry whether the microfoundations are correct if the macro models fit the data?

The Freshwater version of the model concluded that all government policy has no effect and that any changes are driven by an unexplained residual. The more moderate Saltwater version, with added Calvo fairy, allowed a rediscovery of Milton Friedman’s main results: an expectations-augmented Phillips Curve and short-run demand effects from monetary policy. The model has two basic equations: aggregate demand (the IS relationship) and aggregate supply (the Phillips curve) along with a policy response rule.
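For reference, the two equations plus policy rule referred to here are usually written as the textbook log-linearised three-equation New Keynesian system (standard notation, not taken from the post itself):

```latex
% IS curve (from the household Euler equation):
x_t = \mathbb{E}_t x_{t+1} - \sigma^{-1}\left(i_t - \mathbb{E}_t \pi_{t+1} - r^{n}_t\right)

% New Keynesian Phillips curve (Calvo pricing):
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t

% Monetary policy rule (Taylor-type):
i_t = r^{n}_t + \phi_\pi \pi_t + \phi_x x_t
```

Here $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the policy rate and $r^{n}_t$ the natural rate; $\sigma^{-1}$ is the interest elasticity of demand whose sign is disputed in the discussion that follows.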

The first, the aggregate demand relationship, is based on an underlying assumption about how households behave in response to changes in the rate of interest. Unfortunately, not only does the equation not fit the data, the sign of the main coefficient appears to be wrong. This is likely because, rather than trying to understand the emergent properties of many interacting agents, modellers took the short-cut of assuming that the one big person representing the economy would simply replicate the behaviour of a single textbook-rational individual — much like assuming that the behaviour of an ant colony would be the same as that of one big textbook ant. It’s hard to argue that this has advanced knowledge beyond what could be gleaned from a straightforward Keynesian or Modigliani consumption function. What if, instead, we’d spent 30 years looking at the data and trying to work out how people actually make consumption and investment decisions?

What of the other relationship, the Phillips Curve? The Financial Times has recently published a series of articles on the growing, and awkward, realisation that the Phillips Curve relationship appears to have once again broken down. This was the theme of a recent all-star conference at the Peterson Institute. Gavyn Davies summarises the problem: “Without the Phillips Curve, the whole complicated paraphernalia that underpins central bank policy suddenly looks very shaky. For this reason, the Phillips Curve will not be abandoned lightly by policy makers.”

The “complicated paraphernalia” Davies refers to are the two basic equations just described. More complex versions of the model do exist, which purport to capture further stylised macro relationships beyond the standard pair. This is done, however, by adding extra degrees of freedom — justified as essentially arbitrary “frictions” — and then overfitting the model to the data. The result is that the models are pretty good at “predicting” the data they are trained on, and hopeless at anything else.

30 years of DSGE research have produced exactly one empirically plausible result — the expectations-augmented Phillips Curve. It was already well known. There is an ironic twist here: the breakdown of the Phillips Curve in the 1970s gave the Freshwater economists their breakthrough. The breakdown of the Phillips Curve now — in the other direction — leaves DSGE with precisely zero verifiable achievements.

Christiano et al.’s paper is welcome in one respect. It confirms what macroeconomists at the top of the discipline think about those lower down the academic pecking order — particularly those who take a critical view. They have made public what many of us long suspected was said behind closed doors.

The best response I can think of once again comes from Simon Wren-Lewis, who seems to have seen Christiano et al. coming:

“That some macroeconomists (I call them microfoundations purists) can argue that you should model and give policy advice based not on what you see but on what you can microfound represents something that I cannot imagine any philosopher of science taking seriously (after they had stopped laughing).”


China’s shadow banking: New growth model or the next Lehman Brothers?

A debate between Christopher Balding and Daniela Gabor, moderated by Jo Michell


Thursday November 2nd 2017, 4–5.30pm
Faculty of Business and Law building, Room 2X242
UWE Bristol, Frenchay Campus

Since the global financial crisis, shadow banking in China has grown rapidly as a result of financial repression, macro policy, and the politics of local-central government relationships. Is this the financial Wild West, the escape valve of a financial system repressed by the long hand of the state, or a carefully engineered process to bring market forces into the financial system? How successful are China’s policies to transform shadow banking into securities-market based finance? Have they really addressed concerns about implicit state guarantees? And how do reforms fit with the need for deep and liquid securities markets if Renminbi internationalisation is to succeed?

Christopher Balding is an Associate Professor in Business and Economics at the HSBC Business School of Peking University Graduate School in Shenzhen, China. One of the leading experts on the Chinese economy and financial markets, he is a Bloomberg View contributor and advises governments, central banks, and investors around the world. He has contributed to Bloomberg, the Wall Street Journal, the Financial Times, BBC, CNBC, and Al-Jazeera. He tweets at @BaldingsWorld

Daniela Gabor is Professor of Economics and Macrofinance at UWE Bristol. Her research project ‘Managing shadow money’, funded by the Institute for New Economic Thinking since 2015, explores shadow banking in the US, Europe and China. One of the project papers, ‘Goodbye (Chinese) shadow banking, hello market-based finance’, will be published in Development and Change in December 2017. She is finalising a book manuscript on Shadow Money. She blogs at and tweets at @DanielaGabor

Jo Michell is Associate Professor in Economics at UWE Bristol. He has a PhD in Economics from SOAS University of London, with a thesis on the Chinese banking and financial system. His research interests include macroeconomics, money and banking, and income distribution. He has published on macroeconomics and finance in peer-reviewed journals including the Cambridge Journal of Economics and Metroeconomica. He co-edited the Handbook of Critical Issues in Finance with Jan Toporowski (Elgar, 2012).

For further inquiries, please email

Strong and stable? The Conservatives’ economic record since 2010

In a recent interview, Theresa May was asked by Andrew Neil how the Conservatives would fund their manifesto commitments on NHS spending. Given that the Conservatives chose not to cost their manifesto pledges, May was unable to answer. Instead she simply repeated that the Conservatives are the only party that can deliver the economic growth and stability required to pay for essential public services. When pressed, May’s response was simple: ‘our economic credibility is not in doubt’.

Does the record of the last seven years support May’s claim?

The first statistic always quoted in such discussions is GDP growth. A lot has been made of the latest quarterly GDP figures, showing the UK at the bottom of the G7 league with quarterly GDP growth of just 0.2%. But these numbers actually tell us very little: they refer to a single quarter and are still subject to revision.

It is more useful to look at real GDP per capita over a longer period of time. This tells us the additional ‘real’ income available per person that has been generated. The performance of the G7 countries since the pre-crisis peak in 2007 is shown in the chart below, with the series indexed to 1 in 2007 for each country. (Data are taken from the most recent IMF WEO database.)

G7 GDP per capita, 2007-2016

GDP per capita in the UK only surpassed its pre-crisis level in 2015. By 2016, GDP per capita relative to the pre-crisis level was less than 2% higher than in 2007, putting the UK behind Japan, Germany, the US and Canada, slightly ahead of France, and well ahead of the Italian economy which remains mired in a deep depression. On this measure, the UK’s performance is not particularly impressive.
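The indexing used in the chart can be sketched in a few lines. This is a minimal illustration of the method only; the numbers below are made up and are not the actual IMF WEO figures.

```python
# Index each country's real GDP per capita to 1 in the 2007 base year,
# so that cross-country growth paths are directly comparable.

def index_to_base(series, years, base_year=2007):
    """Rescale a series so that the base-year value equals 1."""
    base = series[years.index(base_year)]
    return [value / base for value in series]

# Illustrative values only, not the actual IMF WEO data.
years = [2007, 2008, 2009, 2010]
gdp_per_capita = [40000, 39500, 37900, 38400]

indexed = index_to_base(gdp_per_capita, years)
print(indexed[0])  # base year is 1.0 by construction
```

Each country's series is divided through by its own 2007 value, so the chart compares proportional growth rather than income levels.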

For most people, wages are a more important gauge of economic performance than GDP per capita. Here, the UK is an outlier. Relative real wage growth in the G7 economies is shown in the table below, alongside the changes in GDP per capita for the period 2007-2015.


                   % change in GDP       % change in average
Country            per capita, 2007-2015 real wage, 2007-2015
Canada                  3.2                   0.8
France                 -0.2                   0.6
Germany                 6.3                   0.9
Italy                 -11.7                  -0.7
Japan                   3.0                  -0.2
United Kingdom          0.7                  -1.0
United States           3.7                   0.5

Despite coming mid-table in terms of GDP per capita, the UK has the worst performance in terms of real wages, which have fallen by an average of 1% per year over the period. Even in depression-struck Italy, wages did not fall so far.

This translates into a fall of almost five per cent in the real wage of the typical (median) worker since the crisis, as the chart below shows. This LSE paper, from which the chart is taken, finds that while almost everyone is worse off since the crisis, the youngest have seen the largest falls in income, with 18-21-year-olds facing a fall in real wages of over 15%.


With the value of the pound falling since the Brexit vote, inflation is once again eating into real wages and the latest figures show that, after a period of a couple of years in which wages had been recovering, real wages are now falling again and are likely to do so for the next few years. Average earnings are not projected to reach 2007 levels again until 2022 – by then the UK will have gone fifteen years without a pay rise.

A related issue is the UK’s desperately poor productivity performance. ‘Productivity’ here refers to the amount produced per worker on average. As the chart below from the Resolution Foundation shows, the UK has now experienced a decade without any increase in productivity — something which is historically unprecedented.


What causes productivity growth is a controversial topic among economists. Until recently, the majority view was that productivity is not affected by government macroeconomic policy. This position (which I disagree with) is increasingly hard to defend. As Simon Wren-Lewis argues here, evidence is mounting that the UK’s productivity disaster is the result of government policy: the Conservatives’ austerity policies have caused flatlining productivity.

Austerity — or, as it was branded at the time, the ‘Long Term Economic Plan‘ — was the central plank of Osborne’s policy from 2010 until the Brexit referendum vote in 2016.

As I and others have argued at length elsewhere, austerity was based on two false premises — ‘lies’ might be more accurate. The first was that excessive spending by Labour was a cause of the 2008 crisis. The second was that the size of the UK’s government debt posed serious and immediate risks that outweighed other concerns.

One thing that almost all macroeconomists agree on is that when recovering from a severe downturn such as 2008 — and with interest rates at nearly zero — the deficit should not be the target of policy. Instead, it should be allowed to expand until the economy has recovered.

Simply put, the deficit should not be used as a yardstick for successful management of an economy in the aftermath of a major economic crisis such as 2008. But since eliminating the deficit was the single most important target of the Conservatives’ so-called Long Term Economic Plan, we should examine the record.

In 2010, Osborne stated that the deficit would be eliminated by 2015. Two years after that deadline passed, the current Conservative manifesto states — in a passage that would not pass any undergraduate economics exam — that they will ‘aim to’ eliminate the deficit by 2025.

Even on their own entirely misguided terms, they have failed completely.


While the dangers of the public debt have been vastly exaggerated by the Conservatives, they have had little to say about private sector debt. It is now widely accepted that the only remaining motor of economic growth is consumption spending. But with wages stagnant, continued growth of consumption cannot be sustained without rising levels of household debt.

This is the reason given when economists are asked why their predictions of post-referendum recession were so wrong: they didn’t anticipate the current credit-driven consumption burst. But this trend has been apparent for at least the last two years. It shouldn’t have been too hard to see this coming.


Just as the Tories tend to stay quiet on private debt, they also have little to say about the ‘other’ deficit — the current account deficit. This is a measure of how much the country relies on foreigners to finance our spending. The deficit expanded from 2011 onward to reach almost 5% of GDP. This is an important source of vulnerability for a country which is about to try to extricate itself from economic integration with its closest neighbours.

Chart: balance of payments, current account balance as per cent of GDP

Overall, the Tories’ economic record is far from impressive: stagnant wages and productivity, weak investment and manufacturing, rising household debt, and a large external deficit.

Now, a reasonable response might be that these are long-standing issues with the UK economy and are not the fault of the Conservatives. There is some truth to this. But if this is the case, Theresa May should identify and acknowledge these issues and provide a clear outline of how her policies will address them. This is not what she has done. Instead, she simply repeats her mantra that only the Conservatives will deliver on the economy, without providing any evidence to support her claim.

And then there is the decision to call a referendum on Brexit. It is hard to think of a more economically reckless move. Household analogies for government economic policy should be avoided — but I can’t think of an alternative in this case.

Following up on an austerity programme with the Brexit referendum is like sending the children to school without lunch money for six years and allowing the house to fall into serious disrepair in order to needlessly over-pay a zero-interest mortgage — and then gambling the house on a dice game.

Given this record, it is astonishing that the Conservatives present themselves, with a straight face, as the party of economic competence — and the media dutifully echoes the message. The truth is that the Conservatives have mismanaged the economy for the last seven years, needlessly imposing austerity, choking off growth in productivity, wages and incomes. They then called an entirely unnecessary referendum, gambling the future prosperity of the country for political gain.

Theresa May is correct — there is little doubt about the economic credibility of the Conservatives. It is in short supply.

Thoughts on the NAIRU

Simon Wren-Lewis’s post attacking Matthew Klein’s critique of the NAIRU provoked some strong reactions. On reflection, my initial response was wide of the mark. Matthew responded saying he agreed with most of Simon’s piece.

So are we all in agreement? I think there are differences, but we need to first clarify the issues.

Matthew’s main point was empirical: if you want to use a relationship between employment and inflation as a policy target it needs to be relatively stable. The evidence suggests it is not.

But there is a deeper question of what the NAIRU actually means – what is a NAIRU? The simple definition is straightforward: it is the rate of unemployment at which inflation is stable. If policy is used to increase demand, reducing unemployment below the NAIRU, inflation will rise until excess demand is removed and unemployment allowed to increase again.

At first glance this appears all but identical to the ‘natural rate of unemployment’, a concept originating with Friedman’s monetarism and inherited by some New Keynesian models – in particular the ‘standard’ sticky-price DSGE model of Woodford and others. In this view, the economy has ‘natural rates’ of output and employment, beyond which any attempt by policy makers to increase demand becomes futile, leading only to ever-higher inflation. Since there is a direct correspondence between stabilizing inflation and fixing output and employment at their ‘natural’ rates, policy makers should simply adjust interest rates to hit an inflation target. In typically modest fashion, economists refer to this as the ‘Divine Coincidence‘ – despite the fact it is essentially imposed on the models by assumption.

Matthew’s piece skips over this part of the history, jumping straight from Bill Phillips’s empirical relationship to the NAIRU. But the NAIRU is a weaker claim than the natural rate. As Simon says, all that is required for a NAIRU is a relationship of the form inf = f(U, E[inf]), i.e. current inflation is some function of unemployment and expected inflation. At its simplest, agents could just assume inflation will be the same in the current period as in the last. Then, employment above some level would cause rising inflation, and vice versa.
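The simplest case just described — expected inflation equal to last period’s inflation — can be put in a toy simulation. All parameter values here are illustrative assumptions, not estimates:

```python
# Toy accelerationist Phillips curve: inf = E[inf] + a*(NAIRU - U),
# with adaptive expectations E[inf] = last period's inflation.
# Holding unemployment below the NAIRU makes inflation ratchet upwards.

def simulate(unemployment, nairu=5.0, a=0.5, inf0=2.0, periods=10):
    """Return the inflation path when unemployment is held fixed."""
    path = [inf0]
    for _ in range(periods):
        expected = path[-1]                        # adaptive expectations
        path.append(expected + a * (nairu - unemployment))
    return path

below = simulate(unemployment=4.0)   # U below NAIRU: inflation rises each period
at = simulate(unemployment=5.0)      # U at the NAIRU: inflation is stable
print(below[-1], at[-1])
```

With unemployment pinned one point below the NAIRU, inflation rises by half a point every period without limit; at the NAIRU it stays put. That asymmetry is the whole content of the accelerationist claim.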

More sophisticated New Keynesian formulations of the NAIRU are a good distance removed from the ‘natural rate’ theory – these models include imperfections in the labour and product markets and a bargaining process between workers and firms. As a result, they incorporate (at least short-run) involuntary unemployment and see inflation as driven by competing claims on output rather than the ‘too much nominal demand chasing too few goods’ story of the monetarists and simple DSGE models.

It is also the case that such a relationship is found in many heterodox models. Engelbert Stockhammer explores heterodox views on the NAIRU in a provocatively-titled paper, ‘Is the NAIRU Theory a Monetarist, New Keynesian, Post Keynesian or Marxist Theory?’. He doesn’t identify a clear heterodox position – some Post-Keynesians reject the NAIRU outright, while others present models which incorporate NAIRU-like relationships.

Engelbert notes that arguably the earliest definition of the NAIRU is to be found in Joan Robinson’s 1937 Essays in the Theory of Employment:

In any given conditions of the labour market there is a certain more or less definite level of employment at which money wages will rise … there is a certain level of employment, determined by the general strategical position of the Trade Unions, at which money wages rise, and at that level of employment there is a certain level of real wages, determined by the technical conditions of production and the degree of monopoly’ (Robinson, 1937, pp. 4-5)

Recent Post-Keynesian models also include NAIRU-like relationships. For example, Godley and Lavoie’s textbook includes a model in which workers and firms compete by attempting to impose money-wage and price increases respectively. The size of wage increases demanded by workers is a function of the employment rate relative to some ‘full employment’ level. That sounds a lot like a NAIRU – but that isn’t how Godley and Lavoie see it:

Inflation under these assumptions does not necessarily accelerate if employment stays in excess of its ‘full employment’ level. Everything depends on the parameters and whether they change … An implication of the story proposed here is that there is no vertical long-run Phillips curve. There is no NAIRU. (Godley and Lavoie, 2007, p. 304, my emphasis)

The authors summarise their view with a quote from an earlier work by Godley:

Indeed if it is true that there is a unique NAIRU, that really is the end of discussion of macroeconomic policy. At present I happen not to believe it and that there is no evidence of it. And I am prepared to express the value judgment that moderately higher inflation rates are an acceptable price to pay for lower unemployment. But I do not accept that it is a foregone conclusion that inflation will be higher if unemployment is lower (Godley 1983: 170, my emphasis).

This highlights a key difference between Post-Keynesian and neoclassical approaches to the NAIRU: where Post-Keynesian models do include NAIRU-like relationships, the relevant employment level is endogenous, due to hysteresis effects for example. In other words, the NAIRU moves around and is influenced by demand-management policy. As such, the NAIRU is not an attractor for the unemployment rate as in many neoclassical models.
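A hysteresis mechanism of this kind can be sketched by letting the NAIRU itself drift towards actual unemployment. The adjustment speed and all values here are illustrative assumptions, not a claim about any particular model:

```python
# Hysteresis sketch: the NAIRU drifts towards observed unemployment,
# so a demand-driven slump raises the NAIRU rather than leaving it
# as a fixed attractor for the unemployment rate.

def nairu_path(unemployment_path, nairu0=5.0, speed=0.2):
    """NAIRU partially adjusts each period towards actual unemployment."""
    nairu = nairu0
    path = [nairu]
    for u in unemployment_path:
        nairu += speed * (u - nairu)   # partial adjustment towards actual U
        path.append(nairu)
    return path

# Five periods of slump (U = 9%) followed by five of recovery (U = 5%)
# leave the NAIRU elevated long after demand has recovered.
path = nairu_path([9.0] * 5 + [5.0] * 5)
print(round(path[5], 2), round(path[-1], 2))
```

Here the causality runs from actual unemployment to the NAIRU, not the other way round — which is why, in such models, demand management matters for the long run.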

Marxist theory also contains something which looks a lot like a NAIRU: the ‘industrial reserve army’ of the unemployed. Marx argued that unemployment is the mechanism by which capitalists discipline workers and prevent wage claims rising to the point at which profits and capital accumulation are depleted. Periodic recessions are therefore a necessary part of the capitalist development process.

This led Nicholas Kaldor to describe Margaret Thatcher as ‘our first Marxist Prime Minister’ – not because she was an advocate of socialist revolution but because she understood the reserve army mechanism: ‘They have managed to create a pool – or a “reserve army” as Marx would have called it – of 3 million unemployed … the British working classes have been thoroughly cowed and frightened.’ (This point is passed over rather quickly in Simon’s piece. In the 1980s, he writes, ‘policy changed and increased unemployment and inflation fell.’)

So we should be careful about blanket dismissals of the NAIRU. Instead, we must be clear how our analysis differs: what are the mechanisms which generate inflationary pressure at low levels of unemployment – conflicting claims or excess nominal demand? Is the NAIRU stable and exogenous? Does it act as an attractor for the unemployment rate, and over what time period? What are the implications for policy?

Ultimately, I think this breaks down into an issue about semantics. How far from the unique, stable, vertical long-run Phillips curve can we get and still have something we call a NAIRU? Simon adopts a very loose definition:

There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth.

I’d like to believe this were true. But I suspect most macroeconomists, trained on New Keynesian DSGE models, have a narrower view: they tend to think in terms of a stable short-run sticky-price Phillips curve and a unique long-run Phillips curve at the ‘natural’ rate of employment.

There is one other aspect to consider. Engelbert Stockhammer distinguishes between the New Keynesian NAIRU theory and the New Keynesian NAIRU story. He argues (writing in 2007, just before the crisis) that the NAIRU has been used as the basis for an account of unemployment which blames inflexible labour markets, over-generous welfare states, job protection measures and strong unions. The policy prescriptions are then straightforward: labour markets should be deregulated and welfare states scaled back. Demand management should not be used to reduce unemployment.

While economists have changed their tune substantially in the decade since the financial crisis, I suspect that the NAIRU story is one reason that defence of the NAIRU theory generates such strong reactions.

EDIT: Bruno Bonizzi points me to this piece at the INET blog, which has an excellent discussion of the empirical evidence and theoretical implications of hysteresis effects and an unstable NAIRU.


Image reproduced from Wikipedia:

Philanthrocapitalists meet the world’s poor: international development in the fintech era

Daniela Gabor and Sally Brooks

“Within the global development landscape, few funding areas are hotter right now than financial inclusion” (Inside Philanthropy, May 2016)

“The significant progress in moving away from cash that Bangladesh has made in such a short amount of time is due to the government’s strong leadership, the innovation of the private sector and citizens’ openness to a digital future”  (The Better Than Cash Alliance)

On day one of this year’s World Economic Forum at Davos, OXFAM named US philanthropists Bill Gates and Mark Zuckerberg among the ‘just eight men [who] own as much wealth as half of humanity’. Why, the question was then raised, are we bashing ‘philanthrocapitalists’ like Gates, who have donated so much of their wealth to tackling global poverty?

Philanthropists, we argue in a new paper, are far more influential in international development than commonly understood. Since the 2008 crisis, international development has embraced financial inclusion as the new development paradigm. With this, development interventions are increasingly organised through a new alliance of developing countries, international financial organisations, ‘philanthropic investment firms’ and fintech companies, which we term the fintech-philanthropy-development (FPD) complex. The FPD version of financial inclusion – know thy (irrational) customer – celebrates the power of technology to simultaneously achieve positive returns, philanthropy and human development.

‘Transform mobile behaviour into financial opportunity’

The premise is simple. Poverty can be tackled faster if the poor have better access to finance. And something unprecedented is happening with the world’s poor in Sub-Saharan Africa, Asia, Latin America and the Caribbean. Roughly 1.7 billion of the 2 billion people without formal access to finance have a mobile phone. These phones generate ‘digital footprints’ that can be harnessed by big data and predictive algorithms to better understand, and thus include, the ‘unbankable’. Transforming mobile behaviour into financial opportunity.

The FPD complex’s origins can be traced back to the Alliance for Financial Inclusion (AFI). Created in 2011 with funding from the Bill and Melinda Gates Foundation (BMGF) and endorsed by the G20 as key to achieving the sustainable development goals, AFI brought together policy makers from ninety developing countries united in their commitment to work with private actors and international development organisations (the World Bank) in order to ‘reach the world’s 2.5 billion unbanked’. By 2014, the Omidyar Network (backed by eBay founder Pierre Omidyar) had become the second philanthropic investment organisation officially partnered with AFI. That same year, AFI launched the Public Private Dialogue Platform (PPD), promising the private sector ‘an unprecedented opportunity’ to connect to policy makers who are regulating new and high-growth markets. In 2015, Mastercard, Visa and the Spanish bank BBVA became AFI members, with more partnerships to be formalised in the future. Meanwhile, AFI acts as an umbrella and incubator for a growing number of global and regional financial inclusion programmes, such as the UNDP-funded ‘Mobile Money for the Poor’ (MM4P) and ‘Shaping Inclusive Finance Transformations’ (SHIFT).

Thus, the FPD complex sees the growing influence of a digital elite in development interventions. The public-private partnerships are predicated on the idea that technology and big data can play a critical role in advancing financial inclusion. For example, the Omidyar Network is investing in fintech companies whose strategic goal is to ‘disrupt traditional risk assessment’ by, for example, predicting customers’ ‘appetite for risk’ based on ‘patterns of calls and text messages’, or even inviting them to participate in online games and quizzes that generate behavioural data that can be fed into predictive algorithms. The promise is to connect lenders to upwardly mobile customers. Through these strategies of what Izabella Kaminska has called ‘financial intrusion’, consumers’ ‘digital footprints’ are being created, without their knowledge, and used or stored for future commercial use.

A cash-lite future

India’s recent demonetization initiative has received global attention. Widely judged a misstep, the decision to withdraw 86% of all cash from circulation is typically explained as a fight against the shadow economy. But there is more to India’s initiative. It represents one (important) element of the country’s adoption of the FPD approach to development.

Indeed, the state agreed to play an important role in the harvesting and commodification of digital footprints, by opening up its direct relationship with the poor to fintech. A spinoff from the AFI, the Better than Cash Alliance, encourages developing countries to digitalize social transfers, thus reaching the ‘unbankable’ at a stroke through the long arm of the state. Housed at the UN as implementing partner for the G20 Global Partnership for Financial Inclusion, the Better than Cash Alliance promises that a ‘cash lite’ Finance for Development agenda would put the UN’s Sustainable Development Goals within reach (Goodwin-Groen 2015).

The Better Than Cash Alliance has proved adept at illustrating the benefits of a cash-lite future. Digitizing payments from government to people can save the government of Bangladesh USD 146 million per year across six social safety net programmes. India, a member since 2015, saves USD 2 billion by paying cooking gas subsidies digitally.

While such savings appeal immediately to governments worldwide, a Bankable Frontier Associates report made clear what is at stake in the ‘journey towards cash lite’. For financial service providers, the opportunities for FI via digital payments do not arise from increasing use of bank deposits by the previously unbanked, since bank accounts are not ‘daily relevant’. Rather, opportunities ‘come from financial service providers using the digital information generated by e-payments and receipts to form a profile for each individual customer’. This digital profiling then enables providers to offer more appropriate and relevant products.

Thus, data and algorithms become critical to pushing the risk frontier in low-income countries, as fintech companies create, collect and commodify behavioral data, within an ‘ecosystem’ fostered by networks of philanthropic investors, development finance institutions and donors and policy makers in participating countries.

Another, potentially more problematic issue arises in this process. Traditional microfinance lenders mobilised peer pressure in ‘solidarity groups’ to discipline borrowers into being ‘good financial citizens’. In the fintech era of international development, the mantra is ‘know thy irrational customer’ via algorithms. Cignifi, for instance, promises to continuously track changes in customers’ mobile behaviour, as mobile phones generate data that capture users moving from ‘one behavioural state to another’. This would allow lenders to create choice architectures that nudge customers towards the behaviours needed to preserve their mobile-data-based credit scores.

While the ethics of nudge are increasingly being debated, digital financial inclusion combines the inherent opacity of nudge techniques with that of predictive algorithm design, technically complex and subject to commercial confidentiality, in ways that have remained remarkably free from scrutiny.

While these programmes have adopted the language of inclusion and access, the question is: who is actually accessing whom? Since the 2008 financial crisis, a tendency to see its victims, rather than the system that created it, as most in need of correction has become entrenched. Meanwhile, the possibilities of ‘fintech’, together with the discovery of the ‘nudge’ toolbox, have created new opportunities for financial capital to reach ever more remote consumers. As if the crisis never happened, this is the sub-prime ‘moment’ recast, perversely, as development policy, turning poverty in the developing world into a new frontier for profit-making and accumulation.


Full Reserve Banking: The Wrong Cure for the Wrong Disease

Towards the end of last year, the Guardian published an opinion piece arguing there is a link between climate change and the monetary system. The author, Jason Hickel, claims our current monetary system induces a need for continuous economic growth – and is therefore an important cause of global warming. As a solution, Hickel endorses the full reserve banking proposals put forward by the pressure group Positive Money (PM).

This is an argument I encounter regularly. It appears to have become the default position among many environmental activists: it is official Green Party policy. This is unfortunate because both the diagnosis of the problem and the proposed remedy are mistaken. It is one element of a broader set of arguments about money and banking put forward by PM. (Hickel is not part of PM, but his article was promoted by PM on social media, and similar arguments can be found on the PM website.)

The PM analysis starts from the observation that money in modern economies is mostly issued by private banks: most of what we think of as money is not physical cash but customer deposits at retail banks. Further, for a bank to make a loan, it does not require someone to first make a cash deposit. Instead, when a bank makes a loan it creates money ‘out of thin air’. Bank lending increases the amount of money in the system.

This is true. And, as Positive Money rightly note, neither the mechanism nor the implications are widely understood. But Positive Money do little to increase public understanding – instead of explaining the issues clearly, they imbue this money creation process with an unnecessary air of mysticism.

This isn’t difficult. As J. K. Galbraith famously observed: ‘The process by which banks create money is so simple the mind is repelled. With something so important, a deeper mystery seems only decent.’

To the average person, money appears as something solid, tangible, concrete. For most, money – or lack of it – is an important (if not overwhelming) constraint on their lives. How can money be something which is just created out of thin air? What awful joke is this?

This leads to what Perry Mehrling calls the ‘fetish of the real’ and ‘alchemy resistance’ – people instinctively feel they have been duped and look for a route back to solid ground. Positive Money exploit this unease but deepen the confusion by providing an inaccurate account of the functioning of the monetary and financial system.

There is nothing new about the ‘fetish of the real’. Economists have been trying to separate the ‘real’ economy from the financial system for centuries. Restrictive ‘tight money’ proposals have more commonly been associated with free-market economists on the political right, while economists inclined towards collectivism have favoured less monetary restriction. One reason is that the right tends to view inflation as the key macroeconomic danger while the left is more concerned with unemployment.

The original blueprint for the Positive Money proposal is known as the Chicago Plan, named after a group of University of Chicago economists who argued for the replacement of ‘fractional reserve’ banking with ‘full reserve banking’. To understand what this means, look at the balance sheet below.


The table shows a stylised list of the assets and liabilities on a bank balance sheet. On the asset side, banks hold loans made to customers and ‘reserve balances’ (or ‘reserves’ for short). The latter item is a claim on the Central Bank – for example, the Bank of England in the UK. These reserve balances are used when banks make payments among themselves. Reserves can also be swapped on demand for physical cash at the Central Bank. Since only the Central Bank can create and issue these reserves, alongside physical cash, they form the part of the ‘money supply’ which is under direct state control.

For banks, reserves therefore play a role similar to that of deposits for the general public – they allow banks to obtain cash on demand or to make payments directly between their individual accounts at the Bank of England.

The only thing on the liability side is customer deposits – what we think of as ‘money’. These deposits can increase for two reasons. If customers decide to ‘deposit’ cash with the bank, the bank accepts the cash (which it will probably swap for reserves at the Central Bank) and adds a deposit balance for that customer. Both sides of the bank balance sheet increase by the same amount: a deposit of £100 cash will lead to an increase in reserves of £100 and an increase in deposits of £100.

Most increases in deposits happen a different way, however. When a bank makes a loan, both sides of its balance sheet increase as in the above example – except this time it is ‘loans’, not ‘reserves’, that increases on the asset side. When a bank lends £100 to a customer, both ‘loans’ and ‘deposits’ increase by £100. Absent any other changes, the amount of money in the world increases by £100: money has been created ‘out of nothing’.
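The balance-sheet mechanics can be traced directly in a toy model. This is a minimal sketch of the accounting, not a description of any real bank’s systems:

```python
# Toy bank balance sheet: making a loan expands both sides at once,
# creating a new deposit 'out of nothing'.

class Bank:
    def __init__(self):
        self.reserves = 0.0   # assets: claims on the central bank
        self.loans = 0.0      # assets: loans to customers
        self.deposits = 0.0   # liabilities: customers' money

    def accept_cash_deposit(self, amount):
        self.reserves += amount   # cash is swapped for reserves...
        self.deposits += amount   # ...and the customer is credited

    def make_loan(self, amount):
        self.loans += amount      # the asset side grows...
        self.deposits += amount   # ...and so does the money supply

    def balances(self):
        """Assets must always equal liabilities."""
        return self.reserves + self.loans == self.deposits

bank = Bank()
bank.make_loan(100)
print(bank.loans, bank.deposits)   # both sides up by £100
```

Note that `make_loan` touches no reserves at all: the deposit is created by the act of lending, which is exactly the point the text makes.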

The Positive Money proposal – like the Chicago Plan of the 1930s – would outlaw this money-creating power. Under the proposal, banks would not be allowed to make loans: the only asset allowed on their balance sheet would be ‘reserves’ – hence the name ‘full reserve banking’. Since reserves can only be issued by the Central Bank, private banks would lose their ability to create new money when they make loans.

What’s wrong with the PM proposal? To answer, we first need to ask what problem PM are trying to solve. They list several issues on their website: environmental degradation, inequality, financial instability and a lack of decent jobs. How does Positive Money think the monetary system contributes to these problems? The following quote and diagram, taken from the Positive Money website, give the crux of the argument:

The ‘real’ (non-financial), productive economy needs money to function, but because all money is created as debt, that sector also has to pay interest to the banks in order to function. This means that the real-economy businesses – shops, offices, factories etc – end up subsidising the banking sector. The more private debt in the economy, the more money is sucked out of the real economy and into the financial sector.


This illustrates the central misconception in PM’s description of money and banking. The ‘real economy’ needs money to operate – so individuals and business can make payments. This is correct. But PM imply that in order to obtain this money, the ‘real economy’ must borrow from the banks. And because the banks charge interest on this lending, they then end up sucking money back out of the ‘real economy’ as interest payments. In order to cover these payments, the ‘real economy’ must obtain more money – which it has to borrow at interest! And so on.

If this were a genuine description of the monetary system, the debts of the ‘real economy’ to the banks would grow uncontrollably and the system would have collapsed decades ago – PM essentially describes a pyramid scheme. The connection to the ‘infinite growth’ narrative is also clear – the ‘real economy’ is forced to produce ever more output just to feed the banks, destroying the environment in the process.

But neither the quote nor the diagram is accurate. To illustrate, look at the diagram below. It shows a bank, with a balance sheet as above, along with two individuals, Jack and Jill. Two steps are shown. In the first step, Jill takes out a loan from the bank – the bank creates new money as it lends. In the second step, Jill uses this money to buy something from Jack. Jack ends up holding a deposit, while Jill is left with an outstanding loan from the bank. The bank sits between the two individuals.

The point here is twofold. First, the ultimate creditor – the person providing credit to Jill – is not the bank, but Jack. Jack has lent to Jill, with the bank acting as a ‘middleman’. The bank is not a net lender, but an intermediary between Jill and Jack – albeit one with a very important function: it guarantees Jill’s loan. If Jill doesn’t make good on her promise to pay, the bank will take the hit – not Jack. Second, the initial decision to lend wasn’t made by Jack – it was made by the bank. By inserting itself between Jack and Jill, and substituting its own guarantee for Jill’s, the bank allows Jill to borrow and spend without Jack first choosing to lend. But in accepting a deposit as payment, Jack also makes a loan – to the bank. As well as acting as ‘money’, a bank deposit is a credit relationship: a loan from the deposit-holder to the bank.

A more accurate depiction of the outcome of bank lending is therefore the following:


Jill will be charged interest on her loan – but Jack will also receive interest on his deposit. Interest payments don’t flow in only one direction – to the bank – as in the PM diagram. Instead, interest flows both in and out of the bank, which makes its profits on the ‘spread’ (the difference) between the two interest rates: it will charge Jill a higher rate than it pays Jack. This is not to argue that there aren’t deep problems with the ways the banking system is able to generate large profits, often through unproductive or even fraudulent activity – but rather that money creation by banks does not cause the problems suggested by Positive Money.
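
The spread mechanism can be illustrated with some entirely made-up numbers:

```python
# Illustrative only: the rates below are invented to show the 'spread'.
loan_rate = 0.06      # rate the bank charges Jill on her £100 loan
deposit_rate = 0.01   # rate the bank pays Jack on his £100 deposit
balance = 100

interest_in = balance * loan_rate      # flows from Jill to the bank
interest_out = balance * deposit_rate  # flows from the bank to Jack
bank_profit = interest_in - interest_out

print(bank_profit)  # 5.0: the bank earns the spread, not the whole interest flow
```

The bank's income is the difference between the two flows, not the gross interest paid by borrowers – which is why money creation does not, by itself, siphon income out of the ‘real economy’.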

So the banks don’t endlessly siphon off income from the ‘real economy’ – but isn’t it still the case that in order to obtain money for payments, someone has to borrow at interest and someone else has to lend?

To see why this is misleading, we need to consider not only how money is created but also how it is destroyed. We’ve already seen how new money is created when a bank makes a loan. The process also happens in reverse: money is destroyed when loans are repaid. For example, if, after the steps above, Jack were subsequently to buy something from Jill, the deposit would return to her ownership and she could pay off her loan – extinguishing money in the process.

One possibility is that instead of selling goods to Jack – for example a phone or a bike – Jill ‘sells’ Jack an IOU: a private loan agreement between the two of them. In this case Jill can pay off her loan to the bank and replace it with a direct loan from Jack. This would leave the balance sheets looking as follows:


Note that after Jill repays her loan, the bank is no longer involved – there is only a direct credit relationship between Jack and Jill.
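
The repayment step can be sketched in the same spirit – again with illustrative figures:

```python
# Starting position after Jill's purchase: her £100 bank loan is
# outstanding and Jack holds the £100 deposit. Figures are illustrative.
loans = 100     # bank asset: Jill's loan
deposits = 100  # bank liability: the deposit, now held by Jack

# Jack buys something from Jill (or 'sells' her an IOU): the deposit
# returns to Jill, who uses it to repay her bank loan. Both sides of
# the bank's balance sheet shrink together.
loans -= 100
deposits -= 100

print(deposits)  # 0: the money created by the original loan is destroyed
# If Jill sold Jack an IOU, what remains is a direct credit relationship
# between the two of them, with the bank no longer involved.
```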

This mechanism operates constantly in the modern economy – individuals swap bank deposits for other financial assets, or pay a proportion of their wages into a pension scheme. In fact, the volume of non-bank financial intermediation outweighs the volume of bank lending. The implication is that the demand from individuals for interest-bearing financial instruments is greater than the demand for bank deposits as a means of payment. Rather than banks being able to force loans on people because of their need for money to make payments, the opposite is true: people save for their future by getting rid of money and swapping it for other financial assets.

The quantity of money in the system isn’t determined by bank lending, as in the PM account. Instead it is a residual – the amount of deposits remaining in customer accounts after firms borrow, hire and invest; workers receive wages, consume and save; and the financial system matches savers to borrowers directly through equity and bond markets, pension funds and other non-bank mechanisms.

So the monetary argument is wrong. What of the argument that lending at interest requires endless economic growth?

Economic growth can be broken down into two components: population increase and growth in output per person. For around the last 100 years, global GDP growth of around 3 per cent per year has been split roughly evenly between these two factors: about 1.5 per cent per year came from population growth and about 1.5 per cent from rising output per person. The economy is growing partly because there are more people in it – and this is not caused by bank lending. Further, projections suggest that the global population will peak by around 2050 and then begin to fall as a result of falling fertility rates.
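
The decomposition can be checked with the rough figures above:

```python
# GDP growth ≈ population growth + growth in output per person.
# The ~1.5% figures are the approximate historical values cited in the text.
population_growth = 0.015   # ~1.5% per year
per_capita_growth = 0.015   # ~1.5% per year

# Exact relationship: (1 + g) = (1 + n)(1 + y); the simple sum is a
# close approximation because the cross-term n*y is tiny.
gdp_growth_exact = (1 + population_growth) * (1 + per_capita_growth) - 1
gdp_growth_approx = population_growth + per_capita_growth

print(round(gdp_growth_approx, 3))  # 0.03, i.e. ~3% per year
```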

What about growth of output per head? Again, the answer is no. There is simply no mechanistic link between lending at interest and economic growth. Interest flows distribute income from one group of people to another – from borrowers to lenders. Government taxation and social security payments play a similar role. Among other functions, lending and borrowing at interest provides a mechanism by which people can accumulate financial claims during their working life which allow them to receive an income after retirement when they consume out of previously acquired wealth.  This mechanism is perfectly compatible with zero or negative growth.

If anything, excessive lending is likely to cause lower growth in the long run: in the aftermath of big credit expansions and busts, economic growth declines as households and firms reduce spending in an attempt to pay down debt.

Even if we did want to reduce growth rates, history teaches us that using monetary means to do so is a very bad idea. During the monetarist experiment of the early 1980s, the Thatcher government tried exactly this: they restricted growth of the money supply, ostensibly in an attempt to reduce inflation. The result was a recession in which 3 million people were out of work.

Oddly, despite the environmental argument, we can also find arguments from PM about ways that monetary mechanisms can be used to induce higher output and employment. These proposals, which go by titles such as ‘Green QE’ and ‘People’s QE’, argue that the government should issue new money and use it to pay for infrastructure spending.

An increase in government infrastructure spending is undoubtedly a good idea. But we don’t need to change the monetary system to achieve it. The public sector can do what it has always done and issue bonds to finance expenditures. (This sentence will inevitably raise the ire of the Modern Money Theory crowd, but I don’t want to get sidetracked by that debate here.)

Further, the conflation of QE with the use of newly printed money for government spending is another example of sleight of hand by Positive Money. QE involves swapping one sort of financial asset for another – the central bank swaps reserves for government bonds. This is a different type of operation to government investment spending – but Positive Money present the case as if it were a straight choice between handing free money to banks and spending money on health and education. It is not. It should also be emphasised that printing money to pay for government spending is an entirely distinct policy proposal to full reserve banking – which would do nothing in itself to raise infrastructure spending – but this is obfuscated because PM labels both proposals ‘Sovereign Money’.

The same is true of other issues raised by PM: inequality, excessive debt, and financial instability. All are serious issues which urgently need to be addressed. But PM is wrong to promise a simple fix for these problems. None would be solved by full reserve banking – on the contrary, it is likely to exacerbate some. For example, by narrowing the focus to the deposit-issuing banks, PM excludes the rest of the financial system – investment banks, hedge funds, insurance companies, money market funds and many others – from consideration. The relationship between retail banks and these ‘shadow’ banking institutions is complex, but in narrowing the focus of ‘financial stability’ to only the former, the PM proposals would potentially shift risk-taking activity away from the more regulated retail banking system to the less regulated sector.

Another justification PM provide for full reserve banking is that issuing money generates profits in itself. By stripping the banks of money creation powers, the government could instead gain this profit (known as ‘seigniorage’):

Government finances would receive a boost, as the Treasury would earn the profit on creating electronic money, instead of only on the creation of bank notes. The profit on the creation of bank notes has raised £16.7bn for the Treasury over the past decade. But by allowing banks to create electronic money, it has lost hundreds of billions of potential revenue – and taxpayers have ended up making up the difference.

This is incorrect. As explained above, banks make a profit on the ‘spread’ between rates of interest on deposits and loans. There is simply no reason why the act of issuing money generates profits in itself. It’s not clear where the £16.7bn figure is taken from in the above quote since no source is given. (While Martin Wolf appears to support this position, he instead seems to be referring to general banking profits from interest spreads, fees etc.)

None of the above should be taken to imply that there are not problems with the current system – there are many. The banks are too big, too systemically important and too powerful. Part of their power arises from the guarantees and backstops provided by the state: deposit insurance, central bank ‘lender of last resort’ facilities and, ultimately, tax-payer bailouts when losses arise as a result of banks taking on too much risk in the search for profits. QE is insufficient as a macroeconomic tool to deal with on-going repercussions of the 2008 crisis – government spending is needed – and has pernicious side effects such as widening wealth inequality. The state should use the guarantees provided to the banks as leverage to force much more substantial changes of behaviour.

Milton Friedman was a proponent of the original Chicago Plan, and the intellectual force behind the monetarist experiment of the early 1980s. He was also deeply opposed to Roosevelt’s New Deal – a programme of government borrowing and spending aimed at reviving the economy during the Great Depression. Friedman described the New Deal as ‘the wrong cure for the wrong disease’ – in his view the problems of the 1930s were caused by a shrinking money supply due to bank failures. Like PM, he favoured a simple monetary solution: the Fed should print money to counteract the effect of bank failures.

He was wrong about the New Deal. But his description is fitting for Positive Money’s Friedman-inspired monetary solutions to an array of complex issues: lack of decent jobs, inequality, financial instability and environmental degradation. The causes of these problems run deeper than a faulty monetary system. There are no simple quick-fix solutions.

PM wrongly diagnose the problem when they focus on the monetary system – so their prescription is also faulty. Full reserve banking is the wrong cure for the wrong disease.

Economics, Ideology and Trump

So the post-mortem begins. Much electronic ink has already been spilled and predictable fault lines have emerged. Debate rages in particular on the question of whether Trump’s victory was driven by economic factors. Like Duncan Weldon, I think Torsten Bell gets it about right – economics is an essential part of the story even if the complete picture is more complex.

Neoliberalism is a word I usually try to avoid. It’s often used by people on the left as an easy catch-all to avoid engaging with difficult issues. Broadly speaking, however, it provides a short-hand for the policy status quo over the last thirty years or so: free movement of goods, labour and capital, fiscal conservatism, rules-based monetary policy, deregulated finance and a preference for supply-side measures in the labour market.

Some will argue this consensus has nothing to do with the rise of far-right populism. I disagree. Both economics and economic policy have brought us here.

But to what extent has academic economics provided the basis for neoliberal policy? The question had been in my mind even before the Trump and Brexit votes. A few months back, Duncan Weldon posed the question, ‘whatever happened to deficit bias?’ In my view, the responses at the time missed the mark. More recently, Ann Pettifor and Simon Wren Lewis have been discussing the relationship between ideology, economics and fiscal austerity.

I have great respect for Simon – especially his efforts to combat the false media narratives around austerity. But I don’t think he gets it right on economics and ideology. His argument is that in a standard model – a sticky-price DSGE system – fiscal policy should be used when nominal rates are at the zero lower bound. Post-2008 austerity policies are therefore at odds with the academic consensus.

This is correct in simple terms, but I think misses the bigger picture of what academic economics has been saying for the last 30 years. To explain, I need to recap some history.

Fiscal policy as a macroeconomic management tool is associated with the ideas of Keynes. Against the academic consensus of his day, he argued that the economy could get stuck in periods of demand deficiency characterised by persistent involuntary unemployment. The monetarist counter-attack was led by Milton Friedman – who denied this possibility. In the long run, he argued, the economy has a ‘natural’ rate of unemployment to which it will gravitate automatically (the mechanism still remains to be explained). Any attempt to use activist fiscal or monetary policy to reduce unemployment below this natural rate will only lead to higher inflation. This led to the bitter disputes of the 1960s and 70s between Keynesians and Monetarists. The Monetarists emerged as victors – at least in the eyes of the orthodoxy – with the inflationary crises of the 1970s. This marks the beginning of the end for fiscal policy in the history of macroeconomics.

In Friedman’s world, short-term macro policy could be justified in a deflationary situation as a way to help the economy back to its ‘natural’ state. But, for Friedman, macro policy means monetary policy. In line with the doctrine that the consumer always knows best, government spending was proscribed as distortionary and inefficient. For Friedman, the correct policy response to deflation is a temporary increase in the rate of growth of the money supply.

It’s hard to view Milton Friedman’s campaign against Keynes as disconnected from ideological influence. Friedman’s role in the Mont Pelerin society is well documented. This group of economic liberals, led by Friedrich von Hayek, formed after World War II with the purpose of opposing the move towards collectivism of which Keynes was a leading figure. For a time at least, the group adopted the term ‘neoliberal’ to describe their political philosophy. This was an international group of economists whose express purpose was to influence politics and politicians – and they were successful.

Hayek’s thesis – which acquires a certain irony in light of Trump’s ascent – was that collectivism inevitably leads to authoritarianism and fascism. Friedman’s Chicago economics department formed one point in a triangular alliance with Lionel Robbins’ LSE in London, and Hayek’s fellow Austrians in Vienna. While in the 1930s, Friedman had expressed support for the New Deal, by the 1950s he had swung sharply in the direction of economic liberalism. As Brad Delong puts it:

by the early 1950s, his respect for even the possibility of government action was gone. His grudging approval of the New Deal was gone, too: Those elements that weren’t positively destructive were ineffective, diverting attention from what Friedman now believed would have cured the Great Depression, a substantial expansion of the money supply. The New Deal, Friedman concluded, had been ‘the wrong cure for the wrong disease.’

While Friedman never produced a complete formal model to describe his macroeconomic vision, his successor at Chicago, Robert Lucas did – the New Classical model. (He also successfully destroyed the Keynesian structural econometric modelling tradition with his ‘Lucas critique’.) Lucas’ New Classical colleagues followed in his footsteps, constructing an even more extreme version of the model: the so-called Real Business Cycle model. This simply assumes a world in which all markets work perfectly all of the time, and the single infinitely lived representative agent, on average, correctly predicts the future.

This is the origin of the ‘policy ineffectiveness hypothesis’ – in such a world, government becomes completely impotent. Any attempt at deficit spending will be exactly matched by a corresponding reduction in private spending – the so-called Ricardian Equivalence hypothesis. Fiscal policy has no effect on output and employment. Even monetary policy becomes totally ineffective: if the central bank chooses to loosen monetary policy, the representative agent instantly and correctly predicts higher inflation and adjusts her behaviour accordingly.

This vision, emerging from a leading centre of conservative thought, is still regarded by the academic economics community as a major scientific step forward. Simon describes it as ‘a progressive research programme’.

What does all this have to do with the current status quo? The answer is that this model – with one single modification – is the ‘standard model’ which Simon and others point to when they argue that economics has no ideological bias. The modification is that prices in the goods market are slow to adjust to changes in demand. As a result, Milton Friedman’s result that policy is effective in the short run is restored. The only substantial difference from Friedman’s model is that the policy tool is the rate of interest, not the money supply. In a deflationary situation, the central bank should cut the nominal interest rate to raise demand and assist the automatic but sluggish transition back to the ‘natural’ rate of unemployment.

So what of Duncan’s question: what happened to deficit bias? This refers to the assertion in economics textbooks that there will always be a tendency for governments to allow deficits to increase. The answer is that it was written out of the textbooks decades ago – because it is simply taken as given that fiscal policy is not the correct tool.

To check this, I went to our university library and looked through a selection of macroeconomics textbooks. Mankiw’s ‘Macroeconomics’ is probably the most widely used. I examined the 2007 edition – published just before the financial crisis. The chapter on ‘Stabilisation Policy’ dispenses with fiscal policy in half a page – a case study of Romer’s critique of Keynes is presented under the heading ‘Is the Stabilization of the Economy a Figment of the Data?’ The rest of the chapter focuses on monetary policy: time inconsistency, interest rate rules and central bank independence. The only appearance of the liquidity trap and the zero lower bound is in another half-page box, but fiscal policy doesn’t get a mention.

The post-crisis twelfth edition of Robert Gordon’s textbook does include a chapter on fiscal policy – entitled ‘The Government Budget, the Government Debt and the Limitations of Fiscal Policy’. While Gordon acknowledges that fiscal policy is an option during strongly deflationary periods when interest rates are at the zero lower bound, most of the chapter is concerned with the crowding out of private investment, the dangers of government debt and the conditions under which governments become insolvent. Of the textbooks I examined, only Blanchard’s contained anything resembling a balanced discussion of fiscal policy.

So, in Duncan’s words, governments are ‘flying a two engined plane but choosing to use only one motor’ not just because of media bias, an ill-informed public and misguided politicians – Simon’s explanation – but because they are doing what the macro textbooks tell them to do.

The reason is that the standard New Keynesian model is not a Keynesian model at all – it is a monetarist model. Aside from the mathematical sophistication, it is all but indistinguishable from Milton Friedman’s ideologically-driven description of the macroeconomy. In particular, Milton Friedman’s prohibition of fiscal policy is retained with – in more recent years – a caveat about the zero-lower bound (Simon makes essentially the same point about fiscal policy here).

It’s therefore odd that when Simon discusses the relationship between ideology and economics he chooses to draw a dividing line between those who use a sticky-price New Keynesian DSGE model and those who use a flexible-price New Classical version. The beliefs of the latter group are, Simon suggests, ideological, while those of the former group are based on ideology-free science. This strikes me as arbitrary. Simon’s justification is that, despite the evidence, the RBC model denies the possibility of involuntary unemployment. But the sticky-price version – which denies any role for inequality, finance, money, banking, liquidity, default, long-run unemployment, the use of fiscal policy away from the ZLB, supply-side hysteresis effects and plenty else besides – is acceptable. He even goes so far as to say ‘I have no problem seeing the RBC model as a flex-price NK model’ – even the RBC model is non-ideological so long as the hierarchical framing is right.

Even Simon’s key distinction – the New Keynesian model allows for involuntary unemployment – is open to question. Keynes’ definition of involuntary unemployment is that there exist people willing and able to work at the going wage who are unable to find employment. On this definition the New Keynesian model falls short – in the face of a short-run demand shortage caused by sticky prices the representative agent simply selects a new optimal labour supply. Workers are never off their labour supply curve. In the Smets Wouters model – a very widely used New Keynesian DSGE model – the labour market is described as follows: ‘household j chooses hours worked Lt(j)’. It is hard to reconcile involuntary unemployment with households choosing how much labour they supply.

What of the position taken by the profession in the wake of 2008? Reinhart and Rogoff’s contribution is by now infamous. Ann also draws attention to the 2010 letter signed by 20 top-ranking economists – including Rogoff – demanding austerity in the UK. Simon argues that Ann overlooks the fact that ‘58 equally notable economists signed a response arguing the 20 were wrong’.

It is difficult to agree that the signatories to the response letter, organised by Lord Skidelsky, are ‘equally notable’. Many are heterodox economists – critics of standard macroeconomics. Those mainstream economists on the list hold positions at lower-ranking institutions than the 20. I know many of the 58 personally – I know none of the 20. Simon notes:

Of course those that signed the first letter, and in particular Ken Rogoff, turned out to be a more prominent voice in the subsequent debate, but that is because he supported what policymakers were doing. He was mostly useful rather than influential.

For Simon, causality is unidirectional: policy-makers cherry-pick academic economics to fit their purpose but economists have no influence on policy. This seems implausible. It is undoubtedly true that pro-austerity economists provided useful cover for small-state ideologues like George Osborne. But the parallels between policy and academia are too strong for the causality to be unidirectional.

Osborne’s small state ideology is a descendant of Thatcherism – the point when neoliberalism first replaced Keynesianism. Is it purely coincidence that the 1980s was also the high-point for extreme free market Chicago economics such as Real Business Cycle models?

The parallel between policy and academia continues with the emergence of the sticky-price New Keynesian version as the ‘standard’ model in the 90s alongside the shift to the third way of Blair and Clinton. Blairism represents a modified, less extreme, version of Thatcherism. The all-out assault on workers and the social safety net was replaced with ‘workfare’ and ‘flexicurity’.

A similar story can be told for international trade, as laid out in this excellent piece by Martin Sandbu. In the 1990s, just as the ‘heyday of global trade integration was getting underway’, economists were busy making the case that globalisation had no negative implications for employment or inequality in rich nations. To do this, they came up with the ‘skill-biased technological change’ (SBTC) hypothesis. This states that as technology advances and the potential for automation grows, the demand for high-skilled labour increases. The hitch is that educational standards must rise before the gains from automation can be felt by those outside the top income percentiles. The result is a ‘race between education and technology’ – a race which technology was winning, leading to weaker demand for middle- and low-skill workers and rising ‘skill premiums’ for high-skilled workers.

Writing in the Financial Times shortly before the financial crisis, Jagdish Bhagwati argued that those who looked to globalisation as an explanation for increasing inequality were misguided:

The culprit is not globalization but labour-saving technical change that puts pressure on the wages of the unskilled. Technical change prompts continual economies in the use of unskilled labour. Much empirical argumentation and evidence exists on this. (FT, January 4, 2007, p. 11)

As Krugman put it:

The hypothesis that technological change, by raising the demand for skill, has led to growing inequality is so widespread that at conferences economists often use the abbreviation SBTC – skill-biased technical change – without explanation, assuming that their listeners know what they are talking about (p. 132)

Over the course of his 2007 book, Krugman sets out on a voyage of discovery – ‘That, more or less, is the story I believed when I began working on this book’ (p. 6). He arrives at the astonishing conclusion – ‘[i]t sounds like economic heresy’ (p. 7) – that politics can influence inequality:

[I]nstitutions, norms and the political environment matter a lot more for the distribution of income – and … impersonal market forces matter less – than Economics 101 might lead you to believe (p. 8)

The idea that rising pay at the top of the scale mainly reflect social and political change, … strikes some people as … too much at odds with Economics 101.

If a left-leaning Nobel prize-winning economist has trouble escaping from the confines of Economics 101, what hope for the less sophisticated mind?

As deindustrialisation rolled through the advanced economies, wiping out jobs and communities, economists continued to deny any role for globalisation. As Martin Sandbu argues,

The blithe unconcern displayed by the economics profession and the political elites about whether trade was causing deindustrialisation, social exclusion and rising inequality has begun to seem Pollyannish at best, malicious at worst. Kevin O’Rourke, the Irish economist, and before him Lawrence Summers, former US Treasury Secretary, have called this “the Davos lie.”

For mainstream macroeconomists, inequality was not a subject of any real interest. While the explanation for inequality lay in the microeconomics – the technical forms of production functions – and would be solved by increasing educational attainment, in macroeconomic terms, the use of a representative agent and an aggregate production function simply assumed the problem away. As Stiglitz puts it:

[I]f the distribution of income (say between labor and capital) matters, for example, for aggregate demand and therefore for employment and output, then using an aggregate Cobb-Douglas production function which, with competition, implies that the share of labor is fixed, is not going to be helpful. (p.596)

Robert Lucas summed up his position as follows: ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ It is hard to view this statement as informed more strongly by science than ideology.

But while economists were busy assuming away inequality in their models, incomes continued to diverge in most advanced economies. It was only with the publication of Piketty’s book that the economics profession belatedly began to turn its back on Lucas.

The extent to which economic insecurity in the US and the UK is driven by globalisation versus policy is still under discussion – my answer would be that it is a combination of both – but the skill-biased technical change hypothesis looks to be a dead end – and a costly one at that.

Similar stories can be told about the role of household debt, finance, monetary theory and labour bargaining power and monopoly – why so much academic focus on ‘structural reform’ in the labour market but none on anti-trust policy?  Heterodox economists were warning about the connections between finance, globalisation, current account imbalances, inequality, household debt and economic insecurity in the decades before the crisis. These warnings were dismissed as unscientific – in favour of a model which excluded all of these things by design.

Are economic factors – and economic policy – partly to blame for the Brexit and Trump votes? And are academic economists, at least in part, to blame for these policies? The answer to both questions is yes. To argue otherwise is to deny Keynes’ dictum that ‘the ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood.’

This quote, ‘mounted and framed, takes pride of place in the entrance hall of the Institute for Economic Affairs’ – the think-tank founded, with Hayek’s encouragement, by Anthony Fisher, as a way to promote and promulgate the ideas of the Mont Pelerin Society. The Institute was a success. Fisher was, in the words of Milton Friedman, ‘the single most important person in the development of Thatcherism’.

The rest, it seems, is history.

What is the Loanable Funds theory?

I had another stimulating discussion with Noah Smith last week. This time the topic was the ‘loanable funds’ theory of the rate of interest. The discussion was triggered by my suggestion that the ‘safe asset shortage’ and associated ‘reach for yield’ are in part caused by rising wealth concentration. The logic is straightforward: since the rich spend less of their income than the poor, wealth concentration tends to increase the rate of saving out of income. The result is more desired saving chasing the available stock of financial assets, pushing up prices and lowering yields.
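The arithmetic behind this mechanism can be sketched with hypothetical numbers – the saving propensities and income shares below are illustrative assumptions, not estimates:

```python
# Illustrative only: two groups with different (assumed) saving propensities.
# Shifting income towards the high-saving group raises the aggregate saving rate.

def aggregate_saving_rate(rich_income_share, s_rich=0.4, s_poor=0.05):
    """Aggregate saving out of income, given the rich group's income share."""
    return rich_income_share * s_rich + (1 - rich_income_share) * s_poor

before = aggregate_saving_rate(0.3)  # rich receive 30% of income
after = aggregate_saving_rate(0.5)   # income concentrates: rich receive 50%
print(before, after)  # the aggregate saving rate rises as income concentrates
```

With these assumed propensities, moving twenty percentage points of income to the high-saving group raises the aggregate saving rate from 15.5% to 22.5% – more desired saving at any given level of income.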

Noah viewed this as a plausible hypothesis but suggested it relies on the loanable funds model. My view was the opposite – I think this mechanism is incompatible with the loanable funds theory. Such disagreements are often enlightening – either one of us misunderstood the mechanisms under discussion, or we were using different definitions. My instinct was that it was the latter: we meant something different by ‘loanable funds theory’ (LFT hereafter).

To try and clear this up, Noah suggested Mankiw’s textbook as a starting point – and found a set of slides which set out the LFT clearly. The model described was exactly the one I had in mind – but despite agreeing that Mankiw’s exposition of the LFT was accurate, it was clear we still didn’t agree about the original point of discussion.

The reason seems to be that Noah understands the LFT to describe any market for loans: there are some people willing to lend and some who wish to borrow. As the rate of interest rises, the volume of available lending increases but the volume of desired borrowing falls. In equilibrium, the rate of interest will settle at r* – the market-clearing rate.

What’s wrong with this? – It certainly sounds like a market for ‘loanable funds’. The problem is that the LFT is not a theory of loan market clearing per se. It’s a theory of macroeconomic equilibrium. It’s not a model of any old loan market: it’s a model of one very specific market – the market which intermediates total (net) saving with total capital investment in a closed economic system.

OK, but saving equals investment by definition in macroeconomic terms: the famous S = I identity. How can there be a market which operates to ensure equality between two identically equal magnitudes?

The issue – as Keynes explained in the General Theory – is that in a modern capitalist economy, the person who saves and the person who undertakes fixed capital investment are not usually the same. Some mechanism needs to be in place to ensure that a decision to ‘not consume’ somewhere in the system – to save – is always matched by a decision to invest – to build a new machine, road or building – somewhere else in the economy.

To see the issue more clearly consider the ‘corn economy’ used in many standard macro models: one good – corn – is produced. This good can either be consumed or invested (by planting in the ground or storing corn for later consumption). The decision to plant or store corn is simultaneously both a decision to ‘not consume’ and to ‘invest’ (the rate of return on investment will depend on the mix of stored to planted corn). In this simple economy S = I because it can’t be any other way. A market for loanable funds is not required.

But this isn’t how modern capitalism works. Decisions to ‘not consume’ and decisions to invest are distributed throughout the economic system. How can we be sure that these decisions will lead to identical intended saving and investment – what ensures that S and I are equal? The loanable funds theory provides one possible answer to this question.

The theory states that decisions to save (i.e. to not consume) are decisive – investment adjusts automatically to accommodate any change in consumption behaviour. To see how this works, we need to recall how the model is derived. The diagram below shows the basic system (I’ve borrowed the figure from Nick Rowe).


The upward sloping ‘desired saving’ curve is derived on the assumption that people are ‘impatient’ – they prefer current consumption to future consumption. In order to induce people to save, a return needs to be paid on their savings. As the return paid on savings increases, consumers are collectively willing to forgo a greater volume of current consumption in return for a future payoff.

The downward sloping investment curve is derived on standard neoclassical marginalist principles. ‘Factors of production’ (i.e. labour and capital) receive ‘what they are worth’ in competitive markets. The real wage is equal to the marginal productivity of labour and the return on ‘capital’ is likewise equal to the marginal productivity of capital. As the ‘quantity’ of capital increases, the marginal product – and thus the rate of return – falls.

So the S and I curves depict how much saving and investment would take place at each possible rate of interest. As long as the S and I curves are well-defined and ‘monotonic’ (a strong assumption), there is only one rate of interest at which the amount people wish to lend is equal to the amount (other) people would like to borrow. This is r*, the point of intersection between the curves. This rate of interest is often referred to as the Wicksellian ‘natural rate’.

Now, consider what happens if the collective impatience of society decreases. At any rate of interest, consumption as a share of income will be lower and desired saving correspondingly higher – the S curve moves to the right. As the S curve shifts to the right – assuming no change in the technology determining the slope and position of the I curve – a greater share of national income is ‘not consumed’. But by pushing down the rate of interest in the loanable funds market, reduced consumption – somewhat miraculously – leads to an automatic increase in investment. An outward shift in the S curve is accompanied by a shift along the I curve.
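The mechanism can be made concrete with a minimal numerical sketch. The linear curves below are illustrative assumptions chosen for simplicity – desired saving S(r) rises with the rate of interest, desired investment I(r) falls, and a rightward shift in the saving curve lowers r* while raising investment:

```python
# Minimal sketch of the loanable funds equilibrium, using assumed linear
# curves: S(r) = a + b*r and I(r) = c - d*r. Coefficients are illustrative.

def equilibrium(a, b=2.0, c=10.0, d=3.0):
    """Solve S(r) = I(r) for the market-clearing rate r* and investment I(r*)."""
    r_star = (c - a) / (b + d)
    return r_star, c - d * r_star

r0, i0 = equilibrium(a=2.0)  # baseline desired-saving curve
r1, i1 = equilibrium(a=4.0)  # society becomes more patient: S shifts right
# In this model the rate of interest falls and investment automatically rises:
print(r0 > r1, i1 > i0)  # True True
```

This is the whole of the LFT adjustment story: the outward shift in S moves the economy down along the unchanged I curve, so reduced consumption is converted one-for-one into extra investment.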

Consider what this means for macroeconomic aggregates. Assuming a closed system, income is, by definition, equal to consumption plus investment: Y = C + I. The LFT says that in freely adjusting markets, reductions in C due to shifts in preferences are automatically offset by increases in I. Y will remain at the ‘full employment’ level of output at all times.

The LFT therefore underpins ‘Say’s Law’ – summarised by Keynes as ‘supply creates its own demand’. It was thus a key target for Keynes’ attack on the ‘Law’ in his General Theory. Keynes argued against the notion that saving decisions are strongly influenced by the rate of interest. Instead, he argued consumption is mostly determined by income. If individuals consume a fixed proportion of their income, the S curve in the diagram is no longer well defined – at any given level of output, S is vertical, but the position of the curve shifts with output. This is quite different to the LFT which regards the position of the two curves as determined by the ‘deep’ structural parameters of the system – technology and preferences.

How then is the rate of interest determined in Keynes’ theory? – the answer is ‘liquidity preference’. Rather than desired saving determining the rate of interest, what matters is the composition of financial assets people use to hold their savings. Keynes simplifies the story by assuming only two assets: ‘money’ which pays no interest and ‘bonds’ which do pay interest. It is the interaction of supply and demand in the bond market – not the ‘loanable funds’ market – which determines the rate of interest.

There are two key points here. The first is that saving is a residual – it is determined by output and investment. As such, there is no mechanism to ensure that desired saving and desired investment will be equalised. This means that output, not the rate of interest, will adjust to ensure that saving is equal to investment. There is no mechanism which ensures that output is maintained at full employment levels. The second is that interest rates can move without any change in either desired saving or desired investment. If there is an increase in ‘liquidity preference’ – a desire to hold lower-yielding but safer assets – this will cause an increase in the rate of interest on riskier assets.
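The first point – output, not the interest rate, doing the adjusting – can be sketched with the textbook multiplier. The parameters below are illustrative assumptions: investment is taken as given, consumption is a fixed fraction c of income, and output settles where Y = C + I:

```python
# Sketch of the Keynesian alternative (illustrative parameters): investment I
# is given, consumption is C = c*Y, and output adjusts until Y = c*Y + I.

def keynesian_output(investment, propensity_to_consume):
    """Equilibrium output from the multiplier: Y = I / (1 - c)."""
    return investment / (1 - propensity_to_consume)

I = 100.0
y0 = keynesian_output(I, propensity_to_consume=0.8)  # baseline output
y1 = keynesian_output(I, propensity_to_consume=0.7)  # desired saving rises
# Realised saving S = Y - C equals I in both cases; it is output that falls:
assert abs((y0 - 0.8 * y0) - I) < 1e-9
assert abs((y1 - 0.7 * y1) - I) < 1e-9
print(y0, y1)
```

Here a rise in the desired saving rate leaves realised saving pinned at I by the identity – the adjustment shows up entirely as lower consumption and lower output, exactly the contrast with the loanable funds story.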

How can the original question be framed using these two models? – What is the implication of increasing wealth concentration on yields and macro variables?

I think Noah is right that one can think of the mechanism in a loanable funds world. If redistribution towards the rich increases the average propensity to save, this will shift the S curve to the right – as in the example above – reducing the ‘natural’ rate of interest. This is the standard ‘secular stagnation’ story – a ‘global savings glut’ has pushed the natural rate below zero. However, in a loanable funds world this should – all else being equal – lead to an increase in investment. This doesn’t seem to fit the stylised facts: capital investment has been falling as a share of GDP in most advanced nations. (Critics will point out that I’m skirting the issue of the zero lower bound – I’ll have to save that for another time).

My non-LFT interpretation is the following. Firstly, I’d go further than Keynes and argue that the rate of interest is not only relatively unimportant for determining S – it also has little effect on I. There is evidence to suggest that firms’ investment decisions are fairly interest-inelastic. This means that both curves in the diagram above have a steep slope – and they shift as output changes. There is no ‘natural rate’ of interest which brings the macroeconomic system into equilibrium.

In terms of the S = I identity, this means that investment decisions are more important for the determination of macro variables than saving decisions. If total desired saving as a share of income increases – due to wealth concentration, for example – this will have little effect on investment. The volume of realised saving, however, is determined by (and identically equal to) the volume of capital investment. An increase in desired saving manifests itself not as a rise in investment – but as a fall in consumption and output.

In such a scenario – in which a higher share of nominal income is saved – the result will be weak demand for goods but strong demand for financial assets, leading to deflation in the goods market and inflation in the market for financial assets. Strong demand for financial assets will reduce rates of return – but only on financial assets: if investment is inelastic to the interest rate there is no reason to believe there will be any shift in investment or in the return on fixed capital investment.

In order to explain the relative rates of return on equity and bonds, a re-working of Keynes’ liquidity preference theory is required. Instead of a choice between ‘money’ and ‘bonds’, the choice faced by investors can be characterised as a choice between risky equity and less-risky bonds. Liquidity preference will then make itself felt as an increase in the price of bonds relative to equity – and a corresponding movement in the yields on each asset. On the other hand, an increase in total nominal saving will increase the price of all financial assets and thus reduce yields across the board. Given that portfolio managers are likely to have minimum target rates of return, this will induce a shift into higher-risk assets.