
Thoughts on the NAIRU

Simon Wren-Lewis’s post attacking Matthew Klein’s critique of the NAIRU provoked some strong reactions. On reflection, my initial response was wide of the mark. Matthew responded saying he agreed with most of Simon’s piece.

So are we all in agreement? I think there are differences, but we need to first clarify the issues.

Matthew’s main point was empirical: if you want to use a relationship between employment and inflation as a policy target it needs to be relatively stable. The evidence suggests it is not.

But there is a deeper question of what the NAIRU actually means – what is a NAIRU? The simple definition is straightforward: it is the rate of unemployment at which inflation is stable. If policy is used to increase demand, reducing unemployment below the NAIRU, inflation will rise until excess demand is removed and unemployment allowed to increase again.

At first glance this appears all but identical to the ‘natural rate of unemployment’, a concept originating with Friedman’s monetarism and inherited by some New Keynesian models – in particular the ‘standard’ sticky-price DSGE model of Woodford and others. In this view, the economy has ‘natural rates’ of output and employment, beyond which any attempt by policy makers to increase demand becomes futile, leading only to ever-higher inflation. Since there is a direct correspondence between stabilizing inflation and fixing output and employment at their ‘natural’ rates, policy makers should simply adjust interest rates to hit an inflation target. In typically modest fashion, economists refer to this as the ‘Divine Coincidence’ – despite the fact it is essentially imposed on the models by assumption.

Matthew’s piece skips over this part of the history, jumping straight from Bill Phillips’s empirical relationship to the NAIRU. But the NAIRU is a weaker claim than the natural rate. As Simon says, all that is required for a NAIRU is a relationship of the form inf = f(U, E[inf]), i.e. current inflation is some function of unemployment and expected inflation. At its simplest, agents could just assume inflation will be the same in the current period as in the last period. Then, employment above some level would cause rising inflation and vice versa.
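
To make the accelerationist logic concrete, here is a minimal sketch of such a relationship with purely adaptive expectations. It is my own illustration rather than anything from the posts under discussion, and the linear form, the 5 per cent ‘NAIRU’ and the 0.5 slope are arbitrary assumptions.

```python
# Minimal sketch: a Phillips curve with adaptive expectations,
# pi_t = pi_{t-1} + a * (u_star - u_t).
# The linear form and the parameters (u_star = 5%, a = 0.5) are illustrative.

def inflation_path(u_path, u_star=0.05, a=0.5, pi0=0.02):
    """Inflation implied by a path of unemployment rates, starting from pi0."""
    pi, path = pi0, []
    for u in u_path:
        pi = pi + a * (u_star - u)  # expected inflation = last period's inflation
        path.append(round(pi, 4))
    return path

# Unemployment held one point below the assumed NAIRU: inflation ratchets upwards.
print(inflation_path([0.04] * 5))  # [0.025, 0.03, 0.035, 0.04, 0.045]
# Unemployment held at the NAIRU: inflation is stable.
print(inflation_path([0.05] * 5))  # [0.02, 0.02, 0.02, 0.02, 0.02]
```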

More sophisticated New Keynesian formulations of the NAIRU are a good distance removed from the ‘natural rate’ theory – these models include imperfections in the labour and product markets and a bargaining process between workers and firms. As a result, they incorporate (at least short-run) involuntary unemployment and see inflation as driven by competing claims on output rather than the ‘too much nominal demand chasing too few goods’ story of the monetarists and simple DSGE models.

It is also the case that such a relationship is found in many heterodox models. Engelbert Stockhammer explores heterodox views on the NAIRU in a provocatively-titled paper, ‘Is the NAIRU Theory a Monetarist, New Keynesian, Post Keynesian or Marxist Theory?’. He doesn’t identify a clear heterodox position – some Post-Keynesians reject the NAIRU outright, while others present models which incorporate NAIRU-like relationships.

Engelbert notes that arguably the earliest definition of the NAIRU is to be found in Joan Robinson’s 1937 Essays in the Theory of Employment:

In any given conditions of the labour market there is a certain more or less definite level of employment at which money wages will rise … there is a certain level of employment, determined by the general strategical position of the Trade Unions, at which money wages rise, and at that level of employment there is a certain level of real wages, determined by the technical conditions of production and the degree of monopoly’ (Robinson, 1937, pp. 4-5)

Recent Post-Keynesian models also include NAIRU-like relationships. For example, Godley and Lavoie’s textbook includes a model in which workers and firms compete by attempting to impose money-wage and price increases respectively. The size of wage increases demanded by workers is a function of the employment rate relative to some ‘full employment’ level. That sounds a lot like a NAIRU – but that isn’t how Godley and Lavoie see it:

Inflation under these assumptions does not necessarily accelerate if employment stays in excess of its ‘full employment’ level. Everything depends on the parameters and whether they change … An implication of the story proposed here is that there is no vertical long-run Phillips curve. There is no NAIRU. (Godley and Lavoie, 2007, p. 304, my emphasis)

The authors summarise their view with a quote from an earlier work by Godley:

Indeed if it is true that there is a unique NAIRU, that really is the end of discussion of macroeconomic policy. At present I happen not to believe it and that there is no evidence of it. And I am prepared to express the value judgment that moderately higher inflation rates are an acceptable price to pay for lower unemployment. But I do not accept that it is a foregone conclusion that inflation will be higher if unemployment is lower (Godley 1983: 170, my emphasis).

This highlights a key difference between Post-Keynesian and neoclassical approaches to the NAIRU: where Post-Keynesian models do include NAIRU-like relationships, the relevant employment level is endogenous, due to hysteresis effects for example. In other words, the NAIRU moves around and is influenced by demand-management policy. As such, the NAIRU is not an attractor for the unemployment rate as in many neoclassical models.

Marxist theory also contains something which looks a lot like a NAIRU: the ‘industrial reserve army’ of the unemployed. Marx argued that unemployment is the mechanism by which capitalists discipline workers and prevent wage claims rising to the point at which profits and capital accumulation are depleted. Periodic recessions are therefore a necessary part of the capitalist development process.

This led Nicholas Kaldor to describe Margaret Thatcher as ‘our first Marxist Prime Minister’ – not because she was an advocate of socialist revolution but because she understood the reserve army mechanism: ‘They have managed to create a pool – or a “reserve army” as Marx would have called it – of 3 million unemployed … the British working classes have been thoroughly cowed and frightened.’ (This point is passed over rather quickly in Simon’s piece. In the 1980s, he writes, ‘policy changed and increased unemployment and inflation fell.’)

So we should be careful about blanket dismissals of the NAIRU. Instead, we must be clear how our analysis differs: what are the mechanisms which generate inflationary pressure at low levels of unemployment – conflicting claims or excess nominal demand? Is the NAIRU stable and exogenous? Does it act as an attractor for the unemployment rate, and over what time period? What are the implications for policy?

Ultimately, I think this breaks down into an issue about semantics. How far from the unique, stable, vertical long-run Phillips curve can we get and still have something we call a NAIRU? Simon adopts a very loose definition:

There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth.

I’d like to believe this were true. But I suspect most macroeconomists, trained on New Keynesian DSGE models, have a narrower view: they tend to think in terms of a stable short-run sticky-price Phillips curve and a unique long-run Phillips curve at the ‘natural’ rate of employment.

There is one other aspect to consider. Engelbert Stockhammer distinguishes between the New Keynesian NAIRU theory and the New Keynesian NAIRU story. He argues (writing in 2007, just before the crisis) that the NAIRU has been used as the basis for an account of unemployment which blames inflexible labour markets, over-generous welfare states, job protection measures and strong unions. The policy prescriptions are then straightforward: labour markets should be deregulated and welfare states scaled back. Demand management should not be used to reduce unemployment.

While economists have changed their tune substantially in the decade since the financial crisis, I suspect that the NAIRU story is one reason that defence of the NAIRU theory generates such strong reactions.

EDIT: Bruno Bonizzi points me to this piece at the INET blog, which has an excellent discussion of the empirical evidence and theoretical implications of hysteresis effects and an unstable NAIRU.

 

Image reproduced from Wikipedia: https://en.wikipedia.org/wiki/File:NAIRU-SR-and-LR.svg

Full Reserve Banking: The Wrong Cure for the Wrong Disease

Towards the end of last year, the Guardian published an opinion piece arguing there is a link between climate change and the monetary system. The author, Jason Hickel, claims our current monetary system induces a need for continuous economic growth – and is therefore an important cause of global warming. As a solution, Hickel endorses the full reserve banking proposals put forward by the pressure group Positive Money (PM).

This is an argument I encounter regularly. It appears to have become the default position among many environmental activists: it is official Green Party policy. This is unfortunate because both the diagnosis of the problem and the proposed remedy are mistaken. It is one element of a broader set of arguments about money and banking put forward by PM. (Hickel is not part of PM, but his article was promoted by PM on social media, and similar arguments can be found on the PM website.)

The PM analysis starts from the observation that money in modern economies is mostly issued by private banks: most of what we think of as money is not physical cash but customer deposits at retail banks. Further, for a bank to make a loan, it does not require someone to first make a cash deposit. Instead, when a bank makes a loan it creates money ‘out of thin air’. Bank lending increases the amount of money in the system.

This is true. And, as Positive Money rightly note, neither the mechanism nor the implications are widely understood. But Positive Money do little to increase public understanding – instead of explaining the issues clearly, they imbue this money creation process with an unnecessary air of mysticism.

This isn’t difficult. As J. K. Galbraith famously observed: ‘The process by which banks create money is so simple the mind is repelled. With something so important, a deeper mystery seems only decent.’

To the average person, money appears as something solid, tangible, concrete. For most, money – or lack of it – is an important (if not overwhelming) constraint on their lives. How can money be something which is just created out of thin air? What awful joke is this?

This leads to what Perry Mehrling calls the ‘fetish of the real’ and ‘alchemy resistance’ – people instinctively feel they have been duped and look for a route back to solid ground. Positive Money exploit this unease but deepen the confusion by providing an inaccurate account of the functioning of the monetary and financial system.

There is nothing new about the ‘fetish of the real’. Economists have been trying to separate the ‘real’ economy from the financial system for centuries. Restrictive ‘tight money’ proposals have more commonly been associated with free-market economists on the political right, while economists inclined towards collectivism have favoured less monetary restriction. One reason is that the right tends to view inflation as the key macroeconomic danger while the left is more concerned with unemployment.

The original blueprint for the Positive Money proposal is known as the Chicago Plan, named after a group of University of Chicago economists who argued for the replacement of ‘fractional reserve’ banking with ‘full reserve banking’. To understand what this means, look at the balance sheet below.

[Figure: a stylised bank balance sheet – loans and reserves on the asset side, customer deposits on the liability side]

The table shows a stylised list of the assets and liabilities on a bank balance sheet. On the asset side, banks hold loans made to customers and ‘reserve balances’ (or ‘reserves’ for short). The latter item is a claim on the Central Bank – for example, the Bank of England in the UK. These reserve balances are used when banks make payments among themselves. Reserves can also be swapped on demand for physical cash at the Central Bank. Since only the Central Bank can create and issue these reserves, alongside physical cash, they form the part of the ‘money supply’ which is under direct state control.

For banks, reserves therefore play a role similar to that of deposits for the general public: they allow banks to obtain cash on demand or to make payments directly between their individual accounts at the Bank of England.

The only thing on the liability side is customer deposits – what we think of as ‘money’. These deposits can increase for two reasons. If customers decide to ‘deposit’ cash with the bank, the bank accepts the cash (which it will probably swap for reserves at the Central Bank) and adds a deposit balance for that customer. Both sides of the bank balance sheet increase by the same amount: a deposit of £100 cash will lead to an increase in reserves of £100 and an increase in deposits of £100.

Most increases in deposits happen a different way, however. When a bank makes a loan, both sides of its balance sheet increase as in the above example – except this time it is ‘loans’, not ‘reserves’, that increases on the asset side. When a bank lends £100 to a customer, both ‘loans’ and ‘deposits’ increase by £100. Absent any other changes, the amount of money in the world increases by £100: money has been created ‘out of nothing’.
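
A toy script may help fix the double-entry accounting. This is my own sketch, not Positive Money’s: a single stylised bank whose balance sheet grows either when cash is paid in or when a loan is made, with arbitrary illustrative amounts.

```python
# Stylised bank balance sheet: the two ways deposits increase described above.

bank = {"assets": {"loans": 0, "reserves": 0}, "liabilities": {"deposits": 0}}

def deposit_cash(bank, amount):
    """A customer pays in cash: reserves and deposits both rise."""
    bank["assets"]["reserves"] += amount
    bank["liabilities"]["deposits"] += amount

def make_loan(bank, amount):
    """The bank lends: loans and deposits both rise -- new money is created."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount

deposit_cash(bank, 100)
make_loan(bank, 100)

assert sum(bank["assets"].values()) == sum(bank["liabilities"].values()) == 200
print(bank)  # the balance sheet balances after both operations
```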

The Positive Money proposal – like the Chicago Plan of the 1930s – would outlaw this money-creating power. Under the proposal, banks would not be allowed to make loans: the only asset allowed on their balance sheet would be ‘reserves’ – hence the name ‘full reserve banking’. Since reserves can only be issued by the Central Bank, private banks would lose their ability to create new money when they make loans.

What’s wrong with the PM proposal? To answer, we first need to ask what problem PM are trying to solve. They list several issues on their website: environmental degradation, inequality, financial instability and a lack of decent jobs. How does Positive Money think the monetary system contributes to these problems? The following quote and diagram, taken from the Positive Money website, give the crux of the argument:

The ‘real’ (non-financial), productive economy needs money to function, but because all money is created as debt, that sector also has to pay interest to the banks in order to function. This means that the real-economy businesses – shops, offices, factories etc – end up subsidising the banking sector. The more private debt in the economy, the more money is sucked out of the real economy and into the financial sector.

[Figure: Positive Money diagram showing interest flowing one way, from the ‘real economy’ to the banks]

This illustrates the central misconception in PM’s description of money and banking. The ‘real economy’ needs money to operate – so individuals and businesses can make payments. This is correct. But PM imply that in order to obtain this money, the ‘real economy’ must borrow from the banks. And because the banks charge interest on this lending, they then end up sucking money back out of the ‘real economy’ as interest payments. In order to cover these payments, the ‘real economy’ must obtain more money – which it has to borrow at interest! And so on.

If this were a genuine description of the monetary system, the debts of the ‘real economy’ to the banks would grow uncontrollably and the system would have collapsed decades ago – PM essentially describes a pyramid scheme. The connection to the ‘infinite growth’ narrative is also clear – the ‘real economy’ is forced to produce ever more output just to feed the banks, destroying the environment in the process.

But neither the quote nor the diagram is accurate. To illustrate, look at the diagram below. It shows a bank, with a balance sheet as above, along with two individuals, Jack and Jill. Two steps are shown. In the first step, Jill takes out a loan from the bank – the bank creates new money as it lends. In the second step, Jill uses this money to buy something from Jack. Jack ends up holding a deposit, while Jill is left with an outstanding debt to the bank. The bank sits between the two individuals.

[Figure: balance sheets after the two steps – the bank holds a loan to Jill and owes a deposit to Jack, sitting between the two]
The point here is twofold. First, the ultimate creditor – the person providing credit to Jill – is not the bank, but Jack. Jack has lent to Jill, with the bank acting as a ‘middleman’. The bank is not a net lender, but an intermediary between Jill and Jack – albeit one with a very important function: it guarantees Jill’s loan. If Jill doesn’t make good on her promise to pay, the bank will take the hit – not Jack. Second, the initial decision to lend wasn’t made by Jack – it was made by the bank. By inserting itself between Jack and Jill, and substituting Jill’s guarantee with its own, the bank allows Jill to borrow and spend without Jack first choosing to lend. But in accepting a deposit as a payment, Jack also makes a loan – to the bank. As well as acting as ‘money’, a bank deposit is a credit relationship: a loan from the deposit-holder to the bank.

A more accurate depiction of the outcome of bank lending is therefore the following:

[Figure: the same arrangement redrawn – interest flows into the bank from Jill and out of the bank to Jack]

Jill will be charged interest on her loan – but Jack will also receive interest on his deposit. Interest payments don’t flow in only one direction – to the bank – as in the PM diagram. Instead interest flows both in and out of the bank, which makes its profits on the ‘spread’ (the difference) between the two interest rates: it will charge Jill a higher rate than it pays Jack. This is not to argue that there aren’t deep problems with the ways the banking system is able to generate large profits, often through unproductive or even fraudulent activity – but rather that money creation by banks does not cause the problems suggested by Positive Money.

So the banks don’t endlessly siphon off income from the ‘real economy’ – but isn’t it still the case that in order to obtain money for payments, someone has to borrow at interest and someone else has to lend?

To see why this is misleading, we need to consider not only how money is created but also how it is destroyed. We’ve already seen how new money is created when a bank makes a loan. The process also happens in reverse: money is destroyed when loans are repaid. For example, if, after the steps above, Jack were subsequently to buy something from Jill, the deposit would return to her and she could pay off her loan – extinguishing money in the process.

One possibility is that instead of selling goods to Jack – for example a phone or a bike – Jill ‘sells’ Jack an IOU: a private loan agreement between the two of them. In this case Jill can pay off her loan to the bank and replace it with a direct loan from Jack. This would leave the balance sheets looking as follows:

[Figure: balance sheets after Jill repays the bank – only a direct IOU from Jill to Jack remains]

Note that after Jill repays her loan, the bank is no longer involved – there is only a direct credit relationship between Jack and Jill.
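
The whole sequence can be traced in one toy script (again my own sketch, with arbitrary numbers): money is created when the bank lends to Jill, changes hands when Jill pays Jack, and is destroyed when the loan is repaid, leaving only the direct IOU.

```python
# Jack and Jill: creation, circulation and destruction of bank money.

deposits = {"Jack": 0, "Jill": 0}   # bank liabilities ('money')
bank_loans = {"Jill": 0}            # bank assets
iou_jill_owes_jack = 0              # direct, non-bank credit

def money_supply():
    return sum(deposits.values())

# Step 1: the bank lends 100 to Jill -- a new deposit appears 'out of nothing'.
bank_loans["Jill"] += 100
deposits["Jill"] += 100
print(money_supply())  # 100

# Step 2: Jill buys something from Jack -- the deposit simply changes hands.
deposits["Jill"] -= 100
deposits["Jack"] += 100
print(money_supply())  # still 100

# Step 3: Jack 'buys' an IOU from Jill, and Jill uses the deposit to repay the bank.
deposits["Jack"] -= 100
deposits["Jill"] += 100
iou_jill_owes_jack += 100
deposits["Jill"] -= 100
bank_loans["Jill"] -= 100
print(money_supply())  # 0 -- the bank money has been destroyed; only the IOU remains
```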

This mechanism operates constantly in the modern economy – individuals swap bank deposits for other financial assets, or pay a proportion of their wages into a pension scheme. In fact, the volume of non-bank financial intermediation outweighs the volume of bank lending. The implication is that the demand from individuals for interest-bearing financial instruments is greater than the demand for bank deposits as a means of payment. Rather than banks being able to force loans on people because of their need for money to make payments, the opposite is true: people save for their future by getting rid of money and swapping it for other financial assets.

The quantity of money in the system isn’t determined by bank lending, as in the PM account. Instead it is a residual – the amount of deposits remaining in customer accounts after firms borrow, hire and invest; workers receive wages, consume and save; and the financial system matches savers to borrowers directly through equity and bond markets, pension funds and other non-bank mechanisms.

So the monetary argument is wrong. What of the argument that lending at interest requires endless economic growth?

Economic growth can be broken down into two components: population increase and growth in output per person. For around the last 100 years, global GDP growth of around 3 per cent per year has been split evenly between these two factors: about 1.5 per cent was due to population growth. The economy is growing because there are more people in it. This is not caused by bank lending. Further, projections suggest that the global population will peak by around 2050 then begin to fall as a result of falling fertility rates.

What about growth of output per head? Again, bank lending is not the cause. There is simply no mechanistic link between lending at interest and economic growth. Interest flows distribute income from one group of people to another – from borrowers to lenders. Government taxation and social security payments play a similar role. Among other functions, lending and borrowing at interest provides a mechanism by which people can accumulate financial claims during their working life which allow them to receive an income in retirement, when they consume out of previously acquired wealth. This mechanism is perfectly compatible with zero or negative growth.

If anything, excessive lending is likely to cause lower growth in the long run: in the aftermath of big credit expansions and busts, economic growth declines as households and firms reduce spending in an attempt to pay down debt.

Even if we did want to reduce growth rates, history teaches us that using monetary means to do so is a very bad idea. During the monetarist experiment of the early 1980s, the Thatcher government tried exactly this: they restricted growth of the money supply, ostensibly in an attempt to reduce inflation. The result was a recession in which 3 million people were out of work.

Oddly, despite the environmental argument, we can also find arguments from PM about ways that monetary mechanisms can be used to induce higher output and employment. These proposals, which go by titles such as ‘Green QE’ and ‘People’s QE’, argue that the government should issue new money and use it to pay for infrastructure spending.

An increase in government infrastructure spending is undoubtedly a good idea. But we don’t need to change the monetary system to achieve it. The public sector can do what it has always done and issue bonds to finance expenditures. (This sentence will inevitably raise the ire of the Modern Money Theory crowd, but I don’t want to get sidetracked by that debate here.)

Further, the conflation of QE with the use of newly printed money for government spending is another example of sleight of hand by Positive Money. QE involves swapping one sort of financial asset for another – the central bank swaps reserves for government bonds. This is a different type of operation to government investment spending – but Positive Money present the case as if it were a straight choice between handing free money to banks and spending money on health and education. It is not. It should also be emphasised that printing money to pay for government spending is an entirely distinct policy proposal from full reserve banking – which would do nothing in itself to raise infrastructure spending – but this is obfuscated because PM labels both proposals ‘Sovereign Money’.

The same is true of other issues raised by PM: inequality, excessive debt, and financial instability. All are serious issues which urgently need to be addressed. But PM is wrong to promise a simple fix for these problems. None would be solved by full reserve banking – on the contrary, it is likely to exacerbate some. For example, by narrowing the focus to the deposit-issuing banks, PM excludes the rest of the financial system – investment banks, hedge funds, insurance companies, money market funds and many others – from consideration. The relationship between retail banks and these ‘shadow’ banking institutions is complex, but in narrowing the focus of ‘financial stability’ to only the former, the PM proposals would potentially shift risk-taking activity away from the more regulated retail banking system to the less regulated sector.

Another justification PM provide for full reserve banking is that issuing money generates profits in itself. By stripping the banks of money creation powers, the government could instead gain this profit (known as ‘seigniorage’):

Government finances would receive a boost, as the Treasury would earn the profit on creating electronic money, instead of only on the creation of bank notes. The profit on the creation of bank notes has raised £16.7bn for the Treasury over the past decade. But by allowing banks to create electronic money, it has lost hundreds of billions of potential revenue – and taxpayers have ended up making up the difference.

This is incorrect. As explained above, banks make a profit on the ‘spread’ between rates of interest on deposits and loans. There is simply no reason why the act of issuing money generates profits in itself. It’s not clear where the £16.7bn figure is taken from in the above quote since no source is given. (While Martin Wolf appears to support this position, he instead seems to be referring to general banking profits from interest spreads, fees etc.)

None of the above should be taken to imply that there are not problems with the current system – there are many. The banks are too big, too systemically important and too powerful. Part of their power arises from the guarantees and backstops provided by the state: deposit insurance, central bank ‘lender of last resort’ facilities and, ultimately, tax-payer bailouts when losses arise as a result of banks taking on too much risk in the search for profits. QE is insufficient as a macroeconomic tool to deal with the ongoing repercussions of the 2008 crisis – government spending is needed – and has pernicious side effects such as widening wealth inequality. The state should use the guarantees provided to the banks as leverage to force much more substantial changes of behaviour.

Milton Friedman was a proponent of the original Chicago Plan, and the intellectual force behind the monetarist experiment of the early 1980s. He was also deeply opposed to Roosevelt’s New Deal – a programme of government borrowing and spending aimed at reviving the economy during the Great Depression. Friedman described the New Deal as ‘the wrong cure for the wrong disease’ – in his view the problems of the 1930s were caused by a shrinking money supply due to bank failures. Like PM, he favoured a simple monetary solution: the Fed should print money to counteract the effect of bank failures.

He was wrong about the New Deal. But his description is fitting for Positive Money’s Friedman-inspired monetary solutions to an array of complex issues: lack of decent jobs, inequality, financial instability and environmental degradation. The causes of these problems run deeper than a faulty monetary system. There are no simple quick-fix solutions.

PM wrongly diagnose the problem when they focus on the monetary system – so their prescription is also faulty. Full reserve banking is the wrong cure for the wrong disease.

Economics, Ideology and Trump

So the post-mortem begins. Much electronic ink has already been spilled and predictable fault lines have emerged. Debate rages in particular on the question of whether Trump’s victory was driven by economic factors. Like Duncan Weldon, I think Torsten Bell gets it about right – economics is an essential part of the story even if the complete picture is more complex.

Neoliberalism is a word I usually try to avoid. It’s often used by people on the left as an easy catch-all to avoid engaging with difficult issues. Broadly speaking, however, it provides a short-hand for the policy status quo over the last thirty years or so: free movement of goods, labour and capital, fiscal conservatism, rules-based monetary policy, deregulated finance and a preference for supply-side measures in the labour market.

Some will argue this consensus has nothing to do with the rise of far-right populism. I disagree. Both economics and economic policy have brought us here.

But to what extent has academic economics provided the basis for neoliberal policy? The question had been in my mind even before the Trump and Brexit votes. A few months back, Duncan Weldon posed the question, ‘whatever happened to deficit bias?’ In my view, the responses at the time missed the mark. More recently, Ann Pettifor and Simon Wren-Lewis have been discussing the relationship between ideology, economics and fiscal austerity.

I have great respect for Simon – especially his efforts to combat the false media narratives around austerity. But I don’t think he gets it right on economics and ideology. His argument is that in a standard model – a sticky-price DSGE system – fiscal policy should be used when nominal rates are at the zero lower bound. Post-2008 austerity policies are therefore at odds with the academic consensus.

This is correct in simple terms, but I think misses the bigger picture of what academic economics has been saying for the last 30 years. To explain, I need to recap some history.

Fiscal policy as a macroeconomic management tool is associated with the ideas of Keynes. Against the academic consensus of his day, he argued that the economy could get stuck in periods of demand deficiency characterised by persistent involuntary unemployment. The monetarist counter-attack was led by Milton Friedman – who denied this possibility. In the long run, he argued, the economy has a ‘natural’ rate of unemployment to which it will gravitate automatically (the mechanism still remains to be explained). Any attempt to use activist fiscal or monetary policy to reduce unemployment below this natural rate will only lead to higher inflation. This led to the bitter disputes of the 1960s and 70s between Keynesians and Monetarists. The Monetarists emerged as victors – at least in the eyes of the orthodoxy – with the inflationary crises of the 1970s. This marks the beginning of the end for fiscal policy in the history of macroeconomics.

In Friedman’s world, short-term macro policy could be justified in a deflationary situation as a way to help the economy back to its ‘natural’ state. But, for Friedman, macro policy means monetary policy. In line with the doctrine that the consumer always knows best, government spending was proscribed as distortionary and inefficient. For Friedman, the correct policy response to deflation is a temporary increase in the rate of growth of the money supply.

It’s hard to view Milton Friedman’s campaign against Keynes as disconnected from ideological influence. Friedman’s role in the Mont Pelerin society is well documented. This group of economic liberals, led by Friedrich von Hayek, formed after World War II with the purpose of opposing the move towards collectivism of which Keynes was a leading figure. For a time at least, the group adopted the term ‘neoliberal’ to describe their political philosophy. This was an international group of economists whose express purpose was to influence politics and politicians – and they were successful.

Hayek’s thesis – which acquires a certain irony in light of Trump’s ascent – was that collectivism inevitably leads to authoritarianism and fascism. Friedman’s Chicago economics department formed one point in a triangular alliance with Lionel Robbins’ LSE in London, and Hayek’s fellow Austrians in Vienna. While in the 1930s, Friedman had expressed support for the New Deal, by the 1950s he had swung sharply in the direction of economic liberalism. As Brad Delong puts it:

by the early 1950s, his respect for even the possibility of government action was gone. His grudging approval of the New Deal was gone, too: Those elements that weren’t positively destructive were ineffective, diverting attention from what Friedman now believed would have cured the Great Depression, a substantial expansion of the money supply. The New Deal, Friedman concluded, had been ‘the wrong cure for the wrong disease.’

While Friedman never produced a complete formal model to describe his macroeconomic vision, his successor at Chicago, Robert Lucas did – the New Classical model. (He also successfully destroyed the Keynesian structural econometric modelling tradition with his ‘Lucas critique’.) Lucas’ New Classical colleagues followed in his footsteps, constructing an even more extreme version of the model: the so-called Real Business Cycle model. This simply assumes a world in which all markets work perfectly all of the time, and the single infinitely lived representative agent, on average, correctly predicts the future.

This is the origin of the ‘policy ineffectiveness hypothesis’ – in such a world, government becomes completely impotent. Any attempt at deficit spending will be exactly matched by a corresponding reduction in private spending – the so-called Ricardian Equivalence hypothesis. Fiscal policy has no effect on output and employment. Even monetary policy becomes totally ineffective: if the central bank chooses to loosen monetary policy, the representative agent instantly and correctly predicts higher inflation and adjusts her behaviour accordingly.

This vision, emerging from a leading centre of conservative thought, is still regarded by the academic economics community as a major scientific step forward. Simon describes it as ‘a progressive research programme’.

What does all this have to do with the current status quo? The answer is that this model – with one single modification – is the ‘standard model’ which Simon and others point to when they argue that economics has no ideological bias. The modification is that prices in the goods market are slow to adjust to changes in demand. As a result, Milton Friedman’s result that policy is effective in the short run is restored. The only substantial difference from Friedman’s model is that the policy tool is the rate of interest, not the money supply. In a deflationary situation, the central bank should cut the nominal interest rate to raise demand and assist the automatic but sluggish transition back to the ‘natural’ rate of unemployment.

So what of Duncan’s question: whatever happened to deficit bias? The term refers to the assertion in economics textbooks that there will always be a tendency for governments to allow deficits to increase. The answer is that it was written out of the textbooks decades ago – because it is simply taken as given that fiscal policy is not the correct tool.

To check this, I went to our university library and looked through a selection of macroeconomics textbooks. Mankiw’s ‘Macroeconomics’ is probably the most widely used. I examined the 2007 edition – published just before the financial crisis. The chapter on ‘Stabilisation Policy’ dispenses with fiscal policy in half a page – a case study of Romer’s critique of Keynes is presented under the heading ‘Is the Stabilization of the Economy a Figment of the Data?’ The rest of the chapter focuses on monetary policy: time inconsistency, interest rate rules and central bank independence. The only appearance of the liquidity trap and the zero lower bound is in another half-page box, but fiscal policy doesn’t get a mention.

The post-crisis twelfth edition of Robert Gordon’s textbook does include a chapter on fiscal policy – entitled ‘The Government Budget, the Government Debt and the Limitations of Fiscal Policy’. While Gordon acknowledges that fiscal policy is an option during strongly deflationary periods when interest rates are at the zero lower bound, most of the chapter is concerned with the crowding out of private investment, the dangers of government debt and the conditions under which governments become insolvent. Of the textbooks I examined, only Blanchard’s contained anything resembling a balanced discussion of fiscal policy.

So, in Duncan’s words, governments are ‘flying a two engined plane but choosing to use only one motor’ not just because of media bias, an ill-informed public and misguided politicians – Simon’s explanation – but because they are doing what the macro textbooks tell them to do.

The reason is that the standard New Keynesian model is not a Keynesian model at all – it is a monetarist model. Aside from the mathematical sophistication, it is all but indistinguishable from Milton Friedman’s ideologically-driven description of the macroeconomy. In particular, Milton Friedman’s prohibition of fiscal policy is retained with – in more recent years – a caveat about the zero-lower bound (Simon makes essentially the same point about fiscal policy here).

It’s therefore odd that when Simon discusses the relationship between ideology and economics he chooses to draw a dividing line between those who use a sticky-price New Keynesian DSGE model and those who use a flexible-price New Classical version. The beliefs of the latter group are, Simon suggests, ideological, while those of the former group are based on ideology-free science. This strikes me as arbitrary. Simon’s justification is that, despite the evidence, the RBC model denies the possibility of involuntary unemployment. But the sticky-price version – which denies any role for inequality, finance, money, banking, liquidity, default, long-run unemployment, the use of fiscal policy away from the ZLB, supply-side hysteresis effects and plenty else besides – is acceptable. He even goes so far as to say ‘I have no problem seeing the RBC model as a flex-price NK model’ – even the RBC model is non-ideological so long as the hierarchical framing is right.

Even Simon’s key distinction – the New Keynesian model allows for involuntary unemployment – is open to question. Keynes’ definition of involuntary unemployment is that there exist people willing and able to work at the going wage who are unable to find employment. On this definition the New Keynesian model falls short – in the face of a short-run demand shortage caused by sticky prices the representative agent simply selects a new optimal labour supply. Workers are never off their labour supply curve. In the Smets Wouters model – a very widely used New Keynesian DSGE model – the labour market is described as follows: ‘household j chooses hours worked Lt(j)’. It is hard to reconcile involuntary unemployment with households choosing how much labour they supply.

What of the position taken by the profession in the wake of 2008? Reinhart and Rogoff’s contribution is by now infamous. Ann also draws attention to the 2010 letter signed by 20 top-ranking economists – including Rogoff – demanding austerity in the UK. Simon argues that Ann overlooks the fact that ‘58 equally notable economists signed a response arguing the 20 were wrong’.

It is difficult to agree that the signatories to the response letter, organised by Lord Skidelsky, are ‘equally notable’. Many are heterodox economists – critics of standard macroeconomics. Those mainstream economists on the list hold positions at lower-ranking institutions than the 20. I know many of the 58 personally – I know none of the 20. Simon notes:

Of course those that signed the first letter, and in particular Ken Rogoff, turned out to be a more prominent voice in the subsequent debate, but that is because he supported what policymakers were doing. He was mostly useful rather than influential.

For Simon, causality is unidirectional: policy-makers cherry-pick academic economics to fit their purpose but economists have no influence on policy. This seems implausible. It is undoubtedly true that pro-austerity economists provided useful cover for small-state ideologues like George Osborne. But the parallels between policy and academia are too strong for the causality to be unidirectional.

Osborne’s small-state ideology is a descendant of Thatcherism – the point when neoliberalism first replaced Keynesianism. Is it purely coincidence that the 1980s was also the high-point for extreme free market Chicago economics such as Real Business Cycle models?

The parallel between policy and academia continues with the emergence of the sticky-price New Keynesian version as the ‘standard’ model in the 90s alongside the shift to the third way of Blair and Clinton. Blairism represents a modified, less extreme, version of Thatcherism. The all-out assault on workers and the social safety net was replaced with ‘workfare’ and ‘flexicurity’.

A similar story can be told for international trade, as laid out in this excellent piece by Martin Sandbu. In the 1990s, just as the ‘heyday of global trade integration was getting underway’, economists were busy making the case that globalisation had no negative implications for employment or inequality in rich nations. To do this, they came up with the ‘skill-biased technological change’ (SBTC) hypothesis. This states that as technology advances and the potential for automation grows, the demand for high-skilled labour increases. This introduces the hitch that higher educational standards are required before the gains from automation can be felt by those outside the top income percentiles. This leads to a ‘race between education and technology’ – a race which technology was winning, leading to weaker demand for middle and low-skill workers and rising ‘skill premiums’ for high-skilled workers as a result.

Writing in the Financial Times shortly before the financial crisis, Jagdish Bhagwati argued that those who looked to globalisation as an explanation for increasing inequality were misguided:

The culprit is not globalization but labour-saving technical change that puts pressure on the wages of the unskilled. Technical change prompts continual economies in the use of unskilled labour. Much empirical argumentation and evidence exists on this. (FT, January 4, 2007, p. 11)

As Krugman put it:

The hypothesis that technological change, by raising the demand for skill, has led to growing inequality is so widespread that at conferences economists often use the abbreviation SBTC – skill-biased technical change – without explanation, assuming that their listeners know what they are talking about (p. 132)

Over the course of his 2007 book, Krugman sets out on a voyage of discovery – ‘That, more or less, is the story I believed when I began working on this book’ (p. 6). He arrives at the astonishing conclusion – ‘[i]t sounds like economic heresy’ (p. 7) – that politics can influence inequality:

[I]nstitutions, norms and the political environment matter a lot more for the distribution of income – and … impersonal market forces matter less – than Economics 101 might lead you to believe (p. 8)

The idea that rising pay at the top of the scale mainly reflect social and political change, … strikes some people as … too much at odds with Economics 101.

If a left-leaning Nobel prize-winning economist has trouble escaping from the confines of Economics 101, what hope for the less sophisticated mind?

As deindustrialisation rolled through the advanced economies, wiping out jobs and communities, economists continued to deny any role for globalisation. As Martin Sandbu argues,

The blithe unconcern displayed by the economics profession and the political elites about whether trade was causing deindustrialisation, social exclusion and rising inequality has begun to seem Pollyannish at best, malicious at worst. Kevin O’Rourke, the Irish economist, and before him Lawrence Summers, former US Treasury Secretary, have called this “the Davos lie.”

For mainstream macroeconomists, inequality was not a subject of any real interest. The explanation for inequality was held to lie in microeconomics – in the technical form of production functions – with rising educational attainment as the remedy; in macroeconomic terms, the use of a representative agent and an aggregate production function simply assumed the problem away. As Stiglitz puts it:

[I]f the distribution of income (say between labor and capital) matters, for example, for aggregate demand and therefore for employment and output, then using an aggregate Cobb-Douglas production function which, with competition, implies that the share of labor is fixed, is not going to be helpful. (p.596)

Robert Lucas summed up his position as follows: ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ It is hard to view this statement as more strongly informed by science than ideology.

But while economists were busy assuming away inequality in their models, incomes continued to diverge in most advanced economies. It was only with the publication of Piketty’s book that the economics profession belatedly began to turn its back on Lucas.

The extent to which economic insecurity in the US and the UK is driven by globalisation versus policy is still under discussion – my answer would be that it is a combination of both – but the skill-biased technical change hypothesis looks to be a dead end – and a costly one at that.

Similar stories can be told about the role of household debt, finance, monetary theory and labour bargaining power and monopoly – why so much academic focus on ‘structural reform’ in the labour market but none on anti-trust policy?  Heterodox economists were warning about the connections between finance, globalisation, current account imbalances, inequality, household debt and economic insecurity in the decades before the crisis. These warnings were dismissed as unscientific – in favour of a model which excluded all of these things by design.

Are economic factors – and economic policy – partly to blame for the Brexit and Trump votes? And are academic economists, at least in part, to blame for these polices? The answer to both questions is yes. To argue otherwise is to deny Keynes’ dictum that ‘the ideas of economists and political philosophers, both when they are right and when they are wrong are more powerful than is commonly understood.’

This quote, ‘mounted and framed, takes pride of place in the entrance hall of the Institute for Economic Affairs’ – the think-tank founded, with Hayek’s encouragement, by Anthony Fisher, as a way to promote and promulgate the ideas of the Mont Pelerin Society. The Institute was a success. Fisher was, in the words of Milton Friedman, ‘the single most important person in the development of Thatcherism’.

The rest, it seems, is history.

What is the Loanable Funds theory?

I had another stimulating discussion with Noah Smith last week. This time the topic was the ‘loanable funds’ theory of the rate of interest. The discussion was triggered by my suggestion that the ‘safe asset shortage’ and associated ‘reach for yield’ are in part caused by rising wealth concentration. The logic is straightforward: since the rich spend less of their income than the poor, wealth concentration tends to increase the rate of saving out of income. This means an increase in desired savings chasing the available stock of financial assets, pushing up the price and lowering the yield.

Noah viewed this as a plausible hypothesis but suggested it relies on the loanable funds model. My view was the opposite – I think this mechanism is incompatible with the loanable funds theory. Such disagreements are often enlightening – either one of us misunderstood the mechanisms under discussion, or we were using different definitions. My instinct was that it was the latter: we meant something different by ‘loanable funds theory’ (LFT hereafter).

To try and clear this up, Noah suggested Mankiw’s textbook as a starting point – and found a set of slides which set out the LFT clearly. The model described was exactly the one I had in mind – but despite agreeing that Mankiw’s exposition of the LFT was accurate it was clear we still didn’t agree about the original point of discussion.

The reason seems to be that Noah understands the LFT to describe any market for loans: there are some people willing to lend and some who wish to borrow. As the rate of interest rises, the volume of available lending increases but the volume of desired borrowing falls. In equilibrium, the rate of interest will settle at r* – the market-clearing  rate.

What’s wrong with this? It certainly sounds like a market for ‘loanable funds’. The problem is that LFT is not a theory of loan market clearing per se. It’s a theory of macroeconomic equilibrium. It’s not a model of any old loan market: it’s a model of one very specific market – the market which intermediates total (net) saving with total capital investment in a closed economic system.

OK, but saving equals investment by definition in macroeconomic terms: the famous S = I identity. How can there be a market which operates to ensure equality between two identically equal magnitudes?
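
Spelling the identity out for a closed economy with no government (a textbook derivation, included here only as a reminder):

```latex
% Output is either consumed or invested; income is either consumed or saved.
\begin{align*}
Y &\equiv C + I && \text{(expenditure)} \\
S &\equiv Y - C && \text{(definition of saving)} \\
\therefore\ S &\equiv I
\end{align*}
```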

The issue – as Keynes explained in the General Theory – is that in a modern capitalist economy, the person who saves and the person who undertakes fixed capital investment are not usually the same. Some mechanism needs to be in place to ensure that a decision to ‘not consume’ somewhere in the system – to save – is always matched by a decision to invest – to build a new machine, road or building – somewhere else in the economy.

To see the issue more clearly consider the ‘corn economy’ used in many standard macro models: one good – corn – is produced. This good can either be consumed or invested (by planting in the ground or storing corn for later consumption). The decision to plant or store corn is simultaneously both a decision to ‘not consume’ and to ‘invest’ (the rate of return on investment will depend on the mix of stored to planted corn). In this simple economy S = I because it can’t be any other way. A market for loanable funds is not required.

But this isn’t how modern capitalism works. Decisions to ‘not consume’ and decisions to invest are distributed throughout the economic system. How can we be sure that these decisions will lead to identical intended saving and investment – what ensures that S and I are equal? The loanable funds theory provides one possible answer to this question.

The theory states that decisions to save (i.e. to not consume) are decisive – investment adjusts automatically to accommodate any change in consumption behaviour. To see how this works, we need to recall how the model is derived. The diagram below shows the basic system (I’ve borrowed the figure from Nick Rowe).

[Figure: the loanable funds diagram – an upward-sloping desired saving curve and a downward-sloping investment curve crossing at r*]

The upward sloping ‘desired saving’ curve is derived on the assumption that people are ‘impatient’ – they prefer current consumption to future consumption. In order to induce people to save,  a return needs to be paid on their savings. As the return paid on savings increases, consumers are collectively willing to forgo a greater volume of current consumption in return for a future payoff.

The downward sloping investment curve is derived on standard neoclassical marginalist principles. ‘Factors of production’ (i.e. labour and capital) receive ‘what they are worth’ in competitive markets. The real wage is equal to the marginal productivity of labour and the return on ‘capital’ is likewise equal to the marginal productivity of capital. As the ‘quantity’ of capital increases, the marginal product – and thus the rate of return – falls.

So the S and I curves depict how much saving and investment would take place at each possible rate of interest. As long as the S and I curves are well-defined and ‘monotonic’ (a strong assumption), there is only one rate of interest at which the amount people wish to lend is equal to the amount (other) people would like to borrow. This is r*, the point of intersection between the curves. This rate of interest is often referred to as the Wicksellian ‘natural rate’.

Now, consider what happens if the collective impatience of society decreases. At any rate of interest, consumption as a share of income will be lower and desired saving correspondingly higher – the S curve moves to the right. As the S curve shifts to the right – assuming no change in the technology determining the slope and position of the I curve – a greater share of national income is ‘not consumed’. But by pushing down the rate of interest in the loanable funds market, reduced consumption – somewhat miraculously – leads to an automatic increase in investment. An outward shift in the S curve is accompanied by a shift along the I curve.
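
A tiny numerical version of the diagram may help. The linear schedules and the parameter values below are my own purely illustrative choices; they simply reproduce the story in the text, in which a rightward shift of the S curve lowers r* and raises investment via a movement along the I curve.

```python
# Loanable funds sketch: linear desired-saving and investment schedules,
# S(r) = s0 + s1*r and I(r) = i0 - i1*r, crossing at the 'natural rate' r_star.

def equilibrium(s0, s1, i0, i1):
    """Solve S(r*) = I(r*) and return (r_star, investment at r_star)."""
    r_star = (i0 - s0) / (s1 + i1)
    return r_star, i0 - i1 * r_star

# Baseline preferences and technology (arbitrary numbers).
print(equilibrium(s0=10, s1=2.0, i0=20, i1=2.0))  # (2.5, 15.0)

# Society becomes more patient: the S curve shifts right (s0 rises).
# The rate of interest falls and investment automatically rises.
print(equilibrium(s0=14, s1=2.0, i0=20, i1=2.0))  # (1.5, 17.0)
```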

Consider what this means for macroeconomic aggregates. Assuming a closed system, income is, by definition, equal to consumption plus investment: Y = C + I. The LFT says that in freely adjusting markets, reductions in C due to shifts in preferences are automatically offset by increases in I. Y will remain at the ‘full employment’ rate of output at all times.

The LFT therefore underpins ‘Say’s Law’ – summarised by Keynes as ‘supply creates its own demand’. It was thus a key target for Keynes’ attack on the ‘Law’ in his General Theory. Keynes argued against the notion that saving decisions are strongly influenced by the rate of interest. Instead, he argued consumption is mostly determined by income. If individuals consume a fixed proportion of their income, the S curve in the diagram is no longer well defined – at any given level of output, S is vertical, but the position of the curve shifts with output. This is quite different to the LFT which regards the position of the two curves as determined by the ‘deep’ structural parameters of the system – technology and preferences.

How then is the rate of interest determined in Keynes’ theory? – the answer is ‘liquidity preference’. Rather than desired saving determining the rate of interest, what matters is the composition of financial assets people use to hold their savings. Keynes simplifies the story by assuming only two assets: ‘money’ which pays no interest and ‘bonds’ which do pay interest. It is the interaction of supply and demand in the bond market – not the ‘loanable funds’ market – which determines the rate of interest.

There are two key points here: the first is that saving is a residual – it is determined by output and investment. As such, there is no mechanism to ensure that desired saving and desired investment will be equalised. This means that output, not the rate of interest, will adjust to ensure that saving is equal to investment. There is no mechanism which ensures that output is maintained at full employment levels. The second is that interest rates can move without any change in either desired saving or desired investment. If there is an increase in ‘liquidity preference’ – a desire to hold lower yielding but safer assets, this will cause an increase in the rate of interest on riskier assets.

How can the original question be framed using these two models? – What is the implication of increasing wealth concentration on yields and macro variables?

I think Noah is right that one can think of the mechanism in a loanable funds world. If redistribution towards the rich increases the average propensity to save, this will shift the S curve to the right – as in the example above – reducing the ‘natural’ rate of interest. This is the standard ‘secular stagnation’ story – a ‘global savings glut’ has pushed the natural rate below zero. However, in a loanable funds world this should – all else being equal – lead to an increase in investment. This doesn’t seem to fit the stylised facts: capital investment has been falling as a share of GDP in most advanced nations. (Critics will point out that I’m skirting the issue of the zero lower bound – I’ll have to save that for another time).

My non-LFT interpretation is the following. Firstly, I’d go further than Keynes and argue that the rate of interest is not only relatively unimportant for determining S – it also has little effect on I. There is evidence to suggest that firms’ investment decisions are fairly interest-inelastic. This means that both curves in the diagram above have a steep slope – and they shift as output changes. There is no ‘natural rate’ of interest which brings the macroeconomic system into equilibrium.

In terms of the S = I identity, this means that investment decisions are more important for the determination of macro variables than saving decisions. If total desired saving as a share of income increases – due to wealth concentration, for example – this will have little effect on investment. The volume of realised saving, however, is determined by (and identically equal to) the volume of capital investment. An increase in desired saving manifests itself not as a rise in investment – but as a fall in consumption and output.
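
The contrast with the loanable funds sketch above can be made explicit with an equally stylised piece of arithmetic (my own simplification): hold investment fixed, let consumption depend on income, and let output do the adjusting.

```python
# Keynesian sketch: Y = C + I with C = c*Y, so Y = I / (1 - c).
# Realised saving is identically equal to investment; output bears the adjustment.

def keynesian_output(investment, propensity_to_consume):
    y = investment / (1 - propensity_to_consume)
    saving = y - propensity_to_consume * y  # = investment, by construction
    return round(y, 2), round(saving, 2)

print(keynesian_output(investment=15, propensity_to_consume=0.8))  # (75.0, 15.0)
# A higher desired saving rate (lower c) leaves S = I = 15 but lowers output.
print(keynesian_output(investment=15, propensity_to_consume=0.7))  # (50.0, 15.0)
```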

In such a scenario – in which a higher share of nominal income is saved – the result will be weak demand for goods but strong demand for financial assets, leading to deflation in the goods market and inflation in the market for financial assets. Strong demand for financial assets will reduce rates of return – but only on financial assets: if investment is inelastic to the interest rate, there is no reason to believe there will be any shift in investment or in the return on fixed capital investment.

In order to explain the relative rates of return on equity and bonds, a re-working of Keynes’ liquidity preference theory is required. Instead of a choice between ‘money’ and ‘bonds’, the choice faced by investors can be characterised as a choice between risky equity and less-risky bonds. Liquidity preference will then make itself felt as an increase in the price of bonds relative to equity – and a corresponding movement in the yields on each asset. On the other hand, an increase in total nominal saving will increase the price of all financial assets and thus reduce yields across the board. Given that it is likely that portfolio managers will have minimum target rates of return, this will induce a shift into higher-risk assets.

Consistent modelling and inconsistent terminology


Simon Wren-Lewis has a couple of recent posts up on heterodox macro, and stock-flow consistent modelling in particular. His posts are constructive and engaging. I want to respond to some of the points raised.

Simon discusses the modelling approach originating with Wynne Godley, Francis Cripps and others at the Cambridge Economic Policy Group in the 1970s. More recently this approach is associated with the work of Marc Lavoie who co-wrote the key textbook on the topic with Godley.

The term ‘stock-flow consistent’ was coined by Claudio Dos Santos in his PhD thesis, ‘Three essays in stock flow consistent modelling’ and has been a source of misunderstanding ever since. Simon writes, ‘it is inferred that mainstream models fail to impose stock flow consistency.’ As I tried to emphasise  in the blog which Simon links to, this is not the intention: ‘any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.’ (There is an important caveat here:  this consistency won’t be maintained after log-linearisation – a standard step in DSGE solution – and the further a linearised model gets from the steady state, the worse this inconsistency will become.)[1]

Marc Lavoie has emphasised that he regrets adopting the name, precisely because of the implication that consistency is not maintained in other modelling traditions. Instead, the term refers to a subset of models characterised by a number of specific features. These include the following: aggregate behavioural macro relationships informed by both empirical evidence and post-Keynesian theory; detailed, institutionally-specific modelling of the monetary and financial sector; and explicit feedback effects from financial balance sheets to economic behaviour and the stability of the macro system both in the short run and the long run.

A distinctive feature of these models is their rejection of the loanable funds theory of banking and money – a position endorsed in a recent Bank of England Quarterly Bulletin and Working Paper. Partially as a result of this view of the importance of money and money-values in the decision-making process, these models are usually specified in nominal magnitudes. As a result, they map more directly onto the national accounts than real-sector models which require complex transformations of data series using price deflators.

Since the behavioural features of these models are informed by a well-developed theoretical tradition, Simon’s assertion that SFC modelling is ‘accounting, not economics’ is inaccurate. Accounting is one important element in a broader methodological approach. Imposing detailed financial accounting alongside behavioural assumptions about how financial stocks and flows evolve imposes constraints across the entire system. Rather like trying to squeeze the air out of one part of a balloon, only to find another part inflating, chasing assets and liabilities around a closed system of linked balance sheets can be an informative exercise – because where leverage eventually turns up is not always clear at the outset. Likewise, SFC models may include detailed modelling of inventories, pricing and profits, or of changes in net worth due to asset price revaluation and price inflation. For such processes, even the accounting is non-trivial. Taking accounting seriously allows modellers to incorporate institutional complexity – something of increasing importance in today’s world.

The inclusion of detailed financial modelling allows the models to capture Godley’s view that agents aim to achieve certain stock-flow norms. These may include household debt-to-income ratios, inventories-to-sales ratios for firms and leverage ratios for banks. Many of the functional forms used implicitly capture these stock-flow ratios. This is the case for the simple consumption function used in the BoE paper discussed by Simon, as shown here. Of course, other functional specifications are possible, as in this model, for example, which includes a direct interest rate effect on consumption.
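
For concreteness, the canonical consumption function in this tradition (the specification in the BoE paper may differ in detail) takes the form:

C_t = \alpha_1 \, YD_t + \alpha_2 \, V_{t-1}, \qquad 0 < \alpha_2 < \alpha_1 < 1

where YD is disposable income and V is net wealth. Setting saving to zero shows that this implies a target wealth-to-income ratio of V/YD = (1 - \alpha_1)/\alpha_2, so the stock-flow norm is embedded in the behavioural parameters.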

Simon notes that adding basic financial accounting to standard models is trivial but ‘in most mainstream models these balances are of no consequence’. This is an important point, and should set alarm bells ringing. Simon identifies one reason for the neutrality of finance in standard models: ‘the simplicity of the dominant mainstream model of intertemporal consumption’.

There are deeper reasons why the financial sector has little role in standard macro. In the majority of standard DSGE macro models, the system automatically tends towards some long-run supply side-determined full-employment equilibrium – in other words the models incorporate Milton Friedman’s long-run vertical Phillips Curve. Further, in most DSGE models, income distribution has no long-run effect on macroeconomic outcomes.

Post-Keynesian economics, which provides much of the underlying theoretical structure of SFC models, takes issue with these assumptions. Instead, it is argued, Keynes was correct in his assertion that demand deficiency can lead economies to become stuck in equilibria characterised by under-employment or stagnation.

Now, if the economic system is always in the process of returning to the flexible-price full-employment equilibrium, then financial stocks will be, at most, of transitory significance. They may serve to amplify macroeconomic fluctuations, as in the Bernanke-Gertler-Gilchrist models, but they will have no long-run effects. This is the reason that DSGE models which do attempt to incorporate financial leverage also require additional ‘ad-hoc’ adjustments to the deeper model assumptions – for example this model by Kumhof and Ranciere imposes an assumption of non-negative subsistence consumption for households. As a result, when income falls, households are unable to reduce consumption but instead run up debt. For similar reasons, if one tries to abandon the loanable funds theory in DSGE models – one of the key reasons for the insistence on accounting in SFC models – this likewise raises non-trivial issues, as shown in this paper by Benes and Kumhof  (to my knowledge the only attempt so far to produce such a model).

Non-PK-SFC models, such as the UK’s OBR model, can therefore incorporate modelling of sectoral balances and leverage ratios – but these stocks have little effect on the real outcomes of the model.

On the contrary, if long-run disequilibrium is considered a plausible outcome, financial stocks may persist and feedbacks from these stocks to the real economy will have non-trivial effects. In such a situation, attempts by individuals or sectors to achieve some stock-flow ratio can alter the long-run behaviour of the system. If a balance-sheet recession persists, it will have persistent effects on the real economy – such hysteresis effects are increasingly acknowledged in the profession.

This relates to an earlier point made in Simon’s post: ‘the fact that leverage was allowed to increase substantially before the crisis was not something that most macroeconomists were even aware of … it just wasn’t their field’. I’m surprised this is presented as evidence for the defence of mainstream macro.

The central point made by economists like Minsky and Godley was that financial dynamics should be part of our field. The fact that by 2007 it wasn’t illustrates how badly mainstream macroeconomics went wrong. Between Real Business Cycle models, Rational Expectations, the Efficient Markets Hypothesis and CAPM, economists convinced themselves – and, more importantly, policy-makers – that the financial system was none of their business. The fact that economists forgot to look at leverage ratios wasn’t an absent-minded oversight. As Olivier Blanchard argues:

 ‘… mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.’

This is partially acknowledged by Simon when he argues that the ‘microfoundations revolution’ lies behind economists’ myopia on the financial system. Where I, of course, agree with Simon is that ‘had the microfoundations revolution been more tolerant of other methodologies … macroeconomics may well have done more to integrate the financial sector into their models before the crisis’. Putting aside the point that, for the most part, the microfoundations revolution didn’t actually lead to microfounded models, ‘integrating the financial sector’ into models is exactly what people like Godley, Lavoie and others were doing.

Simon also makes an important point in highlighting the lack of acknowledgement of antecedents by PK-SFC authors and, as a result, a lack of continuity between PK-SFC models and the earlier structural econometric models (SEMs) which were eventually killed off by the shift to microfounded models. There is a rich seam of work here – heterodox economists should both acknowledge this and draw on it in their own work. In many respects, I see the PK-SFC approach as a continuation of the SEM tradition – I was therefore pleased to read this paper in which Simon argues for a return to the use of SEMs alongside DSGE and VAR techniques.

To my mind, this is what is attempted in the Bank of England paper criticised by Simon – the authors develop a non-DSGE, econometrically estimated, structural model of the UK economy in which the financial system is taken seriously. Simon is right, however, that the theoretical justifications for the behavioural specifications and the connections to previous literature could have been spelled out more clearly.

The new Bank of England model is one of a relatively small group of empirically-oriented SFC models. Others include the Levy Institute model of the US, originally developed by Wynne Godley and now maintained by Gennaro Zezza, the UNCTAD Global Policy model, developed in collaboration with Godley’s old colleague Francis Cripps, and the Gudgin and Coutts model of the UK economy (the last of these is not yet fully stock-flow consistent but shares much of its theoretical structure with the other models).

One important area for improvement in these models lies with their econometric specification. The models tend to have large numbers of parameters, making them difficult to estimate other than through individual OLS regressions of behavioural relationships. PK-SFC authors can certainly learn from the older SEM tradition in this area.

I find another point of agreement in Simon’s statement that ‘heterodox economists need to stop being heterodox’. I wouldn’t state this so strongly – I think heterodox economists need to become less heterodox. They should identify and more explicitly acknowledge those areas in which there is common ground with mainstream economics.  In those areas where disagreement persists, they should try to explain more clearly why this is the case. Hopefully this will lead to more fruitful engagement in the future, rather than the negativity which has characterised some recent exchanges.

[1] Simon goes on to argue that stock-flow consistency is not ‘unique to Godley. When I was a young economist at the Treasury in the 1970s, their UK model was ‘stock-flow consistent’, and forecasts routinely looked at sector balances.’  During the 1970s, there was sustained debate between the Treasury and Godley’s Cambridge team, who were, aside from Milton Friedman’s monetarism, the most prominent critics of the Keynesian conventional wisdom of the time – there is an excellent history here. I don’t know the details but I wonder if the awareness of sectoral balances at the Treasury was partly due to Godley’s influence?

The Fable of the Ants, or Why the Representative Agent is No Such Thing


Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making process of one type of agent as a whole (for example consumers or firms) is reduced to that of a single ‘representative’ individual – and is taken to be identical to that assumed to characterise the behaviour of actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.
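
In its simplest form, this intertemporal trade-off is summarised by the standard consumption Euler equation:

u'(c_t) = \beta (1 + r_t) \, \mathbb{E}_t \left[ u'(c_{t+1}) \right]

where the fixed discount factor \beta embodies the externally imposed preferences referred to above, and r_t is the real interest rate.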

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way that an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces some emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base. This alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
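
As a rough illustration of how such rules-based behaviour can be simulated (a toy sketch of my own, not any particular published algorithm, with arbitrary parameter values), consider two routes to a single food source:

# Ants choose a route with probability proportional to its pheromone level,
# deposit pheromone in inverse proportion to route length on their return,
# and pheromone evaporates over time.
set.seed(42)

path_length <- c(short = 1, long = 2)   # relative lengths of the two routes
pheromone   <- c(short = 1, long = 1)   # start with equal pheromone on both
evaporation <- 0.05                     # fraction of pheromone lost per ant
n_ants      <- 200

choices <- character(n_ants)
for (i in seq_len(n_ants)) {
  # Route choice is probabilistic, weighted by current pheromone levels
  p_short    <- pheromone["short"] / sum(pheromone)
  choice     <- if (runif(1) < p_short) "short" else "long"
  choices[i] <- choice

  # The shorter route is reinforced more strongly per returning ant
  pheromone[choice] <- pheromone[choice] + 1 / path_length[choice]

  # Evaporation stops early random fluctuations locking in permanently
  pheromone <- (1 - evaporation) * pheromone
}

# Share of the last 50 ants using the short route (typically close to 1)
mean(tail(choices, 50) == "short")

No individual ant ‘knows’ which route is shorter; the colony-level selection of the short path emerges from the interaction of many ants with the pheromone environment.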

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have known for a long time that the opposite is the case. Formal proofs demonstrate that a population of agents, each represented using a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’ critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models would not be invariant to changes in government policy and therefore should not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimization.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals consume normal and inferior goods in fixed proportions, regardless of their income – a supermarket checkout worker will continue to consume baked beans and Swiss watches in unchanged proportions after she wins the lottery.

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
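
The standard functional form behind this assumption is constant relative risk aversion (CRRA) utility:

u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad -\frac{c \, u''(c)}{u'(c)} = \gamma

so the coefficient of relative risk aversion, \gamma, is the same at every level of consumption, which is precisely the restriction described above.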

Further, and unfortunately for Lucas, income distribution turns out to matter: if all individuals do not behave identically, then as income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but income distribution is not allowed to change.

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts. The composition of consumption shifts as income increases. It is hard to believe these restrictive special cases provide a sufficient basis on which to construct macro models which can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as a whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but models which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that old-fashioned Keynesian structural macro behavioural assumptions can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. Like the ant-colony, it can be shown that under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.

On ‘heterodox’ macroeconomics


Noah Smith has a new post on the failure of mainstream macroeconomics and what he perceives as the lack of ‘heterodox’ alternatives. Noah is correct about the failure of mainstream macroeconomics, particularly the dominant DSGE modelling approach. This failure is increasingly – if reluctantly – accepted within the economics discipline. As Brad DeLong puts it, DSGE macro has ‘… proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.’

I disagree with Noah, however, when he argues that ‘heterodox’ economics has little to offer as an alternative to the failed mainstream.

The term ‘heterodox economics’ is a difficult one. I dislike it and resisted adopting it for some time: I would much rather be ‘an economist’ than ‘a heterodox economist’. But it is clear that unless you accept – pretty much without criticism – the assumptions and methodology of the mainstream, you will not be accepted as ‘an economist’. This was not the case when Joan Robinson debated with Solow and Samuelson, or Kaldor debated with Hayek. But it is the case today.

The problem with ‘heterodox economics’ is that it is self-definition in terms of the other. It says ‘we are not them’ – but says nothing about what we are. This is because it includes everything outside of the mainstream, from reasonably well-defined and coherent schools of thought such as Post Keynesians, Marxists and Austrians, to much more nebulous and ill-defined discontents of all hues. To put it bluntly, a broad definition of ‘people who disagree with mainstream economics’ is going to include a lot of cranks. People will place the boundary between serious non-mainstream economists and cranks differently, depending on their perspective.

Another problem is that these schools of thought have fundamental differences. Aside from rejecting standard neoclassical economics, the Marxists and the Austrians don’t have a great deal in common.

Noah seems to define heterodox economics as ‘non-mathematical’ economics. This is inaccurate. There is much formal modelling outside of the mainstream. The difference lies with the starting assumptions. Mainstream macro starts from the assumption of inter-temporal optimisation and a system which returns to the supply-side-determined full-employment equilibrium in the long run. Non-mainstream economists reject these in favour of assumptions which they regard as more empirically plausible.

It is true that there are some heterodox economists, for example Tony Lawson and Ben Fine, who take the position that maths is an inappropriate tool for economics and should be rejected. (Incidentally, both were originally mathematicians.) This is a minority position, and one I disagree with. The view is influential, however. The highest-ranked heterodox economics journal, the Cambridge Journal of Economics, has recently changed its editorial policy to explicitly discourage the use of mathematics. This is a serious mistake in my opinion.

So Noah’s claim about mathematics is a straw man. He implicitly acknowledges this by discussing one class of mathematical Post Keynesian models, the so-called ‘stock-flow consistent’ models (SFC). He rightly notes that the name is confusing – any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.

SFC refers to a narrower set of models which incorporate detailed modelling of the ‘plumbing’ of the financial system alongside traditional macro Keynesian behavioural assumptions – and reject the standard inter-temporal optimising assumptions of DSGE macro. Marc Lavoie, who co-wrote the key textbook on the approach with Wynne Godley, admits the name is misleading and that, with hindsight, a more appropriate one should have been chosen. But names stick, so SFC joins a long tradition of badly-named concepts in economics such as ‘real business cycles’ and ‘rational expectations’.

Noah claims that ‘vague ideas can’t be tested against the data and rejected’.  While the characterisation of all heterodox economics as ‘vague ideas’ is another straw man, the falsifiability point is important. As Noah points out, ‘One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models.’ He also notes that big SFC models have so many parameters that they are essentially impossible to fit to the data.

This raises an important question about what we want economic models to do, and what the criteria should be for acceptance or rejection. The belief that models should provide quantitative predictions of the future has been much too strongly held. Economists need to come to terms with the reality that the future is unknowable – no model will reliably predict the future. For a while, DSGE models seemed to do a reasonable job. With hindsight, this was largely because enough degrees of freedom were added when converting them to econometric equations that they could do a reasonably good job of projecting past trends forward, along with some mean reversion.  This predictive power collapsed totally with the crisis of 2008.

Models then should be seen as ways to gain insight over the mechanisms at work and to test the implications of combining assumptions. I agree with Narayana Kocherlakota when he argues that we need to return to smaller ‘toy models’ to think through economic mechanisms. Larger econometrically estimated models are useful for sketching out future scenarios – but the predictive power assigned to such models needs to be downplayed.

So the question is then – what are the correct assumptions to make when constructing formal macro models? Noah argues that Post Keynesian models ‘don’t take human behaviour into account – the equations are typically all in terms of macroeconomic aggregates – there’s a good chance that the models could fail if policy changes make consumers and companies act differently than expected’.

This is of course Robert Lucas’s critique of structural econometric modelling. This critique was a key element in the ‘microfoundations revolution’ which ushered in the so-called Real Business Cycle models which form the core of the disastrous DSGE research programme.

The critique is misguided, however. Aggregate behavioural relationships do have a basis in individual behaviour. As Bob Solow puts it:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. … Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual – and, to some extent, market-behavior.

In many ways, aggregate behavioural specifications can make a stronger claim to be based in microeconomic behaviour than the representative agent DSGE models which came to dominate mainstream macro. (I will expand on this point in a separate blog.)

Mainstream macro has reached the point that only two extremes are admitted: formal, internally consistent DSGE models, and atheoretical testing of the data using VAR models. Anything in between – such as structural econometric modelling – is rejected. As Simon Wren-Lewis has argued, this theoretical extremism cannot be justified.

Crucial issues and ideas emphasised by heterodox economists were rejected for decades by the mainstream while it was in thrall to representative-agent DSGE models. These ideas included the role of income distribution, the importance of money, credit and financial structure, the possibility of long-term stagnation due to demand-side shortfalls, the inadequacy of reliance on monetary policy alone for demand management, and the possibility of demand affecting the supply side. All of these ideas are, to a greater or lesser extent, now gradually becoming accepted and absorbed by the mainstream – in many cases with no acknowledgement of the traditions which continued to discuss and study them even as the mainstream dismissed them.

Does this mean that there is a fully-fledged ‘heterodox economics’ waiting in the wings, ready to take over from mainstream macro? It depends what is meant – is there a complete model of the economy sitting in a computer waiting for someone to turn it on? No – but there never will be, either within the mainstream or outside it. But Lavoie argues,

if by any bad luck neoclassical economics were to disappear completely from the surface of the Earth, this would leave economics utterly unaffected because heterodox economics has its own agenda, or agendas, and its own methodological approaches and models.

I think this conclusion is too strong – partly because I don’t think the boundary between neoclassical economics and heterodox economics is as clear as some claim. But it highlights the rich tradition of ideas and models outside of the mainstream – many of which have stood the test of time much better than DSGE macro. It is time these ideas were acknowledged.

What do immigration numbers tell us about the Brexit vote?

A couple of weeks ago I tweeted a chart from The Economist which plotted the percentage increase in the foreign-born population in UK local authority areas against the number of Leave votes in that area. I also quoted the accompanying article: ‘Where foreign-born populations increased by more than 200%, a Leave vote followed in 94% of cases.’

[Chart: The Economist’s plot of the percentage increase in the foreign-born population against the Leave vote]

This generated lots of responses, many of which rightly pointed out the problems with the causality implied in the quote. These included the following:

  • Using the percentage change in foreign-born population is problematic because this will be highly sensitive to the initial size of population.
  • Majority leave votes also occurred in many areas where the number of migrants had fallen.
  • Much of the result is driven by a relatively small number of outliers while the systemic relationship looks to be flat.
  • The number of points where foreign-born populations had increased by more than 200% was small relative to the total sample: around twenty points out of several hundred.

All these criticisms are valid. With hindsight, the Economist probably shouldn’t have published the chart and article – and I shouldn’t have tweeted it. But the discussion on Twitter got me interested in whether the geographical data can tell us anything interesting about the Leave vote.

I started by trying to reproduce the Economist’s chart. The time period they use for the change in foreign-born population is 2001-2014. This presumably means they used census data for the 2001 numbers and ONS population estimates for 2014. My attempt to reproduce the graph using these datasets is shown below. The data points are colour-coded by geographical region and the size of the data point represents the size of the foreign-born population in 2014 as a percentage of the total. (The chart is slightly different to the one I previously tweeted, which had some data problems.)

[Chart: reproduction of the Economist plot, 2001-2014, with points coloured by region and sized by the 2014 foreign-born population share]

Despite the problems described above, the significance of geography in the vote is clear – this is emphasised in the excellent analysis published recently by the Resolution Foundation and by Geoff Tily at the TUC (see also this in the FT and this in the Guardian).

Of the English and Welsh regions, it is clear that the Remain vote was overwhelmingly driven by London (the chart above excludes Scotland and Northern Ireland, both of which voted to Remain). Other areas which have seen substantial growth in foreign-born populations and also voted to Remain are cities such as Oxford, Cambridge, Bristol, Manchester and Liverpool.

A better way to look at this data is to plot the percentage point change in foreign population instead of the percentage increase. This will prevent small initial foreign-born populations producing large percentage increases. The result is shown below. For this, and the rest of the analysis that follows, I’ve used the ONS estimates of the foreign-born population. This reduces the number of years to 2004-2014, but excludes possible errors due to incompatibility between the census data and ONS estimates. It also allows for inclusion of Scottish data (but not data from Northern Ireland). I’ve also flipped the X and Y axes: if we are thinking of the Leave vote as the thing we wish to explain, it makes more sense to follow convention and put it on the Y axis.

[Chart: Leave vote against percentage point change in foreign-born population, 2004-2014]

There is no statistically significant relationship between the two variables in the chart above. The divergence between London, Scotland and the rest of the UK is clear, however. There also looks to be a positive relationship between the increase in foreign-born population and the Leave vote within London. This can be seen more clearly if the regions are plotted separately.

[Chart: Leave vote against percentage point change in foreign-born population, plotted separately by region]

The only region in which there is a statistically significant relationship in a simple regression between the two variables is London. A one percent increase in the foreign-born population is associated with a 1.5 percent increase in the Leave vote (with an R-squared of about 0.4). The chart below shows the London data in isolation.

[Chart: Leave vote against percentage point change in foreign-born population, London boroughs only]

The net inflow of migrants appears to have been greatest in the outer boroughs of London – and these boroughs also returned the highest Leave votes. There are a number of possible explanations for this. One is that new migrants go to where housing is affordable – which means the outer regions of London. These are also the areas where incomes are likely to be lower. There is some evidence for this, as shown in the chart below: there is a negative relationship – albeit a weak one – between the increase in the foreign-born population and the median wage in the area.

[Chart: percentage point increase in foreign-born population against median wage, London boroughs]

Returning to the UK as a whole (excluding Northern Ireland), the Resolution Foundation finds that there is a statistically significant relationship between the percentage point increase in foreign-born population and Leave vote when the size of the foreign-born population is controlled for. This is confirmed in the following simple regression, where FB.PP.Incr is the percentage point increase in the foreign-born population and FB.Pop.Pct is the foreign-born population as a percent of the total.

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 57.19258    0.71282  80.235  < 2e-16 ***
FB.PP.Incr   0.90665    0.17060   5.314 1.87e-07 ***
FB.Pop.Pct  -0.64344    0.05984 -10.752  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 9.002 on 363 degrees of freedom
Multiple R-squared: 0.2475, Adjusted R-squared: 0.2433
F-statistic: 59.69 on 2 and 363 DF, p-value: < 2.2e-16
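
For readers who want to reproduce this, the specification amounts to an ordinary least squares regression along the following lines (the data frame and the name of the Leave-share variable are placeholders of mine, not the names used to generate the output above):

# Leave.Pct: Leave vote share by local authority (placeholder name)
# FB.PP.Incr: percentage point increase in the foreign-born population, 2004-2014
# FB.Pop.Pct: foreign-born share of the population in 2014
summary(lm(Leave.Pct ~ FB.PP.Incr + FB.Pop.Pct, data = la_data))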

It is clear that controlling for the foreign-born population is, in large part, controlling for London. This is illustrated in the chart below which shows the foreign-born population as a percentage of the total for each local authority in 2014, grouped by broad geographical region. The boxplots in the background show the mean and interquartile ranges of foreign-born population share by region. The size of the data points represents the size of the electorate in that local authority.

[Chart: foreign-born population share by local authority in 2014, grouped by region, with boxplots showing regional means and interquartile ranges; point size represents the size of the electorate]

This highlights a problem with the analysis so far – and for others doing regional analysis on the basis of local authority data. By taking each local authority as a single data point, statistical analysis misses the significance of differences in the size of electorates. This is important because it means, for example, that the Leave vote of 57% from Richmondshire, North Yorkshire, with around 27,000 votes cast is given the same weight as the Leave vote of 57% in County Durham, with around 270,000 votes cast.

This can be overcome by constructing an index of referendum voting weighted by the size of the electorate in each area. This index is constructed so that it is equal to zero where the Leave vote was 50%, negative for areas voting Remain, and positive for areas voting Leave. The magnitude of the index represents the strength of the contribution to the overall result. Plotting this index against the percentage point change in the foreign population produces the following chart. Data point sizes represent the number of votes in each area.
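
One plausible construction of such an index (the exact formula isn’t spelled out here, so this is purely illustrative, with leave_pct and votes_cast as assumed column names) is the net Leave margin scaled by turnout:

# Zero at a 50% Leave share, negative for Remain areas,
# larger in magnitude where more votes were cast
la_data$leave_index <- (la_data$leave_pct - 50) / 100 * la_data$votes_cast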

[Chart: weighted Leave index against percentage point change in foreign-born population]

Again, there is no statistically significant relationship between the two variables, but as with the unweighted data, when controlling for the foreign-born population, a positive relationship does exist between the increase in the foreign-born population and Leave votes.

The outliers are different to those seen in the unweighted voting data, however – particularly in areas with a strong Leave vote. This can be seen more clearly by removing the two areas with the strongest Remain votes: London and Scotland. The data for the rest of England and Wales only are shown below.

[Chart: weighted Leave index against percentage point change in foreign-born population, England and Wales excluding London]

There is a clear split between the strong Leave outliers and the strong Remain outliers. The latter are Bristol, Brighton, Manchester, Liverpool and Cardiff. When weighted by size of vote, the previous outliers for Leave – Eastern areas such as Boston and South Holland – are replaced by towns and cities in the West Midlands and Yorkshire, and by the counties of Cornwall and County Durham.

Overall, while there is a relationship between net migration inflows and Leave votes – at least when controlling for the size of the foreign-born population – it is only a small part of the story. The most compelling discussions I’ve seen of the underlying causes of the Leave vote are those which emphasise the rise in precarity and the loss of social cohesion and identity in the lives of working people, such as John Lanchester’s piece in the London Review of Books (despite the errors), the excellent follow-up piece by blogger Flip-Chart Rick, and this piece by Tony Hockley. As Geoff Tily argues, the geographical distribution of votes strongly suggests economic dissatisfaction was a key driver of the Leave vote, which pitted ‘cosmopolitan cities’ against the rest of the country. This is compatible with the pattern shown above, where the strongest Leave votes are concentrated in ex-industrial areas and the strongest Remain votes in the ‘cosmopolitan cities’.

The chart below shows the weighted Leave vote plotted against median gross weekly pay.

[Chart: weighted Leave index against median gross weekly pay]

Scotland as a whole is once again the outlier, while much of the relationship appears to be driven by London, where wages are higher and the majority voted Remain. Removing these two regions gives the following graph.

[Chart: weighted Leave index against median gross weekly pay, excluding London and Scotland]

Aside from the outlier Remain cities, there is a negative relationship between median pay and weighted Leave votes. The statistical strength of this relationship is relatively weak, however.

Putting all the variables together produces the following regression result:

Coefficients:
 Estimate Std. Error t value Pr(>|t|) 
(Intercept) 80.98722 12.18838 6.645 1.12e-10 ***
FB.PP.Incr 2.46269 0.57072 4.315 2.06e-05 ***
FB.Pop.Pct -1.61904 0.21781 -7.433 7.72e-13 ***
Median.Wage -0.12539 0.02404 -5.216 3.08e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 29 on 362 degrees of freedom
Multiple R-squared: 0.2977, Adjusted R-squared: 0.2919 
F-statistic: 51.15 on 3 and 362 DF, p-value: < 2.2e-16

Leave votes are negatively associated with the size of the foreign-born population and with the median wage, and positively associated with increases in the foreign-born population. The R^2 value of 0.3 suggests this model has some predictive power, but could certainly be improved. Adding regional dummy variables to the specification gives the following result:

Coefficients:
 Estimate Std. Error t value Pr(>|t|) 
(Intercept) 107.61139 13.30665 8.087 9.97e-15 ***
FB.PP.Incr 2.92817 0.49930 5.865 1.04e-08 ***
FB.Pop.Pct -2.34394 0.27140 -8.636 < 2e-16 ***
Median.Wage -0.14360 0.02313 -6.210 1.50e-09 ***
RegionEast Midlands -9.07601 5.44978 -1.665 0.09672 . 
RegionLondon 9.44698 8.34896 1.132 0.25861 
RegionNorth East -4.11112 8.02869 -0.512 0.60893 
RegionNorth West -16.69448 5.51048 -3.030 0.00263 ** 
RegionScotland -61.65217 5.76312 -10.698 < 2e-16 ***
RegionSouth East -4.60717 4.64123 -0.993 0.32156 
RegionSouth West -18.73821 5.55187 -3.375 0.00082 ***
RegionWales -27.65673 6.53577 -4.232 2.96e-05 ***
RegionWest Midlands 4.06613 5.83469 0.697 0.48633 
RegionYorkshire and The Humber 4.72398 6.61676 0.714 0.47574 
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 24 on 352 degrees of freedom
Multiple R-squared: 0.5323, Adjusted R-squared: 0.515 
F-statistic: 30.82 on 13 and 352 DF, p-value: < 2.2e-16


Adding regional dummy variables improves the fit of the model substantially – increasing the value of R^2 to around 0.5. This suggests – unsurprisingly – there are differences between regions which are not captured in the three variables included here.
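
In R terms, this simply means adding the region to the earlier specification as a factor variable, which lm() expands automatically into one dummy per region (variable names are again placeholders of mine, with Leave.Weighted standing in for the weighted Leave index):

# The omitted region is absorbed into the intercept
summary(lm(Leave.Weighted ~ FB.PP.Incr + FB.Pop.Pct + Median.Wage + Region,
           data = la_data))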

Immigration brings both benefits and costs – but no reason to leave

If UK voters decide to leave the European Union, it will be for one reason above all. From the outset, nationalism bordering on xenophobia has been a defining feature of the Leave campaign. Having lost the argument on broader economic issues, it looks likely the Leave camp will fight the final month of the campaign on immigration. The scapegoating of migrants for the UK’s economic problems will become increasingly unrestrained as the referendum date approaches.

It is not difficult to understand why the Leave camp has chosen to focus on immigration: it is the issue which matters most to those likely to vote for Brexit. Fear that immigration undermines living standards and increases precarity is strong. The anti-European political right has harnessed this fear in a cynical attempt to exploit the insecurity of working class voters in the era of globalisation.

It is countered by Remain campaign statements emphasising that immigration is good for the economy: there are fiscal benefits, immigrants bring much-needed skills and –  because migrants are mostly of working age – immigration offsets the effects of an ageing population.

These claims are well-founded. But immigration has both positive and negative effects. Like other facets of globalisation, the impact of immigration is felt unevenly.

At its simplest, the pro-immigration argument is that migrants find work without displacing native workers, thus increasing the size of the economy. This argument is a valid way to dispel the ‘lump of labour’ fallacy and counter naive arguments that immigration automatically costs jobs. But it does not prove immigration is necessarily positive: an increasing population also puts pressure on housing, the environment and public services.

A stronger position is taken by those who claim that immigration increases GDP per capita – migrants raise labour productivity. It is difficult to interpret the evidence on this, since productivity is simultaneously determined by many factors. But even those who argue that the evidence supports this position find the effect to be very weak. Positive effects on productivity are likely to be due to skilled migrants being hired as a result of the UK ‘skills gap’.

But not all – or even most – immigrants are in highly skilled work. Despite being well-educated, many come looking for whatever work they can find and are willing to work for low wages. A third of EU nationals in the UK are employed in ‘elementary and processing occupations’. What is the effect of an increasing pool of cheap labour looking for low-skilled work? The evidence suggests there is little effect on employment rates over the long run. There may, however, be displacement effects in the short run. In particular, when the labour market is slack – during recessions – the job prospects of low-paid and unskilled workers may be damaged by migrant inflows.

The evidence on wages likewise suggests effects are small, but again there appears to be some impact of immigration on the wages of low-skilled workers. There is also evidence of labour market segmentation: migrants are disproportionately represented in the seasonal, temporary and ‘flexible’ (i.e. precarious) workforce.

Further, much of the evidence on employment and wages comes from a period of high growth and strong economic performance. This may not be a reliable guide to the future. It is possible that more significant negative effects could emerge, particularly if the economy remains weak.

Economists on the Remain side downplay the negative effects of immigration, presenting it as unequivocally good for the UK economy. It is undoubtedly difficult to present a nuanced argument in the short space available for a media sound-bite. But it is possible that the line taken by the Remain camp plays into the hands of the Leave campaign.

Aside from the skills they bring – around a quarter of NHS doctors are foreign nationals – the main benefit of immigration is the effect on demographics. Without inward migration, the UK working age population would have already peaked. But ageing cannot be postponed indefinitely.

Rapid population growth leads to pressures on public services, housing and infrastructure unless there are on-going programmes of investment, upgrading of infrastructure and house building. Careful planning is required to ensure that public services are available before migrants arrive – otherwise there will be a period while services are under pressure before more capacity is added.

Long-run investment in public services, infrastructure and housing is exactly what the UK has not been doing. Instead, we are more than five years into an unnecessary austerity programme. Our infrastructure is ageing and suffers from lack of capacity. Wages have yet to recover to pre-crisis levels. Government services continue to be cut, even as the population increases.

Those who face pressure on their standard of life from weak wage growth and rising housing costs will understandably find it difficult to disentangle the causes of their problems. For many, immigration will not be the reason – but it will be more visible and tangible than austerity, lack of aggregate demand and weak labour bargaining power.

The root of the problem is that the UK is increasingly a low-wage, low-skill economy. There is a shortage of affordable housing and public services are facing the deepest cuts in decades. None of these problems would be solved by the reorganised Conservative government that would take power immediately following a vote to leave the EU. Instead, it is clear that much of the Leave camp favours a Thatcherite programme of further cuts and deregulation.

Campaigners for Leave will continue to use immigration as a way to take Britain out of the EU. They are wrong. This is cynical exploitation of genuine problems and fears faced by many low-wage workers.  Immigration is not a reason to leave the European Union.

But the status quo of high immigration alongside cuts to public services and wage stagnation cannot continue indefinitely. If high levels of migration are to continue, as looks likely, the UK government must consider how to accommodate the rapidly increasing population. Government services must keep pace with population increases. Pressures will be particularly acute in London and the South East.

We must also be more open in admitting that immigration has both costs and benefits – it does not affect the population evenly. Liberal commentators should acknowledge the concerns of those facing the negative effects of immigration. In doing so, they may lessen the chances that voters fall for the false promises of the Leave campaign.

 

This article is part of the EREP report on the EU referendum, ‘Remain for Change’. The authors of the report are:

John Weeks, Professor Emeritus of Development Economics at SOAS
Ann Pettifor, Director of Policy Research in Macroeconomics
Özlem Onaran, Professor of economics, Director of Greenwich Political Economy Research Centre
Jo Michell, Senior Lecturer in economics, University of the West of England
Howard Reed, Director of Landman Economics.
Andrew Simms, co-founder New Weather Institute, fellow of the New Economics Foundation.
John Grahl, Professor of European Integration, Middlesex University.
Engelbert Stockhammer, Professor, School of Economics, Politics and History, Kingston University
Giovanni Cozzi, Senior Lecturer in economics, Greenwich Political Economy Research Centre
Jeremy Smith, Co-director of Policy Research in Macroeconomics, convenor of EREP

 

 

Economics: science or politics? A reply to Kay and Romer

Romer’s article on ‘mathiness’ triggered a debate in the economics blogs last year. I didn’t pay a great deal of attention at the time; that economists were using relatively trivial yet abstruse mathematics to disguise their political leanings didn’t seem a particularly penetrating insight.

Later in the year, I read a comment piece by John Kay on the same subject in the Financial Times. Kay’s article, published under the headline ‘Economists should keep to the facts, not feelings’, was sufficiently cavalier with the facts that I felt compelled to respond. I was not the only one – Geoff Harcourt wrote a letter supporting my defence of Joan Robinson and correcting Kay’s inaccurate description of her as a Marxist.

After writing the letter, I found myself wondering why a serious writer like Kay would publish such carelessly inaccurate statements. Following a suggestion from Matheus Grasselli, I turned to Romer’s original paper:

Economists usually stick to science. Robert Solow was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson was engaged in academic politics when she waged her campaign against capital and the aggregate production function …

Solow’s mathematical theory of growth mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly. The tight connection between the word and the equations gave the word a precise meaning that facilitated equally tight connections between theoretical and empirical claims. Gary Becker’s mathematical theory of wages gave the words ‘human capital’ the same precision …

Once again, the facts appear to have fallen by the wayside. The issue at the heart of the debates involving Joan Robinson, Robert Solow and others is whether it is valid to  represent a complex macroeconomic system (such as a country) with a single ‘aggregate’ production function. Solow had been working on the assumption that the macroeconomic system could be represented by the same microeconomic mathematical function used to model individual firms. In particular, Solow and his neoclassical colleagues assumed that a key property of the microeconomic version – that labour will be smoothly substituted for capital as the rate of interest rises – would also hold at the aggregate level. It would then be reasonable to produce simple macroeconomic models by assuming a single production function for the whole economy, as Solow did in his famous growth model.

Joan Robinson and her UK Cambridge colleagues showed this was not true. They demonstrated cases (capital reversing and reswitching) which contradicted the neoclassical conclusions about the relationship between the choice of technique and the rate of interest. One may accept the assumption that individual firms can be represented as neoclassical production functions, but concluding that the economy can then also be represented by such a function is a logical error.

One important reason is that the capital goods which enter production functions as inputs are not identical, but instead have specific properties. These differences make it all but impossible to find a way to measure the ‘size’ of any collection of capital goods. Further, in Solow’s model, the distinction between capital goods and consumption goods is entirely dissolved – the production function simply generates ‘output’ which may either be consumed or accumulated. What Robinson demonstrated was that it was impossible to accurately measure capital independently of prices and income distribution. But since, in an aggregate production function, income distribution is determined by marginal productivity – which in turn depends on quantities – it is impossible to avoid arguing in a circle. Romer’s assertion of a ‘tight connection between the word and the equations’ is a straightforward misrepresentation of the facts.

The assertion of ‘equally tight connections between theoretical and empirical claims’ is likewise misplaced. As Anwar Shaikh showed in 1974, it is straightforward to demonstrate that Solow’s ‘evidence’ for the aggregate production function is no such thing. In fact, what Solow and others were testing turned out to be national accounting identities. Shaikh demonstrated that, as long as labour and capital shares are roughly constant – the ‘Kaldor facts’ – then any structure of production will produce empirical results consistent with an aggregate Cobb-Douglas production function. The aggregate production function is therefore ‘not even wrong: it is not a behavioral relationship capable of being statistically refuted’.
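
A sketch of the algebra behind Shaikh’s point: start from the income identity and hold the factor shares constant,

Y \equiv wL + rK, \qquad \alpha \equiv \frac{wL}{Y} \;\text{constant.}

Taking log-differentials and integrating gives

\hat{Y} = \alpha(\hat{w} + \hat{L}) + (1-\alpha)(\hat{r} + \hat{K}) \quad \Rightarrow \quad Y = B \, L^{\alpha} K^{1-\alpha}, \qquad B \propto w^{\alpha} r^{1-\alpha},

so any data consistent with the identity and roughly constant shares will appear to fit a Cobb-Douglas function, with measured ‘total factor productivity’ simply tracking a weighted average of the wage and the rate of profit.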

As I noted in my letter to the FT, Robinson’s neoclassical opponents conceded the argument on capital reversing and reswitching: Kay’s assertion that Solow ‘won easily’ is inaccurate. In purely logical terms Robinson was the victor, as Samuelson acknowledged when he wrote, ‘If all this causes headaches for those nostalgic for the parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life.’

What matters, as Geoff Harcourt correctly points out, is that the conceptual implications of the debates remain unresolved. Neoclassical authors, such as Cohen and Harcourt’s co-editor, Christopher Bliss, argue that the logical results,  while correct in themselves, do not undermine marginalist theory to the extent claimed by (some) critics. In particular, he argues, the focus on capital aggregation is mistaken. One may instead, for example, drop Solow’s assumption that capital goods and consumer goods are interchangeable: ‘Allowing capital to be different from other output, particularly consumption, alters conclusions radically.’ (p. xviii). Developing models on the basis of disaggregated optimising agents will likewise produce very different, and less deterministic, results.

But Bliss also notes that this wasn’t the direction that macroeconomics chose. Instead, ‘Interest has shifted from general equilibrium style (high-dimension) models to simple, mainly one-good models … the representative agent is now usually the model’s driver.’ Solow himself characterised this trend as ‘dumb and dumber in macroeconomics’. As the great David Laidler – like Robinson, no Marxist – observes, the now unquestioned use of representative agents and aggregate production functions means that ‘largely undiscussed problems of capital theory still plague much modern macroeconomics’.

It should by now be clear that the claim of ‘mathiness’ is a bizarre one to level against Joan Robinson: she won a theoretical debate at the level of pure logic, even if the broader implications remain controversial. Why then does Paul Romer single her out as the villain of the piece? – ‘Where would we be now if Solow’s math had been swamped by Joan Robinson’s mathiness?’

One can only speculate, but it may not be a coincidence that Romer has spent his career constructing models based on aggregate production functions – the so-called ‘neoclassical endogenous growth models’ that Ed Balls once claimed to be so enamoured with. Romer has repeatedly been tipped for the Nobel Prize, despite the fact that his work doesn’t appear to explain very much about the real world. In Krugman’s words, ‘too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.’ So much for those tight connections between theoretical and empirical claims.

So where does this leave macroeconomics? Bliss is correct that the results of the Controversy do not undermine the standard toolkit of methodological individualism: marginalism, optimisation and equilibrium. Robinson and her colleagues demonstrated that one specific tool in the box – the aggregate production function – suffers from deep internal logical flaws. But the Controversy is only one example of the tensions generated when one insists on modelling social structures as the outcome of adversarial interactions between individuals. Other examples include the Sonnenschein-Mantel-Debreu results and Arrow’s Impossibility Theorem.

As Ben Fine has pointed out, there are well-established results from the philosophy of mathematics and science that suggest deep problems for those who insist on methodological individualism as the only way to understand social structures. Trying to conceptualise a phenomenon such as money on the basis of aggregation over self-interested individuals is a dead end. But economists are not interested in philosophy or methodology. They no longer even enter into debates on the subject – instead, the laziest dismissals suffice.

But where does methodological individualism stop? What about language, for example? Can this be explained as a way for self-interested individuals to overcome transaction costs? The result of this myopia, Fine argues, is that economists ‘work with notions of mathematics and science that have been rejected by mathematicians and scientists themselves for a hundred years and more.’

This brings us back to ‘mathiness’. DeLong characterises this as ‘restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.’ What is very rarely discussed, however, is the insistence that microfounded models are the only acceptable form of economic theory. But the New Classical revolution in economics, which ushered in the era of microfounded macroeconomics, was itself a political project. As its leading light, Nobel laureate Robert Lucas, put it, ‘If these developments succeed, the term “macroeconomic” will simply disappear from use and the modifier “micro” will become superfluous.’ The statement is not greatly different in intent and meaning from Thatcher’s famous claim that ‘there is no such thing as society’. Lucas never tried particularly hard to hide his political leanings: in 2004 he declared, ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ (He also declared, five years before the crisis of 2008, that the ‘central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.’)

As a result of Lucas’ revolution, the academic economics profession purged those who dared to argue that some economic phenomena cannot be explained by competition between selfish individuals. Abstract microfounded theory replaced empirically-based macroeconomic models, despite generating results which are of little relevance for real-world policy-making. As Simon Wren-Lewis puts it, ‘students are taught that [non-microfounded] methods of analysing the economy are fatally flawed, and that simulating DSGE models is the only proper way of doing policy analysis. This is simply wrong.’

I leave the reader to decide where the line between science and politics should be drawn.