
On ‘heterodox’ macroeconomics


Noah Smith has a new post on the failure of mainstream macroeconomics and what he perceives as the lack of ‘heterodox’ alternatives. Noah is correct about the failure of mainstream macroeconomics, particularly the dominant DSGE modelling approach. This failure is increasingly – if reluctantly – accepted within the economics discipline. As Brad DeLong puts it, DSGE macro has ‘… proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.’

I disagree with Noah, however, when he argues that ‘heterodox’ economics has little to offer as an alternative to the failed mainstream.

The term ‘heterodox economics’ is a difficult one. I dislike it and resisted adopting it for some time: I would much rather be ‘an economist’ than ‘a heterodox economist’. But it is clear that unless you accept – pretty much without criticism – the assumptions and methodology of the mainstream, you will not be accepted as ‘an economist’. This was not the case when Joan Robinson debated with Solow and Samuelson, or Kaldor debated with Hayek. But it is the case today.

The problem with ‘heterodox economics’ is that it is self-definition in terms of the other. It says ‘we are not them’ – but says nothing about what we are. This is because it includes everything outside of the mainstream, from reasonably well-defined and coherent schools of thought such as the Post Keynesians, Marxists and Austrians, to much more nebulous and ill-defined discontents of all hues. To put it bluntly, a definition as broad as ‘people who disagree with mainstream economics’ is going to include a lot of cranks. People will draw the boundary between serious non-mainstream economists and cranks differently, depending on their perspective.

Another problem is that these schools of thought have fundamental differences. Aside from rejecting standard neoclassical economics, the Marxists and the Austrians don’t have a great deal in common.

Noah seems to define heterodox economics as ‘non-mathematical’ economics. This is inaccurate. There is much formal modelling outside of the mainstream. The difference lies with the starting assumptions. Mainstream macro starts from the assumption of inter-temporal optimisation and a system which returns to the supply-side-determined full-employment equilibrium in the long run. Non-mainstream economists reject these in favour of assumptions which they regard as more empirically plausible.

It is true that there are some heterodox economists, for example Tony Lawson and Ben Fine, who take the position that maths is an inappropriate tool for economics and should be rejected. (Incidentally, both were originally mathematicians.) This is a minority position, and one I disagree with. The view is influential, however. The highest-ranked heterodox economics journal, the Cambridge Journal of Economics, has recently changed its editorial policy to explicitly discourage the use of mathematics. This is a serious mistake in my opinion.

So Noah’s claim about mathematics is a straw man. He implicitly acknowledges this by discussing one class of mathematical Post Keynesian models, the so-called ‘stock-flow consistent’ models (SFC). He rightly notes that the name is confusing – any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.

SFC refers to a narrower set of models which incorporate detailed modelling of the ‘plumbing’ of the financial system alongside traditional macro Keynesian behavioural assumptions – and reject the standard inter-temporal optimising assumptions of DSGE macro. Marc Lavoie, who originally came up with the name, admits it is misleading and, with hindsight, a more appropriate name should have been chosen. But names stick, so SFC joins a long tradition of badly-named concepts in economics such as ‘real business cycles’ and ‘rational expectations’.

Noah claims that ‘vague ideas can’t be tested against the data and rejected’.  While the characterisation of all heterodox economics as ‘vague ideas’ is another straw man, the falsifiability point is important. As Noah points out, ‘One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models.’ He also notes that big SFC models have so many parameters that they are essentially impossible to fit to the data.

This raises an important question about what we want economic models to do, and what the criteria should be for acceptance or rejection. The belief that models should provide quantitative predictions of the future has been much too strongly held. Economists need to come to terms with the reality that the future is unknowable – no model will reliably predict the future. For a while, DSGE models seemed to do a reasonable job. With hindsight, this was largely because enough degrees of freedom were added when converting them to econometric equations that they could do a reasonably good job of projecting past trends forward, along with some mean reversion.  This predictive power collapsed totally with the crisis of 2008.

Models then should be seen as ways to gain insight over the mechanisms at work and to test the implications of combining assumptions. I agree with Narayana Kocherlakota when he argues that we need to return to smaller ‘toy models’ to think through economic mechanisms. Larger econometrically estimated models are useful for sketching out future scenarios – but the predictive power assigned to such models needs to be downplayed.

So the question is then – what are the correct assumptions to make when constructing formal macro models? Noah argues that Post Keynesian models ‘don’t take human behaviour into account – the equations are typically all in terms of macroeconomic aggregates – there’s a good chance that the models could fail if policy changes make consumers and companies act differently than expected’.

This is of course Robert Lucas’s critique of structural econometric modelling. This critique was a key element in the ‘microfoundations revolution’ which ushered in the so-called Real Business Cycle models which form the core of the disastrous DSGE research programme.

The critique is misguided, however. Aggregate behavioural relationships do have a basis in individual behaviour. As Bob Solow puts it:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. … Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual – and, to some extent, market-behavior.

In many ways, aggregate behavioural specifications can make a stronger claim to be based in microeconomic behaviour than the representative agent DSGE models which came to dominate mainstream macro. (I will expand on this point in a separate blog.)

Mainstream macro has reached the point that only two extremes are admitted: formal, internally consistent DSGE models, and atheoretical testing of the data using VAR models. Anything in between – such as structural econometric modelling – is rejected. As Simon Wren-Lewis has argued, this theoretical extremism cannot be justified.

Crucial issues and ideas emphasised by heterodox economists were rejected for decades by the mainstream while it was in thrall to representative-agent DSGE models. These ideas included the role of income distribution, the importance of money, credit and financial structure, the possibility of long-term stagnation due to demand-side shortfalls, the inadequacy of reliance on monetary policy alone for demand management, and the possibility of demand affecting the supply side. All of these ideas are, to a greater or lesser extent, now gradually becoming accepted and absorbed by the mainstream – in many cases with no acknowledgement of the traditions which continued to discuss and study them even as the mainstream dismissed them.

Does this mean that there is a fully-fledged ‘heterodox economics’ waiting in the wings, ready to take over from mainstream macro? It depends what is meant – is there a complete model of the economy sitting in a computer, waiting for someone to turn it on? No – but there never will be, either within the mainstream or outside it. But Lavoie argues,

if by any bad luck neoclassical economics were to disappear completely from the surface of the Earth, this would leave economics utterly unaffected because heterodox economics has its own agenda, or agendas, and its own methodological approaches and models.

I think this conclusion is too strong – partly because I don’t think the boundary between neoclassical economics and heterodox economics is as clear as some claim. But it highlights the rich tradition of ideas and models outside of the mainstream – many of which have stood the test of time much better than DSGE macro. It is time these ideas were acknowledged.


What do immigration numbers tell us about the Brexit vote?

A couple of weeks ago I tweeted a chart from The Economist which plotted the percentage increase in the foreign-born population in UK local authority areas against the number of Leave votes in that area. I also quoted the accompanying article: ‘Where foreign-born populations increased by more than 200%, a Leave vote followed in 94% of cases.’

[Chart: The Economist – percentage increase in foreign-born population vs Leave vote]

This generated lots of responses, many of which rightly pointed out the problems with the causality implied in the quote. These included the following:

  • Using the percentage change in the foreign-born population is problematic because this measure is highly sensitive to the initial size of the population.
  • Majority leave votes also occurred in many areas where the number of migrants had fallen.
  • Much of the result is driven by a relatively small number of outliers, while the systematic relationship looks to be flat.
  • The number of points where the foreign-born population had increased by more than 200% was small relative to the total sample: around twenty points out of several hundred.

All these criticisms are valid. With hindsight, The Economist probably shouldn’t have published the chart and article – and I shouldn’t have tweeted it. But the discussion on Twitter got me interested in whether the geographical data can tell us anything interesting about the Leave vote.

I started by trying to reproduce the Economist’s chart. The time period they use for the change in foreign-born population is 2001-2014. This presumably means they used census data for the 2001 numbers and ONS population estimates for 2014. My attempt to reproduce the graph using these datasets is shown below. The data points are colour-coded by geographical region and the size of the data point represents the size of the foreign-born population in 2014 as a percentage of the total. (The chart is slightly different to the one I previously tweeted, which had some data problems.)

[Chart: reproduction of the Economist chart, colour-coded by region; point size shows foreign-born share of population in 2014]

Despite the problems described above, the significance of geography in the vote is clear – this is emphasised in the excellent analysis published recently by the Resolution Foundation and by Geoff Tily at the TUC (see also this in the FT and this in the Guardian).

Of the English and Welsh regions, it is clear that the Remain vote was overwhelmingly driven by London (the chart above excludes Scotland and Northern Ireland, both of which voted to Remain). Other areas which have seen substantial growth in foreign-born populations and also voted to Remain are cities such as Oxford, Cambridge, Bristol, Manchester and Liverpool.

A better way to look at this data is to plot the percentage point change in the foreign-born population share instead of the percentage increase. This prevents small initial foreign-born populations from producing large percentage increases. The result is shown below. For this, and the rest of the analysis that follows, I’ve used the ONS estimates of the foreign-born population. This reduces the period covered to 2004–2014, but excludes possible errors due to incompatibility between the census data and the ONS estimates. It also allows for inclusion of Scottish data (but not data from Northern Ireland). I’ve also flipped the X and Y axes: if we are thinking of the Leave vote as the thing we wish to explain, it makes more sense to follow convention and put it on the Y axis.
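To see why the two measures can tell very different stories, consider two hypothetical areas with identical absolute inflows. The sketch below uses invented numbers and, for simplicity, holds the total population fixed:

```python
# Two hypothetical local authorities, each with a total population of
# 100,000, gaining the same 2,000 foreign-born residents over the decade.
# (Invented figures; total population held fixed for simplicity.)

def pct_increase(old, new):
    """Percentage increase in the foreign-born population itself."""
    return 100 * (new - old) / old

def pp_change(old, new, total):
    """Percentage point change in the foreign-born share of the total."""
    return 100 * new / total - 100 * old / total

# Area A starts with 500 foreign-born residents, Area B with 10,000
print(pct_increase(500, 2500))          # 400.0 -- looks dramatic
print(pct_increase(10000, 12000))       # 20.0
print(pp_change(500, 2500, 100000))     # 2.0 percentage points
print(pp_change(10000, 12000, 100000))  # 2.0 -- the same absolute change
```

The same inflow produces a 400% increase in one area and a 20% increase in the other, while the percentage point change is identical in both.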

[Chart: Leave vote vs percentage point change in foreign-born population, ONS estimates 2004–2014]

There is no statistically significant relationship between the two variables in the chart above. The divergence between London, Scotland and the rest of the UK is clear, however. There also looks to be a positive relationship between the increase in foreign-born population and the Leave vote within London. This can be seen more clearly if the regions are plotted separately.

[Chart: Leave vote vs percentage point change in foreign-born population, by region]

The only region in which there is a statistically significant relationship in a simple regression between the two variables is London. A one percentage point increase in the foreign-born population share is associated with a 1.5 percentage point increase in the Leave vote (with an R-squared of about 0.4). The chart below shows the London data in isolation.

[Chart: Leave vote vs percentage point change in foreign-born population, London only]

The net inflow of migrants appears to have been greatest in the outer boroughs of London – and these boroughs also returned the highest Leave votes. There are a number of possible explanations for this. One is that new migrants go to where housing is affordable – which means the outer regions of London. These are also the areas where incomes are likely to be lower. There is some evidence for this, as shown in the chart below: there is a negative relationship – albeit a weak one – between the increase in the foreign-born population and the median wage in the area.

[Chart: increase in foreign-born population vs median wage, London]

Returning to the UK as a whole (excluding Northern Ireland), the Resolution Foundation finds that there is a statistically significant relationship between the percentage point increase in foreign-born population and Leave vote when the size of the foreign-born population is controlled for. This is confirmed in the following simple regression, where FB.PP.Incr is the percentage point increase in the foreign-born population and FB.Pop.Pct is the foreign-born population as a percent of the total.

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  57.19258    0.71282  80.235  < 2e-16 ***
FB.PP.Incr    0.90665    0.17060   5.314 1.87e-07 ***
FB.Pop.Pct   -0.64344    0.05984 -10.752  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 9.002 on 363 degrees of freedom
Multiple R-squared: 0.2475, Adjusted R-squared: 0.2433 
F-statistic: 59.69 on 2 and 363 DF, p-value: < 2.2e-16
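The output above comes from R; the same regression can be sketched in a few lines of ordinary least squares. The snippet below is illustrative only – the variable names mirror the output above, but the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 366  # one observation per local authority, as above

# Synthetic stand-ins for the real ONS-derived variables
fb_pp_incr = rng.uniform(0, 10, n)   # pp increase in foreign-born share
fb_pop_pct = rng.uniform(0, 40, n)   # foreign-born share of population
leave_pct = (57 + 0.9 * fb_pp_incr - 0.64 * fb_pop_pct
             + rng.normal(0, 9, n))  # Leave % built from known coefficients

# OLS: Leave % ~ intercept + FB.PP.Incr + FB.Pop.Pct
X = np.column_stack([np.ones(n), fb_pp_incr, fb_pop_pct])
beta, *_ = np.linalg.lstsq(X, leave_pct, rcond=None)
print(beta.round(2))  # estimates close to the true (57, 0.9, -0.64)
```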

It is clear that controlling for the foreign-born population is, in large part, controlling for London. This is illustrated in the chart below which shows the foreign-born population as a percentage of the total for each local authority in 2014, grouped by broad geographical region. The boxplots in the background show the mean and interquartile ranges of foreign-born population share by region. The size of the data points represents the size of the electorate in that local authority.

[Chart: foreign-born population share by local authority, 2014, grouped by region, with boxplots and point sizes showing electorate size]

This highlights a problem with the analysis so far – and for others doing regional analysis on the basis of local authority data. By taking each area as a single data point, statistical analysis misses the significance of differences in the size of electorates. This is important because it means, for example, that the Leave vote of 57% from Richmondshire, North Yorkshire, with around 27,000 votes cast, is given the same weight as the Leave vote of 57% in County Durham, with around 270,000 votes cast.

This can be overcome by constructing an index of referendum voting weighted by the size of the electorate in each area. This index is constructed so that it is equal to zero where the Leave vote was 50%, negative for areas voting Remain, and positive for areas voting Leave. The magnitude of the index represents the strength of the contribution to the overall result. Plotting this index against the percentage point change in the foreign population produces the following chart. Data point sizes represent the number of votes in each area.
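One plausible construction – scaling each area’s Leave margin by its share of all votes cast – might look as follows (an illustrative sketch, not necessarily the exact formula behind the charts):

```python
def weighted_leave_index(leave_pct, votes, total_votes):
    """Zero at a 50% Leave vote, negative for Remain-voting areas,
    positive for Leave-voting areas; magnitude scales with the area's
    share of all votes cast."""
    return (leave_pct - 50) * votes / total_votes

# Two areas with the same 57% Leave share but very different turnouts,
# in a toy 'country' of 300,000 votes (invented figures):
total = 300000
print(weighted_leave_index(57, 27000, total))   # 0.63 (Richmondshire-sized)
print(weighted_leave_index(57, 270000, total))  # 6.3  (County Durham-sized)
```

Unlike the raw percentages, the index gives the larger area ten times the weight of the smaller one, matching its contribution to the overall result.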

[Chart: weighted Leave index vs percentage point change in foreign-born population]

Again, there is no statistically significant relationship between the two variables, but as with the unweighted data, a positive relationship between the increase in the foreign-born population and Leave votes does exist once the size of the foreign-born population is controlled for.

The outliers are different to those seen in the unweighted voting data, however – particularly in areas with a strong Leave vote. This can be seen more clearly by removing the two areas with the strongest Remain votes: London and Scotland. The data for the rest of England and Wales only are shown below.

[Chart: weighted Leave index vs percentage point change in foreign-born population, England and Wales excluding London]

There is a clear split between the strong Leave outliers and the strong Remain outliers. The latter are Bristol, Brighton, Manchester, Liverpool and Cardiff. When weighted by size of vote, the previous outliers for Leave – Eastern areas such as Boston and South Holland – are replaced by towns and cities in the West Midlands and Yorkshire, and the counties of Cornwall and County Durham.

Overall, while there is a relationship between net migration inflows and Leave votes – at least when controlling for the size of the foreign-born population – it is only a small part of the story. The most compelling discussions I’ve seen of the underlying causes of the Leave vote are those which emphasise the rise in precarity and the loss of social cohesion and identity in the lives of working people, such as John Lanchester’s piece in the London Review of Books (despite the errors), the excellent follow-up piece by blogger Flip-Chart Rick, and this piece by Tony Hockley. As Geoff Tily argues, the geographical distribution of votes strongly suggests economic dissatisfaction was a key driver of the Leave vote, which pitted ‘cosmopolitan cities’ against the rest of the country. This is compatible with the pattern shown above, where the strongest Leave votes are concentrated in ex-industrial areas and the strongest Remain votes in the ‘cosmopolitan cities’.

The chart below shows the weighted Leave vote plotted against median gross weekly pay.

[Chart: weighted Leave vote vs median gross weekly pay]

Scotland as a whole is once again the outlier, while much of the relationship appears to be driven by London, where wages are higher and the majority voted Remain. Removing these two regions gives the following graph.

[Chart: weighted Leave vote vs median gross weekly pay, excluding London and Scotland]

Aside from the outlier Remain cities, there is a negative relationship between median pay and weighted Leave votes. The statistical strength of this relationship is relatively weak, however.

Putting all the variables together produces the following regression result:

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  80.98722   12.18838   6.645 1.12e-10 ***
FB.PP.Incr    2.46269    0.57072   4.315 2.06e-05 ***
FB.Pop.Pct   -1.61904    0.21781  -7.433 7.72e-13 ***
Median.Wage  -0.12539    0.02404  -5.216 3.08e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 29 on 362 degrees of freedom
Multiple R-squared: 0.2977, Adjusted R-squared: 0.2919 
F-statistic: 51.15 on 3 and 362 DF, p-value: < 2.2e-16

Leave votes are negatively associated with the size of the foreign-born population and with the median wage, and positively associated with increases in the foreign-born. The R^2 value of 0.3 suggests this model has some predictive power, but could certainly be improved.

Coefficients:
                                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)                     107.61139   13.30665   8.087 9.97e-15 ***
FB.PP.Incr                        2.92817    0.49930   5.865 1.04e-08 ***
FB.Pop.Pct                       -2.34394    0.27140  -8.636  < 2e-16 ***
Median.Wage                      -0.14360    0.02313  -6.210 1.50e-09 ***
RegionEast Midlands              -9.07601    5.44978  -1.665  0.09672 .  
RegionLondon                      9.44698    8.34896   1.132  0.25861    
RegionNorth East                 -4.11112    8.02869  -0.512  0.60893    
RegionNorth West                -16.69448    5.51048  -3.030  0.00263 ** 
RegionScotland                  -61.65217    5.76312 -10.698  < 2e-16 ***
RegionSouth East                 -4.60717    4.64123  -0.993  0.32156    
RegionSouth West                -18.73821    5.55187  -3.375  0.00082 ***
RegionWales                     -27.65673    6.53577  -4.232 2.96e-05 ***
RegionWest Midlands               4.06613    5.83469   0.697  0.48633    
RegionYorkshire and The Humber    4.72398    6.61676   0.714  0.47574    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 24 on 352 degrees of freedom
Multiple R-squared: 0.5323, Adjusted R-squared: 0.515 
F-statistic: 30.82 on 13 and 352 DF, p-value: < 2.2e-16


Adding regional dummy variables improves the fit of the model substantially – increasing the value of R^2 to around 0.5. This suggests – unsurprisingly – there are differences between regions which are not captured in the three variables included here.

Immigration brings both benefits and costs – but no reason to leave

If UK voters decide to leave the European Union, it will be for one reason above all. From the outset, nationalism bordering on xenophobia has been a defining feature of the Leave campaign. Having lost the argument on broader economic issues, it looks likely the Leave camp will fight the final month of the campaign on immigration. The scapegoating of migrants for the UK’s economic problems will become increasingly unrestrained as the referendum date approaches.

It is not difficult to understand why the Leave camp has chosen to focus on immigration: it is the issue which matters most to those likely to vote for Brexit. Fear that immigration undermines living standards and increases precarity is strong. The anti-European political right has harnessed this fear in a cynical attempt to exploit the insecurity of working class voters in the era of globalisation.

It is countered by Remain campaign statements emphasising that immigration is good for the economy: there are fiscal benefits, immigrants bring much-needed skills and –  because migrants are mostly of working age – immigration offsets the effects of an ageing population.

These claims are well-founded. But immigration has both positive and negative effects. Like other facets of globalisation, the impact of immigration is felt unevenly.

At its simplest, the pro-immigration argument is that migrants find work without displacing native workers, thus increasing the size of the economy. This argument is a valid way to dispel the ‘lump of labour’ fallacy and counter naive arguments that immigration automatically costs jobs. But it does not prove immigration is necessarily positive: an increasing population also puts pressure on housing, the environment and public services.

A stronger position is taken by those who claim that immigration increases GDP per capita – that migrants raise labour productivity. It is difficult to interpret the evidence on this, since productivity is simultaneously determined by many factors. But even those who argue that the evidence supports this position find the effect to be very weak. Positive effects on productivity are likely to be due to skilled migrants being hired as a result of the UK ‘skills gap’.

But not all – or even most – immigrants are in highly skilled work. Despite being well-educated, many come looking for whatever work they can find and are willing to work for low wages. A third of EU nationals in the UK are employed in ‘elementary and processing occupations’. What is the effect of an increasing pool of cheap labour looking for low-skilled work? The evidence suggests there is little effect on employment rates over the long run. There may, however, be displacement effects in the short run. In particular, when the labour market is slack – during recessions – the job prospects of low-paid and unskilled workers may be damaged by migrant inflows.

The evidence on wages likewise suggests effects are small, but again there appears to be some impact of immigration on the wages of low-skilled workers. There is also evidence of labour market segmentation: migrants are disproportionately represented in the seasonal, temporary and ‘flexible’ (i.e. precarious) workforce.

Further, much of the evidence on employment and wages comes from a period of high growth and strong economic performance. This may not be a reliable guide to the future. It is possible that more significant negative effects could emerge, particularly if the economy remains weak.

Economists on the Remain side downplay the negative effects of immigration, presenting it as unequivocally good for the UK economy. It is undoubtedly difficult to present a nuanced argument in the short space available for a media sound-bite. But it is possible that the line taken by the Remain camp plays into the hands of the Leave campaign.

Aside from the skills they bring – around a quarter of NHS doctors are foreign nationals – the main benefit of immigration is the effect on demographics. Without inward migration, the UK working age population would have already peaked. But ageing cannot be postponed indefinitely.

Rapid population growth leads to pressures on public services, housing and infrastructure unless there are on-going programmes of investment, upgrading of infrastructure and house building. Careful planning is required to ensure that public services are available before migrants arrive – otherwise there will be a period while services are under pressure before more capacity is added.

Long-run investment in public services, infrastructure and housing is exactly what the UK has not been doing. Instead, we are more than five years into an unnecessary austerity programme. Our infrastructure is ageing and suffers from lack of capacity. Wages have yet to recover to pre-crisis levels. Government services continue to be cut, even as the population increases.

Those who face pressure on their standard of life from weak wage growth and rising housing costs will understandably find it difficult to disentangle the causes of their problems. For many, immigration will not be the reason – but it will be more visible and tangible than austerity, lack of aggregate demand and weak labour bargaining power.

The root of the problem is that the UK is increasingly a low-wage, low-skill economy. There is a shortage of affordable housing and public services are facing the deepest cuts in decades. None of these problems would be solved by the reorganised Conservative government that would take power immediately following a vote to leave the EU. Instead, it is clear that much of the Leave camp favours a Thatcherite programme of further cuts and deregulation.

Campaigners for Leave will continue to use immigration as a way to take Britain out of the EU. They are wrong. This is cynical exploitation of genuine problems and fears faced by many low-wage workers.  Immigration is not a reason to leave the European Union.

But the status quo of high immigration alongside cuts to public services and wage stagnation cannot continue indefinitely. If high levels of migration are to continue, as looks likely, the UK government must consider how to accommodate the rapidly increasing population. Government services must keep pace with population increases. Pressures will be particularly acute in London and the South East.

We must also be more open in admitting that immigration has both costs and benefits – it does not affect the population evenly. Liberal commentators should acknowledge the concerns of those facing the negative effects of immigration. In doing so, they may lessen the chances that voters fall for the false promises of the Leave campaign.

 

This article is part of the EREP report on the EU referendum, ‘Remain for Change’. The authors of the report are:

John Weeks, Professor Emeritus of Development Economics, SOAS
Ann Pettifor, Director of Policy Research in Macroeconomics
Özlem Onaran, Professor of Economics, Director of Greenwich Political Economy Research Centre
Jo Michell, Senior Lecturer in Economics, University of the West of England
Howard Reed, Director of Landman Economics
Andrew Simms, Co-founder of the New Weather Institute, Fellow of the New Economics Foundation
John Grahl, Professor of European Integration, Middlesex University
Engelbert Stockhammer, Professor, School of Economics, Politics and History, Kingston University
Giovanni Cozzi, Senior Lecturer in Economics, Greenwich Political Economy Research Centre
Jeremy Smith, Co-director of Policy Research in Macroeconomics, Convenor of EREP

 

 

Economics: science or politics? A reply to Kay and Romer

Paul Romer’s article on ‘mathiness’ triggered a debate in the economics blogs last year. I didn’t pay a great deal of attention at the time; that economists were using relatively trivial yet abstruse mathematics to disguise their political leanings didn’t seem a particularly penetrating insight.

Later in the year, I read a comment piece by John Kay on the same subject in the Financial Times. Kay’s article, published under the headline ‘Economists should keep to the facts, not feelings’, was sufficiently cavalier with the facts that I felt compelled to respond. I was not the only one – Geoff Harcourt wrote a letter supporting my defence of Joan Robinson and correcting Kay’s inaccurate description of her as a Marxist.

After writing the letter, I found myself wondering why a serious writer like Kay would publish such carelessly inaccurate statements. Following a suggestion from Matheus Grasselli, I turned to Romer’s original paper:

Economists usually stick to science. Robert Solow was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson was engaged in academic politics when she waged her campaign against capital and the aggregate production function …

Solow’s mathematical theory of growth mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly. The tight connection between the word and the equations gave the word a precise meaning that facilitated equally tight connections between theoretical and empirical claims. Gary Becker’s mathematical theory of wages gave the words ‘human capital’ the same precision …

Once again, the facts appear to have fallen by the wayside. The issue at the heart of the debates involving Joan Robinson, Robert Solow and others is whether it is valid to represent a complex macroeconomic system (such as a country) with a single ‘aggregate’ production function. Solow had been working on the assumption that the macroeconomic system could be represented by the same microeconomic mathematical function used to model individual firms. In particular, Solow and his neoclassical colleagues assumed that a key property of the microeconomic version – that labour will be smoothly substituted for capital as the rate of interest rises – would also hold at the aggregate level. It would then be reasonable to produce simple macroeconomic models by assuming a single production function for the whole economy, as Solow did in his famous growth model.

Joan Robinson and her UK Cambridge colleagues showed this was not true. They demonstrated cases (capital reversing and reswitching) which contradicted the neoclassical conclusions about the relationship between the choice of technique and the rate of interest. One may accept the assumption that individual firms can be represented as neoclassical production functions, but concluding that the economy can then also be represented by such a function is a logical error.
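Reswitching can be illustrated with the standard numerical example in the spirit of Samuelson’s 1966 ‘summing up’ paper (the numbers below are the usual textbook ones, reproduced purely for illustration). Technique A applies 7 units of labour two periods before output; technique B applies 2 units three periods before and 6 units one period before. Comparing costs compounded at interest rate r:

```python
def cost_a(r):
    # Technique A: 7 units of labour, 2 periods before output
    return 7 * (1 + r) ** 2

def cost_b(r):
    # Technique B: 2 units 3 periods before, 6 units 1 period before
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

# A is cheapest at low rates, B in an intermediate range, then A again:
print(cost_a(0.25) < cost_b(0.25))  # True: A chosen at r = 25%
print(cost_a(0.75) > cost_b(0.75))  # True: B chosen at r = 75%
print(cost_a(1.25) < cost_b(1.25))  # True: A returns at r = 125%
```

Technique A is adopted, abandoned, and then re-adopted as the interest rate rises (the switch points are at r = 50% and r = 100%), contradicting the neoclassical story of smooth, monotonic substitution between labour and capital.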

One important reason is that the capital goods which enter production functions as inputs are not identical, but instead have specific properties. These differences make it all but impossible to find a way to measure the ‘size’ of any collection of capital goods. Further, in Solow’s model, the distinction between capital goods and consumption goods is entirely dissolved – the production function simply generates ‘output’ which may either be consumed or accumulated. What Robinson demonstrated was that it was impossible to accurately measure capital independently of prices and income distribution. But since, in an aggregate production function, income distribution is determined by marginal productivity – which in turn depends on quantities – it is impossible to avoid arguing in a circle. Romer’s assertion of a ‘tight connection between the word and the equations’ is a straightforward misrepresentation of the facts.

The assertion of ‘equally tight connections between theoretical and empirical claims’ is likewise misplaced. As Anwar Shaikh showed in 1974, it is straightforward to demonstrate that Solow’s ‘evidence’ for the aggregate production function is no such thing. In fact, what Solow and others were testing turned out to be national accounting identities. Shaikh demonstrated that, as long as labour and capital shares are roughly constant – the ‘Kaldor facts’ – then any structure of production will produce empirical results consistent with an aggregate Cobb-Douglas production function. The aggregate production function is therefore ‘not even wrong: it is not a behavioral relationship capable of being statistically refuted’.
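A toy simulation makes Shaikh's point concrete (the construction is my own illustration, not his data): generate arbitrary trending series for output, capital and labour, impose nothing except the national accounting identity Y = wL + rK with constant factor shares, and a Cobb-Douglas regression with a time trend will still fit almost perfectly – despite there being no production function anywhere in the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 40
alpha = 0.3                      # a constant capital share: the 'Kaldor fact'
t = np.arange(T, dtype=float)

# Three arbitrary trending series -- no production function is used anywhere.
Y = 100 * 1.025**t * np.exp(rng.normal(0, 0.02, T))
K = 180 * 1.030**t * np.exp(rng.normal(0, 0.02, T))
L = 100 * 1.010**t * np.exp(rng.normal(0, 0.02, T))

# Impose only the accounting identity Y = wL + rK with constant shares:
w = (1 - alpha) * Y / L          # wage consistent with a constant labour share
r = alpha * Y / K                # profit rate consistent with a constant capital share
assert np.allclose(w * L + r * K, Y)   # the identity holds exactly

# Estimate a Cobb-Douglas 'production function' with a time trend:
#   ln Y = c + g*t + a*ln K + b*ln L
X = np.column_stack([np.ones(T), t, np.log(K), np.log(L)])
beta, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
resid = np.log(Y) - X @ beta
r2 = 1.0 - resid.var() / np.log(Y).var()
print(f"R^2 of the fitted 'production function': {r2:.3f}")   # close to 1
```

The near-perfect fit reflects the accounting identity plus steady growth, not any underlying technology – which is exactly why Shaikh concluded that such regressions are incapable of refuting the aggregate production function.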

As I noted in my letter to the FT, Robinson’s neoclassical opponents conceded the argument on capital reversing and reswitching: Kay’s assertion that Solow ‘won easily’ is inaccurate. In purely logical terms Robinson was the victor, as Samuelson acknowledged when he wrote, ‘If all this causes headaches for those nostalgic for the parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life.’

What matters, as Geoff Harcourt correctly points out, is that the conceptual implications of the debates remain unresolved. Neoclassical authors, such as Cohen and Harcourt’s co-editor Christopher Bliss, argue that the logical results, while correct in themselves, do not undermine marginalist theory to the extent claimed by (some) critics. In particular, Bliss argues, the focus on capital aggregation is mistaken. One may instead, for example, drop Solow’s assumption that capital goods and consumer goods are interchangeable: ‘Allowing capital to be different from other output, particularly consumption, alters conclusions radically.’ (p. xviii). Developing models on the basis of disaggregated optimising agents will likewise produce very different, and less deterministic, results.

But Bliss also notes that this wasn’t the direction that macroeconomics chose. Instead, ‘Interest has shifted from general equilibrium style (high-dimension) models to simple, mainly one-good models … the representative agent is now usually the model’s driver.’ Solow himself characterised this trend as ‘dumb and dumber in macroeconomics’. As the great David Laidler – like Robinson, no Marxist – observes, the now unquestioned use of representative agents and aggregate production functions means that ‘largely undiscussed problems of capital theory still plague much modern macroeconomics’.

It should by now be clear that the claim of ‘mathiness’ is a bizarre one to level against Joan Robinson: she won a theoretical debate at the level of pure logic, even if the broader implications remain controversial. Why then does Paul Romer single her out as the villain of the piece? – ‘Where would we be now if Solow’s math had been swamped by Joan Robinson’s mathiness?’

One can only speculate, but it may be no coincidence that Romer has spent his career constructing models based on aggregate production functions – the so-called ‘neoclassical endogenous growth models’ that Ed Balls once claimed to be so enamoured with. Romer has repeatedly been tipped for the Nobel Prize, despite the fact that his work doesn’t appear to explain very much about the real world. In Krugman’s words, ‘too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.’ So much for those tight connections between theoretical and empirical claims.

So where does this leave macroeconomics? Bliss is correct that the results of the Controversy do not undermine the standard toolkit of methodological individualism: marginalism, optimisation and equilibrium. Robinson and her colleagues demonstrated that one specific tool in the box – the aggregate production function – suffers from deep internal logical flaws. But the Controversy is only one example of the tensions generated when one insists on modelling social structures as the outcome of adversarial interactions between individuals. Other examples include the Sonnenschein-Mantel-Debreu results and Arrow’s Impossibility Theorem.

As Ben Fine has pointed out, there are well-established results from the philosophy of mathematics and science that suggest deep problems for those who insist on methodological individualism as the only way to understand social structures. Trying to conceptualise a phenomenon such as money on the basis of aggregation over self-interested individuals is a dead end. But economists are not interested in philosophy or methodology. They no longer even enter into debates on the subject – instead, the laziest dismissals suffice.

But where does methodological individualism stop? What about language, for example? Can this be explained as a way for self-interested individuals to overcome transaction costs? The result of this myopia, Fine argues, is that economists ‘work with notions of mathematics and science that have been rejected by mathematicians and scientists themselves for a hundred years and more.’

This brings us back to ‘mathiness’. DeLong characterises this as ‘restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.’ What is very rarely discussed, however, is the insistence that microfounded models are the only acceptable form of economic theory. But the New Classical revolution in economics, which ushered in the era of microfounded macroeconomics, was itself a political project. As its leading light, Nobel-prize winner Robert Lucas, put it, ‘If these developments succeed, the term “macroeconomic” will simply disappear from use and the modifier “micro” will become superfluous.’ The statement is not greatly different in intent and meaning from Thatcher’s famous claim that ‘there is no such thing as society’. Lucas never tried particularly hard to hide his political leanings: in 2004 he declared, ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ (He also declared, five years before the crisis of 2008, that the ‘central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.’)

As a result of Lucas’ revolution, the academic economics profession purged those who dared to argue that some economic phenomena cannot be explained by competition between selfish individuals. Abstract microfounded theory replaced empirically-based macroeconomic models, despite generating results which are of little relevance for real-world policy-making. As Simon Wren-Lewis puts it, ‘students are taught that [non-microfounded] methods of analysing the economy are fatally flawed, and that simulating DSGE models is the only proper way of doing policy analysis. This is simply wrong.’

I leave the reader to decide where the line between science and politics should be drawn.

2015: Private Debt and the UK Housing Market

This report is taken from the EREP’s Review of the UK Economy in 2015.

In his 2015 Autumn Statement, Chancellor George Osborne gave a bravura performance. He congratulated himself on record growth and employment, falling public debt, surging business investment and a narrowing trade deficit. He announced projections of continuous growth and falling public debt over the next parliament.

While much of this was a straightforward misrepresentation of the facts – capital investment has yet to recover from the 2008 crisis and the current account deficit continues to widen – other sound bites came courtesy of the Office for Budget Responsibility. The OBR delivered the Chancellor an early Christmas present in the form of a set of revised projections showing better-than-expected public finances over the next five years.

When, previously, the OBR inconveniently delivered negative revisions, the Chancellor responded by pushing back the date he claims he will achieve a budget surplus. In response to the OBR’s gift, however, he chose instead to spend the windfall. This is a risky strategy because any negative shock to the economy means he will miss his current fiscal targets – targets he has already missed repeatedly since coming to office.

As it turns out, these negative shocks have materialised rather quickly. Since the Chancellor made his statement a month ago, UK GDP growth has been revised down, the trade deficit has widened and estimates of borrowing for the current year have increased.

[Figure: OBR current account forecasts]

In reality, the OBR projections never looked plausible. The UK’s current account deficit – the amount borrowed each year from the rest of the world – is at an all-time high of around 5% of GDP. Every six months for the last three years, the OBR forecast that the deficit would start to close within a year; every time they were proved wrong (see figure above). Their current assertion – that the trend will be broken in 2016 and the deficit will steadily narrow to around 2% of GDP in 2020 – must be taken with a large pinch of salt.

The current account deficit measures the combined overseas borrowing of the UK public and private sectors. In the unlikely event that George Osborne was to achieve his stated aim of a budget surplus, the whole of this foreign borrowing would be accounted for by the private sector. This is exactly what the OBR is projecting. Specifically, they predict that the household sector will run a deficit of around 2% per year for the next five years. They note that “this persistent and relatively large household deficit would be unprecedented”.

This projection has been the basis of recent stories in the press which have declared that the Chancellor has set the economy on a path to almost-certain financial meltdown within the current parliament. This is too simplistic an analysis. Financial imbalances can persist for a long time. The last UK financial crisis originated not in the UK lending markets but in UK banks’ exposure to overseas lending.

But the Chancellor’s strategy entails serious financial risks. Even though there is no real chance of achieving a surplus by 2020, further cuts to government spending will squeeze spending out of the economy, placing ever more of the burden on household consumption spending to maintain growth.

The figure below shows the annual growth in lending to households. While total credit growth remains subdued, unsecured lending has, in the words of Andy Haldane, chief economist at the Bank of England, been “picking up at a rate of knots”.

[Figure: annual growth in lending to households]

Moderate growth in the mortgage market may conceal deeper problems: household debt-to-income ratios have fallen since the crisis but, at around 140%, remain high both in historical terms and compared to other advanced nations. The majority of new mortgage lending since 2008 has been extended to buy-to-let landlords. These speculative buyers now face the prospect of rising interest rates and tax changes that will take a large chunk out of their property income. Many non-buy-to-let borrowers are badly exposed: a sixth of mortgage debt is held by those who have less than £200 a month left after spending on essentials.

The Financial Policy Committee has noted that these trends “… could pose direct risks to the resilience of the UK banking system, and indirect risks via its impact on economic stability”.

What is often left out of the more apocalyptic visions of a coming credit meltdown is that underlying all this is an unprecedented housing crisis in which an entire generation are locked out of home ownership. Instead of tackling this crisis, Osborne is using the housing market as a casino in the hope of keeping economic growth on track during another five years of austerity. It is a high-risk strategy. His luck may soon run out.

The report’s authors include:

John Weeks on fiscal policy

Ann Pettifor on monetary policy

Richard Murphy on taxation

Özlem Onaran on inequality and wage stagnation

Jeremy Smith on labour productivity

Andrew Simms on climate change and energy

Jo Michell on private debt

The full report can be downloaded here.

Information on EREP is available here.

Happy Christmas from the Office for Budget Responsibility

Image reproduced from here

The sectoral balances approach to economic forecasting has come under scrutiny recently. It is certainly the case that when used carelessly, projections based on accounting identities have the potential to be either meaningless or misleading. This will be the case if accounting identities are mistakenly taken to imply causal relationships, if projections are presented without a clear statement of the assumptions about what drives the system or if changes taking place in ‘invisible’ variables such as the rate of growth of GDP are not identified (balances are usually presented as percentages of GDP).

Used with care, however (or luck, depending on your perspective), the approach is not without its merits – as I have argued previously. If nothing else, the fact that lending must equal borrowing in a closed macroeconomic system imposes logical restrictions on the projections that can be made about the future paths of borrowing.

Which brings us to the Chancellor’s Autumn Statement and the OBR’s rather helpful projections. As Duncan Weldon notes, the OBR are likely to receive a rather warmly written card from the Chancellor’s office this Christmas. While it is true that the OBR have, in the past, been less than helpful to the Chancellor, one can’t help but wonder about the justification for announcing the OBR projections at the same time as the Chancellor’s statements. Why are the OBR projections not made known to the public at the same time that they are made available to the Chancellor?

But back to sectoral balances. The model used by the OBR produces projections which comply with sectoral balance accounting identities. Four are used: those of the public sector, the household sector, the corporate sector and the rest of the world. The most closely watched is of course the public sector balance. The headline result of the OBR forecasts is that the public sector will run a surplus by 2019. What has so far received less attention (at least since Frances Coppola examined the projections from the March 2015 OBR forecasts) is the implication of this for the other three balances. The most recent OBR projections are shown below.

[Figure: OBR sectoral balance projections, November 2015]

Since the government is projected to run a small surplus from mid-2019, the other three sectors must collectively run a deficit of equal size. The OBR projects that the current account deficit will fall from its current level of around five per cent of GDP to around two per cent of GDP. It follows that the UK private sector as a whole must be in deficit. Interesting details lie in both the distribution of this deficit between the household and corporate sectors, and in the changes in the figures since the last OBR reports in March and July.

In order to show how the numbers have changed since the previous forecasts, I have collected the data series from all three releases into individual charts.

The OBR series from these three releases for the public sector financial balance are shown below. Other than postponing the date at which the government achieves a surplus (and some revisions to the historical data) there is little difference between the three releases.

[Figure: public sector financial balance]

Changes to the projections for the current account deficit are more significant. The latest projections include improvements in the projected deficit of between 0.5% and 1% of GDP, compared with the July predictions. With the current account deficit at record levels in excess of 5% of GDP, I think it is fair to say the projections look optimistic. I note that in each of the three OBR series, the deficit starts to close in the first projected quarter. Put another way, the inflection point has been postponed three times out of three.

[Figure: rest of world financial balance (current account)]

Things start to get interesting when we turn to the corporate sector. Here the projections have changed rather more significantly. Whereas the previous two data series showed the corporate sector reversing its decade-long surplus in 2014 and finally returning to where many think the corporate sector should be – borrowing to invest – the new series contains significant revisions to the historical data. As it turns out, the corporate sector has remained in surplus, lending one per cent of GDP in Q2 2015. The corporate sector is not now projected to return to deficit until Q3 2018.

[Figure: corporate sector financial balance]

Since the net financial balance for any sector is the difference between ex post saving – profits in the case of the corporate sector – and investment, these revisions imply either falling corporate investment, rising profits, or both.

The data series for corporate investment are shown below. The historical data have been revised down significantly. Investment in Q2 2015 is 1% of GDP lower than previously recorded. (This is hard to square with Osborne’s statement that ‘business investment has grown more than twice as fast as consumption’.) The reduction compared to previous forecasts widens in the projection out to 2020. Nonetheless, it is hard to escape the conclusion that the projections are extremely optimistic. By 2020, business investment is expected to reach twelve per cent of GDP, higher than any year back to 1980.

[Figure: business investment]

What of business profits? These are shown in the table below, taken from the OBR report. It seems that corporate profit grew at 10% year-on-year in 2014-15, despite GDP growth of around 2.5%. While projected growth rates decline, corporate profit is expected to grow at over 4% annually in every year of the projection out to 2021 (in a context of steady 2.5% GDP growth). There is not much sign of Goodhart-Nangle in these projections.

[Table: OBR corporate profit projections]

So, to recap: by 2020 we have the government running a surplus of just under 1% of GDP, a current account deficit of 2% of GDP and a corporate sector deficit of around 1% of GDP. Those with a facility for mental arithmetic will already have arrived at the punchline – the household sector will be running a deficit of around 2% of GDP. In fact, given data revisions, the household sector appears already to be running a deficit close to 2% of GDP – a deficit which is projected to remain until 2021 (see figure below).
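The mental arithmetic is nothing more than the sectoral balances identity itself. As a sketch, using the approximate 2020 figures quoted above (all balances in per cent of GDP; in a closed accounting system they must sum to zero):

```python
# Sectoral balances sum to zero:
#   household + corporate + government + rest_of_world = 0
# Figures are the approximate 2020 OBR projections discussed in the text (% of GDP).
government = 1.0       # public sector surplus of just under 1% of GDP
rest_of_world = 2.0    # a UK current account deficit of 2% is a RoW surplus vs the UK
corporate = -1.0       # corporate sector deficit of around 1% of GDP

# The household balance is whatever is left over:
household = -(government + rest_of_world + corporate)
print(f"implied household balance: {household}% of GDP")  # -2.0
```

No behavioural assumptions are involved: given the other three balances, the household deficit of around 2% of GDP follows as a matter of accounting.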

[Figure: household sector financial balance]

As a comparison, note that in the period preceding the 2008 crisis, the household sector ran a deficit of not much over 1% of GDP, and for a shorter period than currently projected.

The OBR has this to say on its projections:

Recent data revisions have increased the size of the household deficit in 2014 and we expect little change in the household net position over the forecast period, with gradual increases in household saving offset by ongoing growth of household investment. Available historical data suggest that this persistent and relatively large household deficit would be unprecedented. This may be consistent with the unprecedented scale of the ongoing fiscal consolidation and market expectations for monetary policy to remain extremely accommodative over the next five years, but it also illustrates how the adjustment to fiscal consolidation assumed in our central forecast is subject to considerable uncertainty.  (p. 81)

Perhaps there is something to the sectoral balances approach after all. One can only wonder what Godley would make of all this.

Jo Michell

Corbyn and the Peoples’ Bank of England

Jeremy Corbyn’s proposal for ‘Peoples’ Quantitative Easing’ – public investment paid for using money printed by the Bank of England – has provoked criticism, including an intervention by Labour’s shadow Chancellor Chris Leslie. It seems the anti-Corbyn wing of the Labour party has finally decided to engage with Corbyn’s policy agenda after several weeks of simply dismissing him out of hand.

Critics of the plan make two main points: that the policy will be inflationary and that it dissolves the boundary between fiscal policy and monetary policy. It would therefore, they claim, fatally undermine the independence of the Bank of England.

The first point is inevitably followed by the observation that inflation and the policy response to inflation – interest rate hikes and recession – hurts the poor. As ever, the first line of attack on economic policies proposed by the left is to claim they will hurt the very people they aim to help. Leslie falls back on the old trope that the state must ‘live within its means’. It is well-known that this government-as-household analogy is nonsense. But what of the monetary argument?

Inflation is not caused by printing money per se. It is instead the result of a combination of factors: wage increases, supply not keeping pace with demand, and shortages of commodities, many of which are imported.

By these measures, inflationary pressure is currently low – official CPI is around zero. Since this measure tends to over-estimate true inflation, the UK is probably in deflation. There is finally evidence of rising wages – but this comes after both a sharp drop in wages due to the financial crisis and an extended period in which wages have grown at a slower rate than output. The pound is strong, reducing price pressure from imports.

More importantly, the purpose of investment is to increase productive capacity and raise labour productivity. Discussion of monetary policy usually revolves around the ‘output gap’ – the difference between the demand for goods and services and the potential supply. Putting to one side the problems with this immeasurable metric, the point is that investment spending increases potential output as well as stimulating demand, so the medium-run effect on the output gap cannot be determined a priori.

The issue of central bank independence is more subtle – certainly more subtle than the binary choice presented by Corbyn’s critics. That central banks should be free from the malign influence of democratically elected policy-makers has been an article of faith since 1997 when the Labour government granted the Bank of England operational independence. But, as Frances Coppola has argued, central bank independence is an illusion. The Bank’s mandate and inflation target are set by the government. In extremis, the government can choose to revoke ‘independence’.

More relevant to the current debate is the fact that the post-crisis period has already seen significant blurring of the distinction between monetary and fiscal policy. In using its balance sheet to purchase £375bn of securities – mostly government bonds – the Bank of England has, to all intents and purposes, funded the government deficit. The assertion that the barrier is maintained by allowing debt to be purchased only in the secondary market is sleight of hand: while the government was selling new bonds to private financial institutions, the Bank was simultaneously buying previously issued government bonds from much the same financial institutions.

At this point, critics will object that the Bank was operating within its mandate: QE was enacted in an attempt to hit the inflation target. This is most likely true, although during the inflation spike in 2011, there were suggestions that the Bank was deliberately under-forecasting inflation in order to be able to run looser policy; as it turned out, the Bank’s forecasts over-estimated inflation.

None of this alters the fact that quantitative easing both increases the ability of the government to finance deficit spending and has distributional consequences; QE reduced the interest rate on government bonds while increasing the wealth of the already wealthy. Crucially, there won’t be a return to ‘conventional’ monetary policy any time soon. At a panel discussion at the FT’s Alphaville conference on ‘Central Banking After the Crisis’ featuring George Magnus and Claudio Borio among others, there was consensus that we have entered a new era in which the distinction between monetary and fiscal policy holds little relevance; there will be no return to the ‘haven of familiar monetary practice’ in which steering of short-term interest rates is the primary mechanism of macroeconomic control.

The issue which has triggered this debate is the long-term decline in UK capital expenditure – both public and private. An increase in investment is desperately needed. Corbyn isn’t the first to suggest ‘QE for the people’ – a number of respectable economic commentators have recently called for such measures in letters to the Financial Times and Guardian. Martin Wolf, chief economics commentator at the FT, recently argued that ‘the case for using the state’s power to create credit and money in support of public spending is strong’. Former Chairman of the Financial Services Authority, Adair Turner, has made similar proposals.

I agree, however, with the view that it makes more sense to fund public investment the old-fashioned way – using bonds issued by the Treasury. Where I disagree with Corbyn’s critics is on the sanctity of ‘independent’ monetary policy; the Bank should stand ready to ensure that these bonds can be issued at an affordable rate of interest.

Why has Corbyn – supposedly a throwback to the 1980s – proposed this new-fangled monetary mechanism? Rather than some sort of populist gesture, I suspect this reflects a status quo which has elevated the status of monetary policy while downgrading fiscal policy. This, in turn, reflects the belief that the government can’t be trusted to make decisions about the direction of the economy; only the private sector has the correct incentive structures in place to guide us to an optimal equilibrium. Monetary policy is the macroeconomic tool of choice because it respects the primacy of the market.

Given that the boundary between fiscal and monetary policy has broken down at least semi-permanently, that status quo no longer holds. It is now time for a serious discussion about the correct approach to macroeconomic stabilisation, the state’s role in directing and financing investment and the distributional implications of monetary policy. It is to Corbyn’s credit that these issues are at last being debated.

What if Reinhart and Rogoff had adopted a more Keynesian perspective?

Illustration by Ingram Pinn (Financial Times)

In two very influential papers, Reinhart and Rogoff (2010) and Reinhart et al. (2012) investigated the relationship between public debt and economic growth. By classifying the annual observations of their data set into public debt categories (low debt, medium debt, high debt, very high debt) and identifying public debt overhang episodes, they indicated that higher public debt-to-GDP ratios are related to lower economic growth. They also emphasised that this relationship is non-linear: although the debt-to-growth correlation is weak below the 90 per cent debt-to-GDP threshold, it becomes much stronger above it. As is well-known, these results were used by many policy makers in support of the austerity policies implemented over recent years in various countries.

In their popular critique, Herndon et al. (2013, 2014) called the results of Reinhart and Rogoff into question. They pointed out three problems: (i) coding errors; (ii) selective exclusion of available data; and (iii) inappropriate weighting of summary statistics. They showed that when these problems are tackled, economic growth does not reduce dramatically when the public debt-to-GDP ratio passes the 90 per cent threshold. Reinhart and Rogoff (2013) responded by acknowledging the coding errors in their estimations; however, they denied that their weighting method was inappropriate or that they had selectively excluded data. They themselves presented some corrected estimations according to which the negative relationship between growth and debt remains, but ceases to become stronger above the 90 per cent threshold.

An interesting perspective on this debate is that the whole discussion about the relationship between public debt and economic growth would have been completely different if Reinhart and Rogoff had decided to focus on the adverse effects of low growth on public indebtedness rather than on the adverse effects of high public indebtedness on growth; in other words, if they had analysed their data set from a more Keynesian perspective that emphasises the role of automatic stabilisers and the direct favourable impact of higher GDP on the debt-to-GDP ratio. In a note that I recently published (Dafermos, 2015), I show what their results would be in that case. Using the same descriptive statistics techniques that Reinhart and Rogoff utilised in their papers, I classify the annual observations of their data set into economic growth categories (low growth, medium growth, high growth, very high growth) and show that the public debt-to-GDP ratio increases as economic growth declines. I also identify low growth episodes and show that in most countries these episodes are characterised by higher public indebtedness. Therefore, if Reinhart and Rogoff had decided to present their data in this way, the main implication of their analysis would have been that policy makers need to adopt growth policies in order to avoid high public indebtedness – not that they need to focus on the reduction of public debt in order to avoid low growth.

Of course, Reinhart and Rogoff are careful about this issue: they clearly state that their analysis does not capture causality. However, by classifying their data set into public debt categories and identifying debt overhang episodes, they unavoidably concentrated on the growth-reducing effects of high debt, relegating the debt-increasing effects of low growth to the sidelines. Had they adopted a more Keynesian perspective, they could instead have focused on the debt-increasing effects of low growth. In that case, their conclusions, which informed the policy debate, would have been completely different.
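The point that the direction of the 'cut' matters can be illustrated with invented data (the numbers below are mine, purely for illustration, and have nothing to do with the Reinhart-Rogoff data set). In this toy panel, causality runs only from growth to debt: low growth mechanically raises the debt ratio, and debt has no effect on growth whatsoever. Yet a Reinhart-Rogoff-style table of mean growth by debt category still shows growth falling as debt rises:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Toy panel in which, BY CONSTRUCTION, causality runs only from growth to debt:
# low growth raises the debt ratio (automatic stabilisers); debt never affects growth.
growth = rng.normal(2.5, 2.0, n)                  # annual GDP growth, per cent
debt = 60 - 8 * growth + rng.normal(0, 15, n)     # public debt-to-GDP ratio, per cent

# A Reinhart-Rogoff-style summary: mean growth within public debt categories.
edges = [-np.inf, 30, 60, 90, np.inf]
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (debt > lo) & (debt <= hi)
    print(f"debt in ({lo}, {hi}]: mean growth = {growth[mask].mean():.2f}")
```

The table shows exactly the pattern Reinhart and Rogoff reported – lower average growth in higher debt categories – even though, here, high debt causes nothing at all. Cutting the same data by growth category instead would (correctly) suggest that low growth is associated with high subsequent indebtedness.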

It is also notable that the econometric research that followed the publication of their papers was substantially shaped by Reinhart and Rogoff's decision to focus on the growth-reducing effects of high public debt: most researchers have paid attention to the adverse effects of high debt on growth and not the other way round. Interestingly, the literature has not so far provided strong support for causality running from public debt to economic growth (see footnote 1 in my note). This implies that empirical research needs to investigate the debt-increasing effects of low growth in greater depth – as would probably have happened if Reinhart and Rogoff had analysed their dataset from a more Keynesian perspective, or had explicitly presented both ‘halves’ of the public debt-economic growth relationship.

Yannis Dafermos

Models, maths and macro: A defence of Godley

To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are, however, dismissed because—in a nutshell—there is nothing better out there.

This argument is deficient in two respects. First, there is a self-evident flaw in the belief that a tool should not be abandoned—despite overwhelming and damning evidence that it is faulty, and dangerously so—simply because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—undermined rather than supported his point, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

In response to such a demand, I conceded some ground by noting that the sectoral balances approach, most closely associated with the work of Wynne Godley, is one example of mathematical formalism in heterodox economics. I highlighted Godley’s famous 1999 paper in which, on the basis of simulations from a formal macro model, he produces a remarkably prescient prediction of the 2008 financial crisis:

…Moreover, if, per impossibile, the growth in net lending and the growth in money supply growth were to continue for another eight years, the implied indebtedness of the private sector would then be so extremely large that a sensational day of reckoning could then be at hand.

This prediction was based on simulations of the private sector debt-to-income ratio in a system of equations constructed around the well-known identity that the financial balances of the private, public and foreign sector must sum to zero. Godley’s assertion was that, at some point, the growth of private sector debt relative to income must come to an end, triggering a deflationary deleveraging cycle—and so it turned out.
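To make the accounting logic concrete, the mechanism can be sketched in a few lines of code. This is a purely illustrative toy, not Godley’s actual model (which contained many more equations and behavioural relationships), and all parameter values are invented: it only shows how a sustained private-sector deficit, financed by new borrowing, mechanically drives the debt-to-income ratio upward whenever the deficit outpaces income growth.

```python
# Toy sketch of the accounting behind Godley's warning (NOT his model).
# By identity, private + public + foreign financial balances sum to zero;
# here we simply assume the private sector runs a persistent deficit
# (negative net lending) of 5% of income, financed by new debt.

def simulate(years=8, income=100.0, debt=50.0,
             income_growth=0.03, private_deficit_share=0.05):
    """Return the private debt-to-income ratio for each simulated year."""
    ratios = []
    for _ in range(years):
        debt += private_deficit_share * income  # deficit adds to debt stock
        income *= 1 + income_growth             # income grows more slowly
        ratios.append(debt / income)
    return ratios

ratios = simulate()
```

Under these hypothetical numbers the ratio rises every year without bound towards an implausibly high level—Godley’s point being that such a trajectory must break down at some point, forcing a deleveraging.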

Despite these predictions being generated on the basis of a fully-specified mathematical model, they are dismissed by Smith as “chartblogging” (see the quote above). If “chartblogging” refers to constructing an argument by highlighting trends in graphical representations of macroeconomic data, this seems an entirely admissible approach to macroeconomic analysis. Academics and policy-makers in the 2000s could certainly have done worse than to examine a chart of the household debt-to-income ratio. This would undoubtedly have proved more instructive than adding another mathematical trill to one of the polynomials of their beloved DSGE models—models, it must be emphasised, once again, in which money, banks and debt are, at best, an afterthought.

But the “chartblogging” slur is not even half-way accurate. The macroeconomic model used by Godley grew out of research at the Cambridge Economic Policy Group in the 1970s when Godley and his colleagues Francis Cripps and Nicholas Kaldor were advisors to the Treasury. It is essentially an old-style macroeconometric model combined with financial and monetary stock-flow accounting. The stock-flow modelling methodology has subsequently been developed in a number of directions and detailed expositions are to be found in a wide range of publications, including the well-known textbook by Godley and Lavoie—a book which surely contains enough equations to satisfy even Smith. Other well-known macroeconometric models include the model used by the UK Office for Budget Responsibility, the Fair model in the US, and MOSES in Scandinavia, alongside similar models in Norway and Denmark. Closer in spirit to DSGE are the NIESR model and the IMF quarterly forecasting model. On the other hand, there is the CVAR method of Johansen and Juselius and similar approaches of Pesaran et al. These are only a selection of examples—and there is an equally wide range of more theoretically oriented work.

This demonstrates the mainstream’s total ignorance of the range and vibrancy of theoretical and empirical research and debate taking place outside the realm of microfounded general equilibrium modelling. The increasing defensiveness exhibited by neoclassical economists when faced with criticism suggests, moreover, an uncomfortable awareness that all is not well with the orthodoxy. Instead of acknowledging the existence of a formal literature beyond the myopia of mainstream academia, the reaction is to try to shut down discussion with inaccurate blanket dismissals.

I conclude by noting that Smith isn’t Godley’s highest-profile detractor. A few years after he died—Godley, that is—Krugman wrote an unsympathetic review of his approach to economics, deriding him—oddly for someone as wedded to the IS-LM system as Krugman—for his “hydraulic Keynesianism”. In Krugman’s view, Godley’s method has been superseded by superior microfounded optimising-agent models:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers—it’s at the core of what we’re supposed to know—so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb. But there were also some notable predictive failures of hydraulic macro, failures that it seemed could have been avoided by thinking more in maximizing terms.

Predictive failures? Of all the accusations that could be levelled against Godley, that one takes some chutzpah.

Jo Michell

Response to Tony Yates’ critique of Teaching Economics After the Crash

Tony Yates has written a critical rejoinder to Aditya Chakrabortty’s Radio 4 documentary on student demands for changes to university teaching of economics. Yates’ contribution is welcome as a rare example of a mainstream economist publicly engaging with the issues raised by dissatisfied students. For too long, the response of the mainstream has been to ignore criticism. Yates’ willingness to enter into dialogue – even if motivated by unhappiness with the content of the programme – is encouraging. Further, it clarifies the view of (some) mainstream economists on the teaching debate.

Yates’ first complaint is that the programme is an opinion piece rather than a report in which equal space is given to each side. It is true that the bulk of the programme focused on the grievances raised by the student movement – this was after all the subject of the piece – and provided only brief slots for dissenting voices. Criticising the programme on this basis ignores the bigger picture of total dominance by mainstream economics – not only in academia but also in the media and public debate. The number of critical economists who appear regularly on television and radio can be counted on one hand. Chakrabortty’s programme and the student movement that pushed it onto the agenda are welcome, yet remain a drop in the ocean.

Yates might reflect on the following question: were a programme broadcast that defined economics in the terms he believes in – a rigorous scientific discipline systematically discovering objective truths and discarding past mistakes – would he object to such an equally one-sided narrative? For decades, this narrative has dominated to the extent that, until recently, there was no publicly audible debate. It is to the enormous credit of the student groups that they have raised the volume of critical voices to the point where Chakrabortty’s programme could be made.

The more substantive criticisms made by Yates relate to what he regards as manifold factual inaccuracies peddled by interviewees and allowed to go unchallenged – in particular, inaccuracies about the assumptions of mainstream economics.

There are two important problems with Yates’ argument. First, Chakrabortty’s programme was explicitly concerned with the teaching of economics – specifically at undergraduate level. Yates’ response is mainly concerned with academic and professional economics in general and, in particular, with the higher reaches of contemporary research programmes. Second, and more importantly, Yates condenses students’ calls for increased methodological pluralism into a debate between rational choice theory and its (neoclassical) alternatives. One of the first students interviewed by Chakrabortty complains about a “lack of alternative perspectives, lack of history or context, that could include politics . . . lack of critical thinking, and lack of real world application” in undergraduate degrees. Yates’ response entirely fails to address this key issue.

The “caricatures” of mainstream economics to which Yates takes offence include rational choice, rational expectations, perfect markets, quantifiable risk, and an ignorance of money, banking and finance. Yates argues that this characterisation fails to take account of recent innovations such as bounded rationality, asymmetric information, monopolistic competition, learning effects, uncertainty, sticky prices, credit frictions, and so on. Moreover, Yates has previously argued that a course based on these types of models could adequately replace the course on Bubbles, Panics and Crashes which Manchester University cancelled.

Putting aside, for the moment, issues of methodological pluralism and historical context, does Yates really believe that Farmer’s multiple equilibrium models, internal rationality in intertemporal optimisation, or search models of money and credit should be taught in undergraduate degrees? One of us (Jump) took an MSc on which John Hardman Moore taught. Even there, the “collected works of the Kiyotaki-Moore collaboration” didn’t make it onto the syllabus. One can hardly criticise a programme about teaching economics – and, by extension, those involved with the various student movements – for ignoring papers that most PhD students find difficult to follow.  Regardless of the validity of the approach, “crunching exotic nonlinear ordinary differential equations” is unlikely to become part of the undergraduate economics syllabus any time soon.

A squabble over the exact models taught is not, however, the real issue. While it is true that, since the heyday of real business cycle models, the mainstream has pulled back from the most egregious extremes of asserting a world of continuous full employment and total policy ineffectiveness, the subsequent modifications to general equilibrium models – substituting sticky prices for instantaneous price adjustment, internal rationality for rational expectations, asymmetric information for full information – are always framed as “frictions” and “imperfections”: deviations from some socially optimal baseline. Arguing about which specific unrealistic assumption has been dropped in this or that model misses the wood for the trees. The students want to be allowed to engage with different methodological approaches to economics – not to be told that if they study for another two years they can learn the Bernanke-Gertler financial accelerator model instead of the Woodford version with “perfect capital markets”.

The methodological approach of neoclassical economics – equilibria derived from optimisation problems couched in ever-more complicated mathematical settings – is highly restrictive, ideologically loaded, and universally imposed on undergraduates. The result of the complete elimination of any other approach from the curriculum is that students spend all their time learning how to manipulate abstract mathematical models which appear to hold little relevance for the real-world problems they are interested in addressing – as is made clear from the interviews conducted by Chakrabortty.

An important consequence of this methodological narrowing has been the (almost complete) eradication of economic history and the history of economic thought from the undergraduate curriculum.  This is a point conceded by Karl Whelan who argues, in his response to Chakrabortty’s programme, that mixing the formal neoclassical syllabus with “broader knowledge” would produce more rounded students – a conclusion also reached by the RES steering group on teaching economics.

Yates admits that he doesn’t believe that “any of the monetary policymakers I worked for or read believed much of [the workhorse NK model].  They worked off hunches, gut instinct, practical experiences.” (This is ironic given that Gali and Gertler – key architects and advocates of the models Yates claims policy-makers weren’t using – believe the models were introduced because previous versions were so inaccurate that “monetary policymakers turned to a combination of instinct, judgment, and raw hunches to assess the implications of different policy paths for the economy”.) What are such hunches and instincts based upon?  Aside from personal experience, one imagines that historical knowledge of previous crises played a part here (e.g., Ben Bernanke). Re-introducing this type of material into economics teaching would, as Whelan argues, produce more capable graduates.  Moreover, knowledge of the way that theory has evolved alongside economic events would provide valuable context for the “exotic non-linear equations” – but it would also cultivate an awareness of the dramatic methodological narrowing within the subject.

One of us (Michell) put this point to Yates on Twitter – admittedly not the ideal medium for careful debate. His response was approximately the following: economic history and history of economic thought are irrelevant – at best, a fun diversion for bath-time reading. This is because economics continually progresses so that the history of the discipline only reveals things “either discarded or whose husks were bettered and extracted”. As an example: “I don’t need to read Keynes to understand the liquidity trap … Wallace and Woodford suffices”.

At this point, one arrives at the inevitable argument that, whilst increasing methodological pluralism in undergraduate degrees may be a good thing, “heterodox economics” is best consigned to optional modules, or discarded altogether.  This misses a point of considerable importance: academic heterodoxy in economics is, more often than not, associated with methodological disagreement.  This is most clear in the further reaches of Post Keynesian and Austrian economics – e.g. Shackle, Lachmann – and in Marxian political economy where historical analysis is central.

If, for example, one wanted to teach the economics of financial crisis, surely the history of financial crises and inductive theory are the correct places to start?  Kindleberger and Minsky are the obvious candidates – after which more formal models could be considered.  This is not to say that the various heterodox approaches do not have their problems, but they are useful springboards to a deeper understanding of economic phenomena. Such empirically-based study would surely be a better starting point than learning Euler equations – despite the fact that the standard consumption Euler equation is known to fail miserably when taken to the data – or the standard model of a representative firm’s investment decision – despite the on-going failure of econometricians to find a robust relation between short run capital investment and the real interest rate.
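For readers unfamiliar with it, the consumption Euler equation referred to above takes the following standard form, with discount factor $\beta$, period marginal utility $u'(\cdot)$ and real return $r_{t+1}$:

```latex
% Marginal utility of consumption today equals the discounted, expected,
% return-adjusted marginal utility of consumption tomorrow.
u'(c_t) = \beta \, \mathbb{E}_t \!\left[ (1 + r_{t+1}) \, u'(c_{t+1}) \right]
```

It is this relation, linking consumption growth to the real interest rate, whose empirical performance the text describes as failing when taken to the data.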

Let us finish by returning to Yates’ Whig-historical view of the liquidity trap – a view which encapsulates much of the problem with mainstream economics. In modern neoclassical parlance, the liquidity trap refers to a situation in which nominal interest rates are equal to zero and quantitative easing is ineffective because changes in the quantity of (base) money have no effect on the (rational expectations) equilibrium future inflation path. As a result, the central bank is unable to reduce the real rate of interest and stimulate spending. All this matters because the economy fails to bring itself back to equilibrium in a timely fashion due to slow price adjustment.
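In symbols, the mechanism in this modern rendering rests on the Fisher relation together with the zero lower bound on the nominal rate:

```latex
r_t = i_t - \mathbb{E}_t[\pi_{t+1}], \qquad i_t \ge 0
% With i_t = 0, the real rate is r_t = -\mathbb{E}_t[\pi_{t+1}]: it can
% only fall further if expected inflation rises, which is exactly what
% changes in the quantity of base money fail to deliver in this setting.
```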

This is unrecognisable to any serious scholar of Keynes. The liquidity trap refers to a situation in which fundamental uncertainty about the future leads people to hoard cash in preference to other financial assets, no matter how cheap those assets become. At the same time, uncertainty means firms may not commit to investment even if interest rates fall to a point that would previously have stimulated spending. The stickiness or otherwise of prices and wages is irrelevant because changes in output and employment provide the mechanism by which saving and investment are brought into equilibrium.

This brings other contentious topics to the fore, such as uncertainty, animal spirits and the neoclassical treatment of money. Each of these is highlighted by Yates as having been used in the programme to attack mainstream economics unfairly – although he does concede that “money as a veil over barter” is, for the most part, a fair description.

Recall the definition of uncertainty emphasised by Knight and Keynes: a situation in which the future simply cannot be predicted, in contrast to a ‘risky’ situation in which all possible events are known, along with the probability of each.  This differentiation is fairly basic, and has been textbook material in game theory since (at least) Luce and Raiffa.  Now consider one example using Yates’ favoured approach to modelling uncertainty in macroeconomics: The central bank, unable to determine which of its three Phillips Curve models is correct, uses Bayesian inference to decide which model to use. This is almost beyond parody – simply a branding exercise which conceals the fact that the model has nothing whatsoever to do with the true meaning of the concept. Other “Keynesian” features of modern neoclassical economics highlighted by Yates are similarly grotesque caricatures of the original concepts.
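For concreteness, the kind of exercise being criticised can be sketched in a few lines. This is a hypothetical toy example (the priors and likelihoods are invented), but it makes the point plainly: every probability in it is known in advance, which is precisely why it describes risk rather than Knightian uncertainty.

```python
# Toy version of the exercise described above: a central bank with three
# candidate Phillips-curve models updates its beliefs about which is
# "correct" in the light of some observed data. All numbers are invented.

def bayesian_update(priors, likelihoods):
    """Posterior model probabilities: P(M | data) proportional to
    P(data | M) * P(M), normalised to sum to one."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

priors = [1 / 3, 1 / 3, 1 / 3]       # equal prior weight on each model
likelihoods = [0.02, 0.10, 0.05]     # hypothetical data likelihoods
posteriors = bayesian_update(priors, likelihoods)
```

The exercise is well defined only because the full menu of models and probabilities is specified at the outset – exactly the situation Knight and Keynes would call risk, not uncertainty.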

By not studying Keynes in the original – or any other important economist from more than forty years ago – students are prevented from discovering such inconsistencies and are forced to take at face value the distortions and misrepresentations of mainstream economics. They are prevented from understanding how historical circumstance plays a role in the development and acceptance of economic theory: the Great Depression for Keynes and the stagflation of the 1970s for Friedman, for example. They also – crucially – fail to appreciate that economic and political power matters: mainstream economic theory is “history as written by those perceived to have been the intellectual victors of key debates”.

Yates describes Aditya Chakrabortty’s Radio 4 documentary as “a distorting dramatisation, on account of allowing multiple silly, uninformed critiques to go unchallenged in the program. Yet presented as a reasonable, impartial take on what is going on in economics.” This is unfair to the students involved in the reform movement and misses the main point of the programme. While we would not defend every claim made in the programme, we strongly support the call for a widening of the economics curriculum.

Given the role of the profession in contributing to the 2008 crisis, and in justifying the inexcusable policy packages imposed in response to the post-crisis expansion of sovereign debt, we might – at the very least – display some humility when addressing the inevitable public backlash. Beyond this, we must act on student demands and address past failings by implementing a fundamental overhaul of the economics curriculum.


Rob Jump
Jo Michell