Consistent modelling and inconsistent terminology

Simon Wren-Lewis has a couple of recent posts up on heterodox macro, and stock-flow consistent modelling in particular. His posts are constructive and engaging. I want to respond to some of the points raised.

Simon discusses the modelling approach originating with Wynne Godley, Francis Cripps and others at the Cambridge Economic Policy Group in the 1970s. More recently this approach is associated with the work of Marc Lavoie who co-wrote the key textbook on the topic with Godley.

The term ‘stock-flow consistent’ was coined by Claudio Dos Santos in his PhD thesis, ‘Three essays in stock flow consistent modelling’ and has been a source of misunderstanding ever since. Simon writes, ‘it is inferred that mainstream models fail to impose stock flow consistency.’ As I tried to emphasise in the blog which Simon links to, this is not the intention: ‘any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.’ (There is an important caveat here: this consistency won’t be maintained after log-linearisation – a standard step in DSGE solution – and the further a linearised model gets from the steady state, the worse this inconsistency will become.)[1]

Marc Lavoie has emphasised that he regrets adopting the name, precisely because of the implication that consistency is not maintained in other modelling traditions. Instead, the term refers to a subset of models characterised by a number of specific features. These include the following: aggregate behavioural macro relationships informed by both empirical evidence and post-Keynesian theory; detailed, institutionally-specific modelling of the monetary and financial sector; and explicit feedback effects from financial balance sheets to economic behaviour and the stability of the macro system both in the short run and the long run.

A distinctive feature of these models is their rejection of the loanable funds theory of banking and money – a position endorsed in a recent Bank of England Quarterly Bulletin and Working Paper. Partially as a result of this view of the importance of money and money-values in the decision-making process, these models are usually specified in nominal magnitudes. As a result, they map more directly onto the national accounts than real-sector models which require complex transformations of data series using price deflators.

Since the behavioural features of these models are informed by a well-developed theoretical tradition, Simon’s assertion that SFC modelling is ‘accounting, not economics’ is inaccurate. Accounting is one important element in a broader methodological approach. Imposing detailed financial accounting alongside behavioural assumptions about how financial stocks and flows evolve imposes constraints across the entire system. Rather like trying to squeeze the air out of one part of a balloon, only to find another part inflating, chasing assets and liabilities around a closed system of linked balance sheets can be an informative exercise – because where leverage eventually turns up is not always clear at the outset. Likewise, SFC models may include detailed modelling of inventories, pricing and profits, or of changes in net worth due to asset price revaluation and price inflation. For such processes, even the accounting is non-trivial. Taking accounting seriously allows modellers to incorporate institutional complexity – something of increasing importance in today’s world.

The inclusion of detailed financial modelling allows the models to capture Godley’s view that agents aim to achieve certain stock-flow norms. These may include household debt-to-income ratios, inventories-to-sales ratios for firms and leverage ratios for banks. Many of the functional forms used implicitly capture these stock-flow ratios. This is the case for the simple consumption function used in the BoE paper discussed by Simon, as shown here. Of course, other functional specifications are possible, as in this model, for example, which includes a direct interest rate effect on consumption.
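
The consumption function discussed is presumably of the standard Godley–Lavoie form C = a1·YD + a2·V(-1): consumption out of current disposable income plus consumption out of lagged wealth. This implies a steady-state wealth-to-income norm of (1 - a1)/a2. A minimal sketch of how the norm emerges (the functional form is the standard one from the Godley–Lavoie textbook, assumed here rather than taken from the BoE paper, and the parameter values are purely illustrative):

a1 <- 0.8    # propensity to consume out of disposable income
a2 <- 0.1    # propensity to consume out of lagged wealth
YD <- 100    # constant disposable income
V  <- 0      # initial stock of wealth

for (t in 1:200) {
  C <- a1 * YD + a2 * V    # consumption flow
  V <- V + (YD - C)        # saving accumulates into the wealth stock
}

c(wealth_income_ratio = V / YD, implied_norm = (1 - a1) / a2)    # both equal 2

The norm is not imposed directly: it emerges from the interaction of the flow behaviour with the accounting that accumulates saving into wealth.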

Simon notes that adding basic financial accounting to standard models is trivial but ‘in most mainstream models these balances are of no consequence’. This is an important point, and should set alarm bells ringing. Simon identifies one reason for the neutrality of finance in standard models: ‘the simplicity of the dominant mainstream model of intertemporal consumption’.

There are deeper reasons why the financial sector has little role in standard macro. In the majority of standard DSGE macro models, the system automatically tends towards some long-run, supply-side-determined full-employment equilibrium – in other words, the models incorporate Milton Friedman’s long-run vertical Phillips Curve. Further, in most DSGE models, income distribution has no long-run effect on macroeconomic outcomes.

Post-Keynesian economics, which provides much of the underlying theoretical structure of SFC models, takes issue with these assumptions. Instead, it is argued, Keynes was correct in his assertion that demand deficiency can lead economies to become stuck in equilibria characterised by under-employment or stagnation.

Now, if the economic system is always in the process of returning to the flexible-price full-employment equilibrium, then financial stocks will be, at most, of transitory significance. They may serve to amplify macroeconomic fluctuations, as in the Bernanke-Gertler-Gilchrist models, but they will have no long-run effects. This is the reason that DSGE models which do attempt to incorporate financial leverage also require additional ‘ad-hoc’ adjustments to the deeper model assumptions – for example this model by Kumhof and Ranciere imposes an assumption of non-negative subsistence consumption for households. As a result, when income falls, households are unable to reduce consumption but instead run up debt. For similar reasons, if one tries to abandon the loanable funds theory in DSGE models – one of the key reasons for the insistence on accounting in SFC models – this likewise raises non-trivial issues, as shown in this paper by Benes and Kumhof (to my knowledge the only attempt so far to produce such a model).

Non-PK-SFC models, such as the UK’s OBR model, can therefore incorporate modelling of sectoral balances and leverage ratios – but these stocks have little effect on the real outcomes of the model.

By contrast, if long-run disequilibrium is considered a plausible outcome, financial stocks may persist and feedbacks from these stocks to the real economy will have non-trivial effects. In such a situation, attempts by individuals or sectors to achieve some stock-flow ratio can alter the long-run behaviour of the system. If a balance-sheet recession persists, it will have persistent effects on the real economy – such hysteresis effects are increasingly acknowledged in the profession.

This relates to an earlier point made in Simon’s post: ‘the fact that leverage was allowed to increase substantially before the crisis was not something that most macroeconomists were even aware of … it just wasn’t their field’. I’m surprised this is presented as evidence for the defence of mainstream macro.

The central point made by economists like Minsky and Godley was that financial dynamics should be part of our field. The fact that by 2007 it wasn’t illustrates how badly mainstream macroeconomics went wrong. Between Real Business Cycle models, Rational Expectations, the Efficient Markets Hypothesis and CAPM, economists convinced themselves – and, more importantly, policy-makers – that the financial system was none of their business. The fact that economists forgot to look at leverage ratios wasn’t an absent-minded oversight. As Olivier Blanchard argues:

 ‘… mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.’

This is partially acknowledged by Simon when he argues that the ‘microfoundations revolution’ lies behind economists’ myopia on the financial system. Where I, of course, agree with Simon is that ‘had the microfoundations revolution been more tolerant of other methodologies … macroeconomics may well have done more to integrate the financial sector into their models before the crisis’. Putting aside the point that, for the most part, the microfoundations revolution didn’t actually lead to microfounded models, ‘integrating the financial sector’ into models is exactly what people like Godley, Lavoie and others were doing.

Simon also makes an important point in highlighting the lack of acknowledgement of antecedents by PK-SFC authors and, as a result, a lack of continuity between PK-SFC models and the earlier structural econometric models (SEMs) which were eventually killed off by the shift to microfounded models. There is a rich seam of work here – heterodox economists should both acknowledge this and draw on it in their own work. In many respects, I see the PK-SFC approach as a continuation of the SEM tradition – I was therefore pleased to read this paper in which Simon argues for a return to the use of SEMs alongside DSGE and VAR techniques.

To my mind, this is what is attempted in the Bank of England paper criticised by Simon – the authors develop a non-DSGE, econometrically estimated, structural model of the UK economy in which the financial system is taken seriously. Simon is right, however, that the theoretical justifications for the behavioural specifications and the connections to previous literature could have been spelled out more clearly.

The new Bank of England model is one of a relatively small group of empirically-oriented SFC models. Others include the Levy Institute model of the US, originally developed by Wynne Godley and now maintained by Gennaro Zezza, the UNCTAD Global Policy model, developed in collaboration with Godley’s old colleague Francis Cripps, and the Gudgin and Coutts model of the UK economy (the last of these is not yet fully stock-flow consistent but shares much of its theoretical structure with the other models).

One important area for improvement in these models lies with their econometric specification. The models tend to have large numbers of parameters, making them difficult to estimate other than through individual OLS regressions of behavioural relationships. PK-SFC authors can certainly learn from the older SEM tradition in this area.

I find another point of agreement in Simon’s statement that ‘heterodox economists need to stop being heterodox’. I wouldn’t state this so strongly – I think heterodox economists need to become less heterodox. They should identify and more explicitly acknowledge those areas in which there is common ground with mainstream economics. In those areas where disagreement persists, they should try to explain more clearly why this is the case. Hopefully this will lead to more fruitful engagement in the future, rather than the negativity which has characterised some recent exchanges.

[1] Simon goes on to argue that stock-flow consistency is not ‘unique to Godley. When I was a young economist at the Treasury in the 1970s, their UK model was ‘stock-flow consistent’, and forecasts routinely looked at sector balances.’ During the 1970s, there was sustained debate between the Treasury and Godley’s Cambridge team, who were, aside from Milton Friedman’s monetarism, the most prominent critics of the Keynesian conventional wisdom of the time – there is an excellent history here. I don’t know the details, but I wonder whether the awareness of sectoral balances at the Treasury was partly due to Godley’s influence.

The Fable of the Ants, or Why the Representative Agent is No Such Thing

Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making process of one type of agent as a whole (for example consumers or firms) is reduced to that of a single ‘representative’ individual, whose behaviour is taken to be identical to that assumed to characterise actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way that an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces some emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base which alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out that this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
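
As a toy illustration of the mechanism – a deliberately minimal sketch, not the full ant colony optimisation algorithm – suppose ants choose between a short and a long path in proportion to the pheromone on each, and that shorter round trips deposit pheromone faster. All the numbers below (path lengths, deposit and evaporation rates) are invented for the example:

set.seed(1)
n_ants      <- 100                       # ants setting out each period
n_steps     <- 200                       # periods to simulate
path_length <- c(short = 1, long = 2)    # relative round-trip times
pheromone   <- c(short = 1, long = 1)    # equal initial attractiveness
evaporation <- 0.02                      # fraction of pheromone lost per period

for (t in seq_len(n_steps)) {
  # each ant picks a path with probability proportional to its pheromone
  p_short <- pheromone["short"] / sum(pheromone)
  choices <- ifelse(runif(n_ants) < p_short, "short", "long")
  # pheromone deposited per period is inversely related to round-trip time
  for (path in names(path_length)) {
    deposits <- sum(choices == path) / path_length[path]
    pheromone[path] <- (1 - evaporation) * pheromone[path] + 0.01 * deposits
  }
}

round(pheromone / sum(pheromone), 2)    # nearly all pheromone ends up on the short path

No individual ant computes anything about path lengths; the colony-level selection of the shorter path emerges entirely from the interaction of the simple rules.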

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have known for a long time that the opposite is the case. Formal proofs – the Sonnenschein-Mantel-Debreu results – demonstrate that a population of agents, each represented using a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’s critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models would not be invariant to changes in government policy and therefore should not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimisation.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals consume normal and inferior goods in fixed proportions, regardless of their income – a supermarket checkout worker will continue to consume baked beans and Swiss watches in unchanged proportions after she wins the lottery.

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
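
For reference, the standard CRRA utility function, in which the coefficient of relative risk aversion is the constant σ whatever the level of consumption:

$$u(c) = \frac{c^{1-\sigma}}{1-\sigma}, \qquad -\frac{c\,u''(c)}{u'(c)} = \sigma$$

(For σ = 1 the function is replaced by its limit, u(c) = ln c.)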

Further, and unfortunately for Lucas, income distribution turns out to matter: if all individuals do not behave identically, then as income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but income distribution is not allowed to change.

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts. The composition of consumption shifts as income increases. It is hard to believe these restrictive special cases provide a sufficient basis on which to construct macro models which can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as a whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but models which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that old-fashioned Keynesian structural macro behavioural assumptions can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. Like the ant-colony, it can be shown that under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.
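
As a toy illustration of this kind of result – my own example with invented numbers, under a stationary income distribution – give each household its own rule-of-thumb consumption rule and the aggregate still behaves like a simple Keynesian consumption function:

set.seed(42)
n_households <- 1000
autonomous   <- runif(n_households, 50, 150)     # heterogeneous consumption floors
mpc          <- runif(n_households, 0.4, 0.9)    # heterogeneous propensities to consume

n_periods       <- 100
agg_income      <- numeric(n_periods)
agg_consumption <- numeric(n_periods)

for (t in seq_len(n_periods)) {
  income      <- rlnorm(n_households, meanlog = 7, sdlog = 0.5)    # skewed income draws
  consumption <- autonomous + mpc * income                         # rule-of-thumb behaviour
  agg_income[t]      <- sum(income)
  agg_consumption[t] <- sum(consumption)
}

# despite the micro-level heterogeneity, aggregate consumption is well
# described by a simple linear function of aggregate income
summary(lm(agg_consumption ~ agg_income))

Note the contrast with the representative-agent move: the stable aggregate relationship exists even though no individual household behaves like the aggregate function.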

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.

On ‘heterodox’ macroeconomics

Noah Smith has a new post on the failure of mainstream macroeconomics and what he perceives as the lack of ‘heterodox’ alternatives. Noah is correct about the failure of mainstream macroeconomics, particularly the dominant DSGE modelling approach. This failure is increasingly – if reluctantly – accepted within the economics discipline. As Brad DeLong puts it, DSGE macro has ‘… proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.’

I disagree with Noah, however, when he argues that ‘heterodox’ economics has little to offer as an alternative to the failed mainstream.

The term ‘heterodox economics’ is a difficult one. I dislike it and resisted adopting it for some time: I would much rather be ‘an economist’ than ‘a heterodox economist’. But it is clear that unless you accept – pretty much without criticism – the assumptions and methodology of the mainstream, you will not be accepted as ‘an economist’. This was not the case when Joan Robinson debated with Solow and Samuelson, or Kaldor debated with Hayek. But it is the case today.

The problem with ‘heterodox economics’ is that it is self-definition in terms of the other. It says ‘we are not them’ – but says nothing about what we are. This is because it includes everything outside of the mainstream, from reasonably well-defined and coherent schools of thought such as Post Keynesians, Marxists and Austrians, to much more nebulous and ill-defined discontents of all hues. To put it bluntly, a broad definition of ‘people who disagree with mainstream economics’ is going to include a lot of cranks. People will place the boundary between serious non-mainstream economists and cranks differently, depending on their perspective.

Another problem is that these schools of thought have fundamental differences. Aside from rejecting standard neoclassical economics, the Marxists and the Austrians don’t have a great deal in common.

Noah seems to define heterodox economics as ‘non-mathematical’ economics. This is inaccurate. There is much formal modelling outside of the mainstream. The difference lies with the starting assumptions. Mainstream macro starts from the assumption of inter-temporal optimisation and a system which returns to the supply-side-determined full-employment equilibrium in the long run. Non-mainstream economists reject these in favour of assumptions which they regard as more empirically plausible.

It is true that there are some heterodox economists, for example Tony Lawson and Ben Fine, who take the position that maths is an inappropriate tool for economics and should be rejected. (Incidentally, both were originally mathematicians.) This is a minority position, and one I disagree with. The view is influential, however. The highest-ranked heterodox economics journal, the Cambridge Journal of Economics, has recently changed its editorial policy to explicitly discourage the use of mathematics. This is a serious mistake in my opinion.

So Noah’s claim about mathematics is a straw man. He implicitly acknowledges this by discussing one class of mathematical Post Keynesian models, the so-called ‘stock-flow consistent’ models (SFC). He rightly notes that the name is confusing – any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.
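
To make the consistency point concrete, here is a minimal sketch based on the simplest model in the Godley and Lavoie textbook (Model SIM), with illustrative parameter values: a closed economy with households and a government, in which the accounting guarantees that the government deficit equals household saving in every single period.

alpha1 <- 0.6    # propensity to consume out of disposable income
alpha2 <- 0.4    # propensity to consume out of lagged money balances
theta  <- 0.2    # tax rate
G      <- 20     # government spending
H      <- 0      # household money balances

for (t in 1:100) {
  H_lag <- H
  Y   <- (G + alpha2 * H_lag) / (1 - alpha1 * (1 - theta))    # solves Y = C + G
  Tax <- theta * Y                                            # tax take
  YD  <- Y - Tax                                              # disposable income
  C   <- alpha1 * YD + alpha2 * H_lag                         # consumption
  H   <- H_lag + YD - C                                       # saving accumulates as money
  stopifnot(abs((G - Tax) - (H - H_lag)) < 1e-9)              # deficit == new money, every period
}

The stopifnot check never fires: the sectoral balances are watertight by construction – which is exactly the property any internally consistent model, DSGE included, should have.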

SFC refers to a narrower set of models which incorporate detailed modelling of the ‘plumbing’ of the financial system alongside traditional Keynesian macro behavioural assumptions – and reject the standard inter-temporal optimising assumptions of DSGE macro. Marc Lavoie, whose work is most closely associated with these models, admits the name is misleading and that, with hindsight, a more appropriate one should have been chosen. But names stick, so SFC joins a long tradition of badly-named concepts in economics such as ‘real business cycles’ and ‘rational expectations’.

Noah claims that ‘vague ideas can’t be tested against the data and rejected’. While the characterisation of all heterodox economics as ‘vague ideas’ is another straw man, the falsifiability point is important. As Noah points out, ‘One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models.’ He also notes that big SFC models have so many parameters that they are essentially impossible to fit to the data.

This raises an important question about what we want economic models to do, and what the criteria should be for acceptance or rejection. The belief that models should provide quantitative predictions of the future has been held much too strongly. Economists need to come to terms with the reality that the future is unknowable – no model will reliably predict it. For a while, DSGE models seemed to perform reasonably well. With hindsight, this was largely because enough degrees of freedom were added when converting them to econometric equations that they could do a reasonable job of projecting past trends forward, along with some mean reversion. This predictive power collapsed totally with the crisis of 2008.

Models then should be seen as ways to gain insight into the mechanisms at work and to test the implications of combining assumptions. I agree with Narayana Kocherlakota when he argues that we need to return to smaller ‘toy models’ to think through economic mechanisms. Larger econometrically estimated models are useful for sketching out future scenarios – but the predictive power assigned to such models needs to be downplayed.

So the question is then – what are the correct assumptions to make when constructing formal macro models? Noah argues that Post Keynesian models ‘don’t take human behaviour into account – the equations are typically all in terms of macroeconomic aggregates – there’s a good chance that the models could fail if policy changes make consumers and companies act differently than expected’.

This is of course Robert Lucas’s critique of structural econometric modelling. This critique was a key element in the ‘microfoundations revolution’ which ushered in the so-called Real Business Cycle models which form the core of the disastrous DSGE research programme.

The critique is misguided, however. Aggregate behavioural relationships do have a basis in individual behaviour. As Bob Solow puts it:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. … Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual – and, to some extent, market-behavior.

In many ways, aggregate behavioural specifications can make a stronger claim to be based in microeconomic behaviour than the representative agent DSGE models which came to dominate mainstream macro. (I will expand on this point in a separate blog.)

Mainstream macro has reached the point that only two extremes are admitted: formal, internally consistent DSGE models, and atheoretical testing of the data using VAR models. Anything in between – such as structural econometric modelling – is rejected. As Simon Wren-Lewis has argued, this theoretical extremism cannot be justified.

Crucial issues and ideas emphasised by heterodox economists were rejected for decades by the mainstream while it was in thrall to representative-agent DSGE models. These ideas included the role of income distribution, the importance of money, credit and financial structure, the possibility of long-term stagnation due to demand-side shortfalls, the inadequacy of reliance on monetary policy alone for demand management, and the possibility of demand affecting the supply side. All of these ideas are, to a greater or lesser extent, now gradually becoming accepted and absorbed by the mainstream – in many cases with no acknowledgement of the traditions which continued to discuss and study them even as the mainstream dismissed them.

Does this mean that there is a fully-fledged ‘heterodox economics’ waiting in the wings, ready to take over from mainstream macro? It depends what is meant – is there a complete model of the economy sitting in a computer waiting for someone to turn it on? No – but there never will be, either within the mainstream or outside it. But Lavoie argues,

if by any bad luck neoclassical economics were to disappear completely from the surface of the Earth, this would leave economics utterly unaffected because heterodox economics has its own agenda, or agendas, and its own methodological approaches and models.

I think this conclusion is too strong – partly because I don’t think the boundary between neoclassical economics and heterodox economics is as clear as some claim. But it highlights the rich tradition of ideas and models outside of the mainstream – many of which have stood the test of time much better than DSGE macro. It is time these ideas were acknowledged.

What do immigration numbers tell us about the Brexit vote?

A couple of weeks ago I tweeted a chart from The Economist which plotted the percentage increase in the foreign-born population in UK local authority areas against the number of Leave votes in that area. I also quoted the accompanying article: ‘Where foreign-born populations increased by more than 200%, a Leave vote followed in 94% of cases.’

[Chart from The Economist: percentage increase in foreign-born population vs. Leave vote]

This generated lots of responses, many of which rightly pointed out the problems with the causality implied in the quote. These included the following:

  • Using the percentage change in the foreign-born population is problematic because it is highly sensitive to the initial size of the population.
  • Majority Leave votes also occurred in many areas where the number of migrants had fallen.
  • Much of the result is driven by a relatively small number of outliers, while the systematic relationship looks to be flat.
  • The number of points where foreign-born populations had increased by more than 200% was small relative to the total sample: around twenty points out of several hundred.

All these criticisms are valid. With hindsight, the Economist probably shouldn’t have published the chart and article – and I shouldn’t have tweeted it. But the discussion on Twitter got me interested in whether the geographical data can tell us anything interesting about the Leave vote.

I started by trying to reproduce the Economist’s chart. The time period they use for the change in foreign-born population is 2001-2014. This presumably means they used census data for the 2001 numbers and ONS population estimates for 2014. My attempt to reproduce the graph using these datasets is shown below. The data points are colour-coded by geographical region and the size of the data point represents the size of the foreign-born population in 2014 as a percentage of the total. (The chart is slightly different to the one I previously tweeted, which had some data problems.)

[Chart: percentage increase in foreign-born population, 2001-2014, vs. Leave vote share, colour-coded by region; point size = foreign-born share of population in 2014]

Despite the problems described above, the significance of geography in the vote is clear – this is emphasised in the excellent analysis published recently by the Resolution Foundation and by Geoff Tily at the TUC (see also this in the FT and this in the Guardian).

Of the English and Welsh regions, it is clear that the Remain vote was overwhelmingly driven by London (the chart above excludes Scotland and Northern Ireland, both of which voted to Remain). Other areas which have seen substantial growth in foreign-born populations and also voted to Remain are cities such as Oxford, Cambridge, Bristol, Manchester and Liverpool.

A better way to look at this data is to plot the percentage point change in the foreign-born population instead of the percentage increase. This prevents small initial foreign-born populations from producing large percentage increases. The result is shown below. For this, and the rest of the analysis that follows, I’ve used the ONS estimates of the foreign-born population. This reduces the number of years to 2004-2014, but excludes possible errors due to incompatibility between the census data and the ONS estimates. It also allows for the inclusion of Scottish data (but not data from Northern Ireland). I’ve also flipped the X and Y axes: if we are thinking of the Leave vote as the thing we wish to explain, it makes more sense to follow convention and put it on the Y axis.
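
The distinction between the two measures matters more than it might appear. A quick illustration with invented numbers:

# an area going from a 1% to a 3% foreign-born share shows a 200% increase
# but only a 2 percentage point change (all numbers invented)
fb_2004 <- 0.01    # initial foreign-born share of the population
fb_2014 <- 0.03    # final foreign-born share
pct_increase <- (fb_2014 - fb_2004) / fb_2004 * 100    # 200
pp_change    <- (fb_2014 - fb_2004) * 100              # 2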

[Chart: percentage point change in foreign-born population, 2004-2014 (X axis), vs. Leave vote share (Y axis)]

There is no statistically significant relationship between the two variables in the chart above. The divergence between London, Scotland and the rest of the UK is clear, however. There also looks to be a positive relationship between the increase in foreign-born population and the Leave vote within London. This can be seen more clearly if the regions are plotted separately.

[Chart: percentage point change in foreign-born population vs. Leave vote share, plotted separately by region]

The only region in which there is a statistically significant relationship in a simple regression between the two variables is London. A one percentage point increase in the foreign-born population is associated with a 1.5 percentage point increase in the Leave vote (with an R-squared of about 0.4). The chart below shows the London data in isolation.

[Chart: percentage point change in foreign-born population vs. Leave vote share, London boroughs only]

The net inflow of migrants appears to have been greatest in the outer boroughs of London – and these boroughs also returned the highest Leave votes. There are a number of possible explanations for this. One is that new migrants go to where housing is affordable – which means the outer regions of London. These are also the areas where incomes are likely to be lower. There is some evidence for this, as shown in the chart below: there is a negative relationship – albeit a weak one – between the increase in the foreign-born population and the median wage in the area.

[Chart: increase in foreign-born population vs. median wage, London boroughs]

Returning to the UK as a whole (excluding Northern Ireland), the Resolution Foundation finds that there is a statistically significant relationship between the percentage point increase in foreign-born population and the Leave vote when the size of the foreign-born population is controlled for. This is confirmed in the following simple regression, where FB.PP.Incr is the percentage point increase in the foreign-born population and FB.Pop.Pct is the foreign-born population as a percent of the total.
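
The output below is standard R summary output for a linear model. A call along the following lines would produce it (only FB.PP.Incr and FB.Pop.Pct are named in the text; the data frame and the name of the dependent variable are my assumptions):

summary(lm(Leave.Pct ~ FB.PP.Incr + FB.Pop.Pct, data = la_data))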

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  57.19258    0.71282  80.235  < 2e-16 ***
FB.PP.Incr    0.90665    0.17060   5.314 1.87e-07 ***
FB.Pop.Pct   -0.64344    0.05984 -10.752  < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 9.002 on 363 degrees of freedom
Multiple R-squared: 0.2475, Adjusted R-squared: 0.2433
F-statistic: 59.69 on 2 and 363 DF, p-value: < 2.2e-16

It is clear that controlling for the foreign-born population is, in large part, controlling for London. This is illustrated in the chart below which shows the foreign-born population as a percentage of the total for each local authority in 2014, grouped by broad geographical region. The boxplots in the background show the median and interquartile range of foreign-born population share by region. The size of the data points represents the size of the electorate in that local authority.

[Chart: foreign-born population share by local authority, grouped by region, with boxplots; point size = electorate]

This highlights a problem with the analysis so far – one that affects any regional analysis conducted on the basis of local authority data. By taking each area as a single data point, statistical analysis misses the significance of differences in the size of electorates. This is important because it means, for example, that the Leave vote of 57% from Richmondshire, North Yorkshire, with around 27,000 votes cast, is given the same weight as the Leave vote of 57% in County Durham, with around 270,000 votes cast.

This can be overcome by constructing an index of referendum voting weighted by the size of the electorate in each area. This index is constructed so that it is equal to zero where the Leave vote was 50%, negative for areas voting Remain, and positive for areas voting Leave. The magnitude of the index represents the strength of the contribution to the overall result. Plotting this index against the percentage point change in the foreign population produces the following chart. Data point sizes represent the number of votes in each area.

[Chart: electorate-weighted Leave index vs. percentage point change in foreign-born population; point size = votes cast]
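
One way to construct such an index – a sketch only, since the exact scaling used here is not spelled out above – is to centre each area’s Leave share on 50% and weight by its share of the total vote:

areas <- data.frame(
  area       = c("Richmondshire", "County Durham"),
  leave_pct  = c(57, 57),
  votes_cast = c(27000, 270000)
)
areas$leave_index <- (areas$leave_pct - 50) * areas$votes_cast / sum(areas$votes_cast)
areas    # the same 57% Leave share carries ten times the weight in County Durham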

Again, there is no statistically significant relationship between the two variables, but as with the unweighted data, when controlling for the foreign-born population, a positive relationship does exist between the increase in the foreign-born population and Leave votes.

The outliers are different to those seen in the unweighted voting data, however – particularly in areas with a strong Leave vote. This can be seen more clearly by removing the two areas with the strongest Remain votes: London and Scotland. The data for the rest of England and Wales only are shown below.

[Chart: electorate-weighted Leave index vs. percentage point change in foreign-born population, England and Wales excluding London]

There is a clear split between the strong Leave outliers and the strong Remain outliers. The latter are Bristol, Brighton, Manchester, Liverpool and Cardiff. When weighted by size of vote, the previous outliers for Leave – Eastern areas such as Boston and South Holland – are replaced by towns and cities in the West Midlands and Yorkshire, and by the counties of Cornwall and County Durham.

Overall, while there is a relationship between net migration inflows and Leave votes – at least when controlling for the size of the foreign-born population – it is only a small part of the story. The most compelling discussions I’ve seen of the underlying causes of the Leave vote are those which emphasise the rise in precarity and the loss of social cohesion and identity in the lives of working people, such as John Lanchester’s piece in the London Review of Books (despite the errors), the excellent follow-up piece by blogger Flip-Chart Rick, and this piece by Tony Hockley. As Geoff Tily argues, the geographical distribution of votes strongly suggests economic dissatisfaction was a key driver of the Leave vote, which pitted ‘cosmopolitan cities’ against the rest of the country. This is compatible with the pattern shown above, where the strongest Leave votes are concentrated in ex-industrial areas and the strongest Remain votes in the ‘cosmopolitan cities’.

The chart below shows the weighted Leave vote plotted against median gross weekly pay.

[Chart: electorate-weighted Leave index vs. median gross weekly pay]

Scotland as a whole is once again the outlier, while much of the relationship appears to be driven by London, where wages are higher and the majority voted Remain. Removing these two regions gives the following graph.

[Chart: electorate-weighted Leave index vs. median gross weekly pay, excluding London and Scotland]

Aside from the outlier Remain cities, there is a negative relationship between median pay and weighted Leave votes. The statistical strength of this relationship is relatively weak, however.

Putting all the variables together produces the following regression result:

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  80.98722   12.18838   6.645 1.12e-10 ***
FB.PP.Incr    2.46269    0.57072   4.315 2.06e-05 ***
FB.Pop.Pct   -1.61904    0.21781  -7.433 7.72e-13 ***
Median.Wage  -0.12539    0.02404  -5.216 3.08e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 29 on 362 degrees of freedom
Multiple R-squared: 0.2977, Adjusted R-squared: 0.2919
F-statistic: 51.15 on 3 and 362 DF, p-value: < 2.2e-16

Leave votes are negatively associated with the size of the foreign-born population and with the median wage, and positively associated with increases in the foreign-born population. The R^2 value of 0.3 suggests this model has some predictive power, but could certainly be improved. Adding regional dummy variables to the same specification gives the following result:

Coefficients:
                                 Estimate Std. Error t value Pr(>|t|)
(Intercept)                     107.61139   13.30665   8.087 9.97e-15 ***
FB.PP.Incr                        2.92817    0.49930   5.865 1.04e-08 ***
FB.Pop.Pct                       -2.34394    0.27140  -8.636  < 2e-16 ***
Median.Wage                      -0.14360    0.02313  -6.210 1.50e-09 ***
RegionEast Midlands              -9.07601    5.44978  -1.665  0.09672 .
RegionLondon                      9.44698    8.34896   1.132  0.25861
RegionNorth East                 -4.11112    8.02869  -0.512  0.60893
RegionNorth West                -16.69448    5.51048  -3.030  0.00263 **
RegionScotland                  -61.65217    5.76312 -10.698  < 2e-16 ***
RegionSouth East                 -4.60717    4.64123  -0.993  0.32156
RegionSouth West                -18.73821    5.55187  -3.375  0.00082 ***
RegionWales                     -27.65673    6.53577  -4.232 2.96e-05 ***
RegionWest Midlands               4.06613    5.83469   0.697  0.48633
RegionYorkshire and The Humber    4.72398    6.61676   0.714  0.47574
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 24 on 352 degrees of freedom
Multiple R-squared: 0.5323, Adjusted R-squared: 0.515
F-statistic: 30.82 on 13 and 352 DF, p-value: < 2.2e-16


The regional dummies improve the fit of the model substantially – increasing the value of R^2 to around 0.5. This suggests – unsurprisingly – that there are differences between regions which are not captured by the three variables included here.

Immigration brings both benefits and costs – but no reason to leave

If UK voters decide to leave the European Union, it will be for one reason above all. From the outset, nationalism bordering on xenophobia has been a defining feature of the Leave campaign. Having lost the argument on broader economic issues, it looks likely the Leave camp will fight the final month of the campaign on immigration. The scapegoating of migrants for the UK’s economic problems will become increasingly unrestrained as the referendum date approaches.

It is not difficult to understand why the Leave camp has chosen to focus on immigration: it is the issue which matters most to those likely to vote for Brexit. Fear that immigration undermines living standards and increases precarity is strong. The anti-European political right has harnessed this fear in a cynical attempt to exploit the insecurity of working class voters in the era of globalisation.

It is countered by Remain campaign statements emphasising that immigration is good for the economy: there are fiscal benefits, immigrants bring much-needed skills and – because migrants are mostly of working age – immigration offsets the effects of an ageing population.

These claims are well-founded. But immigration has both positive and negative effects. Like other facets of globalisation, the impact of immigration is felt unevenly.

At its simplest, the pro-immigration argument is that migrants find work without displacing native workers, thus increasing the size of the economy. This argument is a valid way to dispel the ‘lump of labour’ fallacy and counter naive arguments that immigration automatically costs jobs. But it does not prove immigration is necessarily positive: an increasing population also puts pressure on housing, the environment and public services.

A stronger position is taken by those who claim that immigration increases GDP per capita – that migrants raise labour productivity. It is difficult to interpret the evidence on this, since productivity is simultaneously determined by many factors. But even those who argue that the evidence supports this position find the effect to be very weak. Positive effects on productivity are likely to be due to skilled migrants being hired as a result of the UK ‘skills gap’.

But not all – or even most – immigrants are in highly skilled work. Despite being well-educated, many come looking for whatever work they can find and are willing to work for low wages. A third of EU nationals in the UK are employed in ‘elementary and processing occupations’. What is the effect of an increasing pool of cheap labour looking for low-skilled work? The evidence suggests there is little effect on employment rates over the long run. There may, however, be displacement effects in the short run. In particular, when the labour market is slack – during recessions – the job prospects of low-paid and unskilled workers may be damaged by migrant inflows.

The evidence on wages likewise suggests effects are small, but again there appears to be some impact of immigration on the wages of low-skilled workers. There is also evidence of labour market segmentation: migrants are disproportionately represented in the seasonal, temporary and ‘flexible’ (i.e. precarious) workforce.

Further, much of the evidence on employment and wages comes from a period of high growth and strong economic performance. This may not be a reliable guide to the future. It is possible that more significant negative effects could emerge, particularly if the economy remains weak.

Economists on the Remain side downplay the negative effects of immigration, presenting it as unequivocally good for the UK economy. It is undoubtedly difficult to present a nuanced argument in the short space available for a media sound-bite. But it is possible that the line taken by the Remain camp plays into the hands of the Leave campaign.

Aside from the skills they bring – around a quarter of NHS doctors are foreign nationals – the main benefit of immigration is the effect on demographics. Without inward migration, the UK working age population would have already peaked. But ageing cannot be postponed indefinitely.

Rapid population growth leads to pressures on public services, housing and infrastructure unless there are on-going programmes of investment, upgrading of infrastructure and house building. Careful planning is required to ensure that public services are available before migrants arrive – otherwise there will be a period while services are under pressure before more capacity is added.

Long-run investment in public services, infrastructure and housing is exactly what the UK has not been doing. Instead, we are more than five years into an unnecessary austerity programme. Our infrastructure is ageing and suffers from lack of capacity. Wages have yet to recover to pre-crisis levels. Government services continue to be cut, even as the population increases.

Those who face pressure on their standard of life from weak wage growth and rising housing costs will understandably find it difficult to disentangle the causes of their problems. For many, immigration will not be the reason – but it will be more visible and tangible than austerity, lack of aggregate demand and weak labour bargaining power.

The root of the problem is that the UK is increasingly a low-wage, low-skill economy. There is a shortage of affordable housing and public services are facing the deepest cuts in decades. None of these problems would be solved by the reorganised Conservative government that would take power immediately following a vote to leave the EU. Instead, it is clear that much of the Leave camp favours a Thatcherite programme of further cuts and deregulation.

Campaigners for Leave will continue to use immigration as a way to take Britain out of the EU. They are wrong. This is cynical exploitation of genuine problems and fears faced by many low-wage workers. Immigration is not a reason to leave the European Union.

But the status quo of high immigration alongside cuts to public services and wage stagnation cannot continue indefinitely. If high levels of migration are to continue, as looks likely, the UK government must consider how to accommodate the rapidly increasing population. Government services must keep pace with population increases. Pressures will be particularly acute in London and the South East.

We must also be more open in admitting that immigration has both costs and benefits – it does not affect the population evenly. Liberal commentators should acknowledge the concerns of those facing the negative effects of immigration. In doing so, they may lessen the chances that voters fall for the false promises of the Leave campaign.

This article is part of the EREP report on the EU referendum ‘Remain for Change’. The authors of the report are:

John Weeks, Professor Emeritus of Development Economics, SOAS
Ann Pettifor, Director of Policy Research in Macroeconomics
Özlem Onaran, Professor of Economics, Director of Greenwich Political Economy Research Centre
Jo Michell, Senior Lecturer in Economics, University of the West of England
Howard Reed, Director of Landman Economics
Andrew Simms, Co-founder of New Weather Institute, Fellow of the New Economics Foundation
John Grahl, Professor of European Integration, Middlesex University
Engelbert Stockhammer, Professor, School of Economics, Politics and History, Kingston University
Giovanni Cozzi, Senior Lecturer in Economics, Greenwich Political Economy Research Centre
Jeremy Smith, Co-director of Policy Research in Macroeconomics, Convenor of EREP

There is nothing “simple” about the European Commission’s securitisation proposal

On May 23, 2016, 83 scholars from Europe wrote to the European Parliament to call for careful consideration of the European Commission’s proposals for a new market for STS securitisations, part of the Capital Markets Union agenda. Members of the ECON Committee of the European Parliament are currently working on this proposal. Read the full letter here – Open letter to MEPs – STS securitisation.

Why isn’t the Commission talking about government debt?

One more clue as to how controversial government debt markets are in Euroland these days.

The European Commission’s progress report on Capital Markets Union manages to make no reference whatsoever to the issue of government bond markets, their life after the ECB’s QE (bound to end someday) and their critical role in capital markets integration. It’s all about securitisation, corporate bond market liquidity and covered bonds.

Compare this with early views on what it takes to create a market-based financial system in Euroland. In May 1999, Alexandre Lamfalussy, recently appointed head of EuroMTS and former head of the European Monetary Institute (that would become the ECB), had this to say:

‘“We’ve seen an accelerated move to a market-centric system from the bank-centric system that has tended to prevail in Europe,” Lamfalussy said in London last month. “I have no doubt that a market-centric system is more efficient, but there’s a question whether it is stable.” The key to stability, he concludes – for the pricing of corporate as well as public debt – is a liquid and transparent government debt market.’

This is a story of shadow money – the ongoing struggle to define a social contract for liabilities issued against sovereign collateral.

Who is writing the IMF’s recent history?

No, this is not a blog about the impossible triangle IMF-Commission-Greece. I am skeptical anything new can be said about it.

It’s about something perhaps more fundamental: the IMF’s willingness to confront its inglorious past on the free movement of capital.

A couple of months ago, in February 2016, the Fund released a working paper by Atish Ghosh and Mahvash Qureshi, of the Research Department. That paper traces the historical processes through which capital controls became anathema to policy communities around the world, including the IMF. It doesn’t hide behind pretty memes (capital flow management) and technical language: visceral opposition to capital controls, it argues, arose from the free market ideology of the 1980s and 1990s! It’s the politics.

The IMF Research Department, that paper shows, doesn’t need to hide behind closed doors to read Keynes, Eric Helleiner or Kevin Gallagher*. It can now do it in the open.

Skeptics of the IMF’s revolutionary transformations (and I am one, as I argued here for the IMF’s view of capital controls and here for global banking) would point to the institutional pathologies of the IMF. The Research Department has far greater liberty to engage in/with heterodox alternatives, but that doesn’t always translate into profound institutional change.

What is different here: Lagarde has just nominated Atish Ghosh, together with the Princeton historian Harold James, to ‘chronicle defining moments in the Fund’s history’.

Professor James and Mr. Ghosh will write the Fund’s official history from 2000 to 2015, a period characterized by the global financial crisis, the crisis in Europe, and the growing role of emerging and developing countries in the world economy — all defining moments in the Fund’s history.

This history will include the pre-2008 near fall into oblivion (‘assisted’ by Venezuela’s oil money helping large countries pay back the IMF), the Eastern European and then Greek/Irish/Portuguese adventures, and Blanchard’s reign, with its shifts on capital controls, on DSGE ‘supremacy’, on fiscal multipliers, on ‘we need to build analytical capacity for understanding global finance’. Can’t wait to read it.

Daniela Gabor

*odd that the paper does not reference Helene Rey’s dilemma, but small miracles…

Economics: science or politics? A reply to Kay and Romer

Romer’s article on ‘mathiness’ triggered a debate in the economics blogs last year. I didn’t pay a great deal of attention at the time; that economists were using relatively trivial yet abstruse mathematics to disguise their political leanings didn’t seem a particularly penetrating insight.

Later in the year, I read a comment piece by John Kay on the same subject in the Financial Times. Kay’s article, published under the headline ‘Economists should keep to the facts, not feelings’, was sufficiently cavalier with the facts that I felt compelled to respond. I was not the only one – Geoff Harcourt wrote a letter supporting my defence of Joan Robinson and correcting Kay’s inaccurate description of her as a Marxist.

After writing the letter, I found myself wondering why a serious writer like Kay would publish such carelessly inaccurate statements. Following a suggestion from Matheus Grasselli, I turned to Romer’s original paper:

Economists usually stick to science. Robert Solow was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson was engaged in academic politics when she waged her campaign against capital and the aggregate production function …

Solow’s mathematical theory of growth mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly. The tight connection between the word and the equations gave the word a precise meaning that facilitated equally tight connections between theoretical and empirical claims. Gary Becker’s mathematical theory of wages gave the words ‘human capital’ the same precision …

Once again, the facts appear to have fallen by the wayside. The issue at the heart of the debates involving Joan Robinson, Robert Solow and others is whether it is valid to represent a complex macroeconomic system (such as a country) with a single ‘aggregate’ production function. Solow had been working on the assumption that the macroeconomic system could be represented by the same microeconomic mathematical function used to model individual firms. In particular, Solow and his neoclassical colleagues assumed that a key property of the microeconomic version – that labour will be smoothly substituted for capital as the rate of interest rises – would also hold at the aggregate level. It would then be reasonable to produce simple macroeconomic models by assuming a single production function for the whole economy, as Solow did in his famous growth model.

Joan Robinson and her UK Cambridge colleagues showed this was not true. They demonstrated cases (capital reversing and reswitching) which contradicted the neoclassical conclusions about the relationship between the choice of technique and the rate of interest. One may accept the assumption that individual firms can be represented as neoclassical production functions, but concluding that the economy can then also be represented by such a function is a logical error.

One important reason is that the capital goods which enter production functions as inputs are not identical, but instead have specific properties. These differences make it all but impossible to measure the ‘size’ of any collection of capital goods. Further, in Solow’s model, the distinction between capital goods and consumption goods is entirely dissolved – the production function simply generates ‘output’ which may either be consumed or accumulated. What Robinson demonstrated was that it is impossible to measure capital accurately independently of prices and income distribution. But since, in an aggregate production function, income distribution is determined by marginal productivity – which in turn depends on quantities – it is impossible to avoid arguing in a circle. Romer’s assertion of a ‘tight connection between the word and the equations’ is a straightforward misrepresentation of the facts.
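A stylised way to see the circularity (the symbols are mine, purely for illustration): the ‘quantity’ of aggregate capital is a sum of heterogeneous goods valued at their prices, but those prices are discounted streams of future returns and so depend on the rate of profit, while the rate of profit is in turn supposed to be determined by the marginal product of that same aggregate:

$$K(r) = \sum_i p_i(r)\, k_i, \qquad r = \frac{\partial F(K, L)}{\partial K}.$$

Measuring $K$ requires knowing $r$, but $r$ is meant to be explained by $K$: the aggregate cannot be constructed prior to the distribution it is supposed to determine.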

The assertion of ‘equally tight connections between theoretical and empirical claims’ is likewise misplaced. As Anwar Shaikh showed in 1974, it is straightforward to demonstrate that Solow’s ‘evidence’ for the aggregate production function is no such thing. In fact, what Solow and others were testing turned out to be national accounting identities. Shaikh demonstrated that, as long as labour and capital shares are roughly constant – the ‘Kaldor facts’ – then any structure of production will produce empirical results consistent with an aggregate Cobb-Douglas production function. The aggregate production function is therefore ‘not even wrong: it is not a behavioral relationship capable of being statistically refuted’.
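The algebra behind Shaikh’s point is short. A sketch, in my own notation: start from the national income identity and assume only that factor shares are constant,

$$Y \equiv wL + rK, \qquad \frac{wL}{Y} = 1-\alpha, \qquad \frac{rK}{Y} = \alpha.$$

Differentiating the identity with respect to time and using the constant shares gives

$$\hat{Y} = (1-\alpha)\big(\hat{w} + \hat{L}\big) + \alpha\big(\hat{r} + \hat{K}\big),$$

where hats denote growth rates. Integrating back up,

$$Y = B\, w^{1-\alpha} r^{\alpha}\, K^{\alpha} L^{1-\alpha},$$

which has exactly the form of a Cobb-Douglas function with a ‘technical progress’ shift term $A(t) = B\, w^{1-\alpha} r^{\alpha}$, whatever the true underlying technology. Any data exhibiting roughly constant shares will fit it; the good fit tells us nothing about production.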

As I noted in my letter to the FT, Robinson’s neoclassical opponents conceded the argument on capital reversing and reswitching: Kay’s assertion that Solow ‘won easily’ is inaccurate. In purely logical terms Robinson was the victor, as Samuelson acknowledged when he wrote, ‘If all this causes headaches for those nostalgic for the parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life.’

What matters, as Geoff Harcourt correctly points out, is that the conceptual implications of the debates remain unresolved. Neoclassical authors, such as Cohen and Harcourt’s co-editor Christopher Bliss, argue that the logical results, while correct in themselves, do not undermine marginalist theory to the extent claimed by (some) critics. In particular, Bliss argues that the focus on capital aggregation is mistaken. One may instead, for example, drop Solow’s assumption that capital goods and consumer goods are interchangeable: ‘Allowing capital to be different from other output, particularly consumption, alters conclusions radically.’ (p. xviii). Developing models on the basis of disaggregated optimising agents will likewise produce very different, and less deterministic, results.

But Bliss also notes that this wasn’t the direction that macroeconomics chose. Instead, ‘Interest has shifted from general equilibrium style (high-dimension) models to simple, mainly one-good models … the representative agent is now usually the model’s driver.’ Solow himself characterised this trend as ‘dumb and dumber in macroeconomics’. As the great David Laidler – like Robinson, no Marxist – observes, the now unquestioned use of representative agents and aggregate production functions means that ‘largely undiscussed problems of capital theory still plague much modern macroeconomics’.

It should by now be clear that the claim of ‘mathiness’ is a bizarre one to level against Joan Robinson: she won a theoretical debate at the level of pure logic, even if the broader implications remain controversial. Why then does Paul Romer single her out as the villain of the piece? – ‘Where would we be now if Solow’s math had been swamped by Joan Robinson’s mathiness?’

One can only speculate, but it may not be coincidence that Romer has spent his career constructing models based on aggregate production functions – the so-called ‘neoclassical endogenous growth models’ that Ed Balls once claimed to be so enamoured with. Romer has repeatedly been tipped for the Nobel Prize, despite the fact that his work doesn’t appear to explain very much about the real world. In Krugman’s words, ‘too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.’ So much for those tight connections between theoretical and empirical claims.

So where does this leave macroeconomics? Bliss is correct that the results of the Controversy do not undermine the standard toolkit of methodological individualism: marginalism, optimisation and equilibrium. Robinson and her colleagues demonstrated that one specific tool in the box – the aggregate production function – suffers from deep internal logical flaws. But the Controversy is only one example of the tensions generated when one insists on modelling social structures as the outcome of adversarial interactions between individuals. Other examples include the Sonnenschein-Mantel-Debreu results and Arrow’s Impossibility Theorem.

As Ben Fine has pointed out, there are well-established results from the philosophy of mathematics and science that suggest deep problems for those who insist on methodological individualism as the only way to understand social structures. Trying to conceptualise a phenomenon such as money on the basis of aggregation over self-interested individuals is a dead end. But economists are not interested in philosophy or methodology. They no longer even enter into debates on the subject – instead, the laziest dismissals suffice.

But where does methodological individualism stop? What about language, for example? Can this be explained as a way for self-interested individuals to overcome transaction costs? The result of this myopia, Fine argues, is that economists ‘work with notions of mathematics and science that have been rejected by mathematicians and scientists themselves for a hundred years and more.’

This brings us back to ‘mathiness’. DeLong characterises this as ‘restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.’ What is very rarely discussed, however, is the insistence that microfounded models are the only acceptable form of economic theory. But the New Classical revolution in economics, which ushered in the era of microfounded macroeconomics, was itself a political project. As its leading light, Nobel prize-winner Robert Lucas, put it, ‘If these developments succeed, the term “macroeconomic” will simply disappear from use and the modifier “micro” will become superfluous.’ The statement is not greatly different in intent and meaning from Thatcher’s famous claim that ‘there is no such thing as society’. Lucas never tried particularly hard to hide his political leanings: in 2004 he declared, ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ (He also declared, five years before the crisis of 2008, that the ‘central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.’)

As a result of Lucas’ revolution, the academic economics profession purged those who dared to argue that some economic phenomena cannot be explained by competition between selfish individuals. Abstract microfounded theory replaced empirically-based macroeconomic models, despite generating results which are of little relevance for real-world policy-making. As Simon Wren-Lewis puts it, ‘students are taught that [non-microfounded] methods of analysing the economy are fatally flawed, and that simulating DSGE models is the only proper way of doing policy analysis. This is simply wrong.’

I leave the reader to decide where the line between science and politics should be drawn.

UK Economy is more unbalanced than ever

This article is taken from EREP’s 2016 budget report.

At the end of February, Chancellor George Osborne made an admission: ‘the economy is smaller than we thought in Britain’. The tone has changed since November when, following the unexpected discovery of a spare £27bn by the OBR, the Chancellor triumphantly declared, ‘our long term economic plan is working.’ As it turns out, the UK economy is around one per cent, or £18bn, smaller than the OBR predicted, leaving the Chancellor with at least £5bn in missing tax revenues this year alone, and more in future years (estimated at £9bn per year by the Institute for Fiscal Studies). There is no chance he will keep to his own misguided fiscal rule.
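The orders of magnitude are easy to check (a back-of-envelope calculation using the figures above):

$$\text{GDP} \approx \frac{£18\text{bn}}{1\%} = £1.8\text{tn}, \qquad \frac{£5\text{bn}}{£18\text{bn}} \approx 28\%.$$

An £18bn shortfall equal to one per cent of GDP implies an economy of roughly £1.8tn, which matches the actual size of UK output, and £5bn of lost revenue on £18bn of missing nominal GDP corresponds to a tax take of a little under 30 pence in the pound – broadly in line with the economy-wide ratio of tax to GDP.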

EREP have consistently argued that the supply-side optimism implicit in the OBR forecasts was unwarranted. We were right. Economic indicators across the board have deteriorated significantly since the November forecast. Even the service sector, the single remaining engine of the UK’s imbalanced economy, is now showing signs of mechanical failure. The Markit UK services PMI – a key indicator of activity in the services sector – fell sharply in February. There is no chance that UK growth will be 2.4% in 2016, as claimed by the Chancellor in November.

Osborne’s tax shortfall is the result of much lower than predicted wages and prices. The broadest measure of inflation, the GDP deflator, has fallen to zero, while wage growth has slowed substantially to around two per cent – the OBR had predicted wage growth of three to four per cent over the rest of this parliament.

Despite weakening wage growth, retail sales have remained strong: the most recent figures showed year-on-year spending increases in excess of two per cent. Retail sales strength has been driven in part by lower prices resulting from the sharp decline in oil prices. But while households in other major economies largely saved the windfall from lower oil prices, those in the UK spent it, and more. The UK household savings ratio, at 4.4% of disposable income, is now the lowest on record.

[Chart: UK household savings ratio]

And despite weakening wage growth, the UK economy is now entirely reliant on continued household consumption spending. Contrary to Osborne’s claim that growth ‘is more balanced than in the past’, the UK trade deficit is a drag on economic activity, and business investment – which only recently regained pre-crisis levels – fell sharply in February.

How have UK households increased spending despite wages remaining well below pre-crisis levels? Unsecured consumer credit is growing at around nine per cent per annum – the fastest rate since 2005. At over 140% of disposable income, UK household debt is higher than in the US, Japan or the largest European nations. Even the optimistic and now-discredited OBR forecasts predicted the household debt-to-income ratio would need to rise to 160% by 2020 for growth to be maintained and the deficit eliminated.
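A rough calculation shows how quickly those growth rates compound (this treats the whole debt stock as if it grew at the unsecured-credit rate, which overstates the pace, but the direction is clear). With debt growing at nine per cent and incomes at around two per cent,

$$\frac{D_t}{Y_t} = \frac{D_0}{Y_0}\left(\frac{1.09}{1.02}\right)^{t}, \qquad 140\% \times \left(\frac{1.09}{1.02}\right)^{2} \approx 160\%,$$

so the debt-to-income ratio rises by roughly seven per cent a year and would pass the OBR’s projected 160% within a couple of years, rather than by 2020.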

A recent report by the Money Advice Service – an independent body set up by the government – reports that 8.2 million adults in the UK – one in six of the population – are over-indebted. Among poorer regions, such as the Welsh valleys, the figure rises to one in four. The problem is particularly acute among young people, those in rented accommodation and those with children.

It is exactly these groups – working families and young people – whom the Chancellor will target in the next round of austerity. In the previous Parliament, austerity was targeted at the most marginalised: the sick, the disabled and the unemployed. Since these people have the least voice in society, they are unable to put up resistance. Cutting the incomes of working families will be more difficult, as Osborne’s U-turn on tax credits shows.

By reducing working people’s incomes, Osborne is attempting to push the burden of debt onto the household sector. The strategy will fail – without wage growth, consumer spending will eventually be constrained, dampening growth and pushing Osborne’s deficit-reduction strategy yet further off track. That deficit reduction is not really the ultimate aim of Osborne’s strategy is made plain by his intention to continue cutting tax for those on higher incomes.

There is no long-term economic plan; Osborne’s strategy is one of redistribution by taking from those who can least afford it. As the latest figures show, his strategy has backfired.

 

The report’s authors include:

Ann Pettifor & Jeremy Smith on “The British economy is even “smaller” than the Chancellor asserts”

John Weeks on the Chancellor’s “Growing record in fiscal mismanagement”

Jo Michell on “A weakening economy, reliant on consumption and debt”

Graham Gudgin & Ken Coutts on “A history of missing fiscal targets”

Richard Murphy on “Tax in the 2016 budget”

Information on EREP is available here.