
The Fable of the Ants, or Why the Representative Agent is No Such Thing


Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making of an entire class of agents (for example, consumers or firms) is reduced to that of a single ‘representative’ individual, whose behaviour is taken to be identical to that assumed to characterise actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base. This alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
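These two rules – probabilistic route choice weighted by pheromone, and deposits that favour shorter routes – are simple enough to capture in a few lines of code. The sketch below is a toy illustration with made-up parameters, not a faithful model of real ants:

```python
import random

# Toy ant-colony food search: two routes of different length between
# nest and food. Each ant follows two rules: pick a route with
# probability proportional to its pheromone, then deposit pheromone
# in inverse proportion to the route's length (shorter trips are
# completed more often, so short routes are reinforced faster).
LENGTHS = {"short": 1.0, "long": 2.0}    # assumed route lengths
EVAPORATION = 0.05                       # pheromone decay per period
pheromone = {"short": 1.0, "long": 1.0}  # colony starts with no information

random.seed(1)
for period in range(200):
    for ant in range(10):                # ten foraging trips per period
        total = pheromone["short"] + pheromone["long"]
        route = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[route] += 1.0 / LENGTHS[route]
    for route in pheromone:              # evaporation forgets stale trails
        pheromone[route] *= 1.0 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"share of pheromone on the short route: {share:.2f}")  # close to 1
```

No individual ant ever compares the two routes; the colony-level ‘preference’ for the shorter one emerges entirely from reinforcement and evaporation.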

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have long known that the opposite is the case. Formal proofs demonstrate that a population of agents, each represented by a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’ critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models need not be invariant to changes in government policy and therefore should not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimisation.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals allocate additional income between normal and inferior goods in fixed proportions, regardless of their income level – a supermarket checkout worker will divide each extra pound between baked beans and Swiss watches in unchanged proportions after she wins the lottery.

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
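For reference, the standard constant relative risk aversion (CRRA) utility function is

$$u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad \gamma > 0,\ \gamma \neq 1, \qquad\text{so that}\qquad -\frac{c\,u''(c)}{u'(c)} = \gamma .$$

Measured relative risk aversion is the constant $\gamma$ at every level of consumption – precisely the restriction described above.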

Further, and unfortunately for Lucas, income distribution turns out to matter: if all individuals do not behave identically, then as income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but income distribution is not allowed to change.
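A two-line sketch (notation mine) shows why. Let each individual’s demand for a good be linear in her income $m_i$:

$$x_i = a_i + b_i m_i \qquad\Rightarrow\qquad X \equiv \sum_i x_i = \sum_i a_i + \sum_i b_i m_i .$$

Only with identical slopes ($b_i = b$ for all $i$) does this collapse to $X = A + bM$, a function of aggregate income $M = \sum_i m_i$ alone. If the $b_i$ differ, transferring a pound from a low-$b$ individual to a high-$b$ individual changes $X$ while leaving $M$ untouched – so no demand function defined over aggregate income can represent the population.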

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts: the composition of consumption shifts as income increases. It is hard to believe such restrictive special cases provide a sufficient basis on which to construct macro models that can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as a whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but ones which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that old-fashioned Keynesian structural macro behavioural assumptions can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. As with the ant colony, it can be shown that, under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.


On ‘heterodox’ macroeconomics


Noah Smith has a new post on the failure of mainstream macroeconomics and what he perceives as the lack of ‘heterodox’ alternatives. Noah is correct about the failure of mainstream macroeconomics, particularly the dominant DSGE modelling approach. This failure is increasingly – if reluctantly – accepted within the economics discipline. As Brad DeLong puts it, DSGE macro has ‘… proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.’

I disagree with Noah, however, when he argues that ‘heterodox’ economics has little to offer as an alternative to the failed mainstream.

The term ‘heterodox economics’ is a difficult one. I dislike it and resisted adopting it for some time: I would much rather be ‘an economist’ than ‘a heterodox economist’. But it is clear that unless you accept – pretty much without criticism – the assumptions and methodology of the mainstream, you will not be accepted as ‘an economist’. This was not the case when Joan Robinson debated with Solow and Samuelson, or Kaldor debated with Hayek. But it is the case today.

The problem with ‘heterodox economics’ is that it is a self-definition in terms of the other. It says ‘we are not them’ – but says nothing about what we are. This is because it includes everything outside of the mainstream, from reasonably well-defined and coherent schools of thought such as Post Keynesians, Marxists and Austrians, to much more nebulous and ill-defined discontents of all hues. To put it bluntly, a broad definition of ‘people who disagree with mainstream economics’ is going to include a lot of cranks. People will place the boundary between serious non-mainstream economists and cranks differently, depending on their perspective.

Another problem is that these schools of thought have fundamental differences. Aside from rejecting standard neoclassical economics, the Marxists and the Austrians don’t have a great deal in common.

Noah seems to define heterodox economics as ‘non-mathematical’ economics. This is inaccurate. There is much formal modelling outside of the mainstream. The difference lies with the starting assumptions. Mainstream macro starts from the assumption of inter-temporal optimisation and a system which returns to the supply-side-determined full-employment equilibrium in the long run. Non-mainstream economists reject these in favour of assumptions which they regard as more empirically plausible.

It is true that there are some heterodox economists, for example Tony Lawson and Ben Fine, who take the position that maths is an inappropriate tool for economics and should be rejected. (Incidentally, both were originally mathematicians.) This is a minority position, and one I disagree with. The view is influential, however. The highest-ranked heterodox economics journal, the Cambridge Journal of Economics, has recently changed its editorial policy to explicitly discourage the use of mathematics. This is a serious mistake in my opinion.

So Noah’s claim about mathematics is a straw man. He implicitly acknowledges this by discussing one class of mathematical Post Keynesian models, the so-called ‘stock-flow consistent’ models (SFC). He rightly notes that the name is confusing – any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.

SFC refers to a narrower set of models which incorporate detailed modelling of the ‘plumbing’ of the financial system alongside traditional macro Keynesian behavioural assumptions – and reject the standard inter-temporal optimising assumptions of DSGE macro. Marc Lavoie, who originally came up with the name, admits it is misleading and, with hindsight, a more appropriate name should have been chosen. But names stick, so SFC joins a long tradition of badly-named concepts in economics such as ‘real business cycles’ and ‘rational expectations’.
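To give a flavour of the formalism, below is a minimal sketch in the spirit of the simplest model (SIM) in the Godley and Lavoie textbook. The parameter values are illustrative, not calibrated:

```python
# Minimal stock-flow consistent model in the spirit of Godley and
# Lavoie's simplest textbook model (SIM); parameters illustrative.
# Government injects spending G and taxes income at rate THETA;
# households consume out of disposable income and accumulated money
# balances H. Every flow comes from somewhere and goes somewhere, so
# the household stock of money is the mirror image of government debt.
ALPHA1, ALPHA2 = 0.6, 0.4   # consumption out of income and out of wealth
THETA = 0.2                 # tax rate
G = 20.0                    # government spending

H = 0.0                     # household money balances
for t in range(100):
    # Solve the within-period system Y = C + G with
    # C = ALPHA1*(1-THETA)*Y + ALPHA2*H:
    Y = (G + ALPHA2 * H) / (1 - ALPHA1 * (1 - THETA))
    YD = (1 - THETA) * Y            # disposable income
    C = ALPHA1 * YD + ALPHA2 * H    # consumption
    H += YD - C                     # saving accumulates as money

print(f"Y -> G/THETA = {G/THETA:.1f}; after 100 periods Y = {Y:.1f}, H = {H:.1f}")
```

The model converges to a steady state in which output settles at G/THETA, and the interest of the approach lies in tracing the stocks – money, debt, wealth – that build up along the way.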

Noah claims that ‘vague ideas can’t be tested against the data and rejected’.  While the characterisation of all heterodox economics as ‘vague ideas’ is another straw man, the falsifiability point is important. As Noah points out, ‘One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models.’ He also notes that big SFC models have so many parameters that they are essentially impossible to fit to the data.

This raises an important question about what we want economic models to do, and what the criteria should be for acceptance or rejection. The belief that models should provide quantitative predictions of the future has been much too strongly held. Economists need to come to terms with the reality that the future is unknowable – no model will reliably predict the future. For a while, DSGE models seemed to do a reasonable job. With hindsight, this was largely because enough degrees of freedom were added when converting them to econometric equations that they could do a reasonably good job of projecting past trends forward, along with some mean reversion.  This predictive power collapsed totally with the crisis of 2008.

Models, then, should be seen as ways to gain insight into the mechanisms at work and to test the implications of combining assumptions. I agree with Narayana Kocherlakota when he argues that we need to return to smaller ‘toy models’ to think through economic mechanisms. Larger econometrically estimated models are useful for sketching out future scenarios – but the predictive power assigned to such models needs to be downplayed.

So the question is then: what are the correct assumptions to make when constructing formal macro models? Noah argues that Post Keynesian models ‘don’t take human behaviour into account – the equations are typically all in terms of macroeconomic aggregates – there’s a good chance that the models could fail if policy changes make consumers and companies act differently than expected’.

This is of course Robert Lucas’s critique of structural econometric modelling. This critique was a key element in the ‘microfoundations revolution’ which ushered in the so-called Real Business Cycle models which form the core of the disastrous DSGE research programme.

The critique is misguided, however. Aggregate behavioural relationships do have a basis in individual behaviour. As Bob Solow puts it:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. … Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual – and, to some extent, market-behavior.

In many ways, aggregate behavioural specifications can make a stronger claim to be based in microeconomic behaviour than the representative agent DSGE models which came to dominate mainstream macro. (I will expand on this point in a separate blog.)

Mainstream macro has reached the point that only two extremes are admitted: formal, internally consistent DSGE models, and atheoretical testing of the data using VAR models. Anything in between – such as structural econometric modelling – is rejected. As Simon Wren-Lewis has argued, this theoretical extremism cannot be justified.

Crucial issues and ideas emphasised by heterodox economists were rejected for decades by the mainstream while it was in thrall to representative-agent DSGE models. These ideas included the role of income distribution, the importance of money, credit and financial structure, the possibility of long-term stagnation due to demand-side shortfalls, the inadequacy of reliance on monetary policy alone for demand management, and the possibility of demand affecting the supply side. All of these ideas are, to a greater or lesser extent, now gradually becoming accepted and absorbed by the mainstream – in many cases with no acknowledgement of the traditions which continued to discuss and study them even as the mainstream dismissed them.

Does this mean that there is a fully-fledged ‘heterodox economics’ waiting in the wings, ready to take over from mainstream macro? It depends what is meant – is there a complete model of the economy sitting in a computer waiting for someone to turn it on? No – but there never will be, either within the mainstream or outside it. But Lavoie argues:

if by any bad luck neoclassical economics were to disappear completely from the surface of the Earth, this would leave economics utterly unaffected because heterodox economics has its own agenda, or agendas, and its own methodological approaches and models.

I think this conclusion is too strong – partly because I don’t think the boundary between neoclassical economics and heterodox economics is as clear as some claim. But it highlights the rich tradition of ideas and models outside of the mainstream – many of which have stood the test of time much better than DSGE macro. It is time these ideas were acknowledged.

Economics: science or politics? A reply to Kay and Romer

Romer’s article on ‘mathiness’ triggered a debate in the economics blogs last year. I didn’t pay a great deal of attention at the time; that economists were using relatively trivial yet abstruse mathematics to disguise their political leanings didn’t seem a particularly penetrating insight.

Later in the year, I read a comment piece by John Kay on the same subject in the Financial Times. Kay’s article, published under the headline ‘Economists should keep to the facts, not feelings’, was sufficiently cavalier with the facts that I felt compelled to respond. I was not the only one – Geoff Harcourt wrote a letter supporting my defence of Joan Robinson and correcting Kay’s inaccurate description of her as a Marxist.

After writing the letter, I found myself wondering why a serious writer like Kay would publish such carelessly inaccurate statements. Following a suggestion from Matheus Grasselli, I turned to Romer’s original paper:

Economists usually stick to science. Robert Solow was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson was engaged in academic politics when she waged her campaign against capital and the aggregate production function …

Solow’s mathematical theory of growth mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly. The tight connection between the word and the equations gave the word a precise meaning that facilitated equally tight connections between theoretical and empirical claims. Gary Becker’s mathematical theory of wages gave the words ‘human capital’ the same precision …

Once again, the facts appear to have fallen by the wayside. The issue at the heart of the debates involving Joan Robinson, Robert Solow and others is whether it is valid to  represent a complex macroeconomic system (such as a country) with a single ‘aggregate’ production function. Solow had been working on the assumption that the macroeconomic system could be represented by the same microeconomic mathematical function used to model individual firms. In particular, Solow and his neoclassical colleagues assumed that a key property of the microeconomic version – that labour will be smoothly substituted for capital as the rate of interest rises – would also hold at the aggregate level. It would then be reasonable to produce simple macroeconomic models by assuming a single production function for the whole economy, as Solow did in his famous growth model.

Joan Robinson and her UK Cambridge colleagues showed this was not true. They demonstrated cases (capital reversing and reswitching) which contradicted the neoclassical conclusions about the relationship between the choice of technique and the rate of interest. One may accept the assumption that individual firms can be represented as neoclassical production functions, but concluding that the economy can then also be represented by such a function is a logical error.

One important reason is that the capital goods which enter production functions as inputs are not identical, but instead have specific properties. These differences make it all but impossible to find a way to measure the ‘size’ of any collection of capital goods. Further, in Solow’s model, the distinction between capital goods and consumption goods is entirely dissolved – the production function simply generates ‘output’ which may either be consumed or accumulated. What Robinson demonstrated was that it was impossible to accurately measure capital independently of prices and income distribution. But since, in an aggregate production function, income distribution is determined by marginal productivity – which in turn depends on quantities – it is impossible to avoid arguing in a circle. Romer’s assertion of a ‘tight connection between the word and the equations’ is a straightforward misrepresentation of the facts.

The assertion of ‘equally tight connections between theoretical and empirical claims’ is likewise misplaced. As Anwar Shaikh showed in 1974, it is straightforward to demonstrate that Solow’s ‘evidence’ for the aggregate production function is no such thing. In fact, what Solow and others were testing turned out to be national accounting identities. Shaikh demonstrated that, as long as labour and capital shares are roughly constant – the ‘Kaldor facts’ – then any structure of production will produce empirical results consistent with an aggregate Cobb-Douglas production function. The aggregate production function is therefore ‘not even wrong: it is not a behavioral relationship capable of being statistically refuted’.
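Shaikh’s argument can be compressed into a few lines (a sketch, with hats denoting growth rates). National income accounting gives the identity $Y \equiv wL + rK$, where $w$ is the wage rate and $r$ the profit rate. Log-differentiating, with $\alpha \equiv wL/Y$ the labour share:

$$\hat{Y} = \alpha\,(\hat{w} + \hat{L}) + (1-\alpha)\,(\hat{r} + \hat{K}) .$$

If the shares are roughly constant, this integrates to

$$Y = B\,\left(w^{\alpha} r^{1-\alpha}\right) L^{\alpha} K^{1-\alpha},$$

which is observationally indistinguishable from a Cobb-Douglas function $Y = A L^{\alpha} K^{1-\alpha}$ with ‘total factor productivity’ $A \equiv B\,w^{\alpha} r^{1-\alpha}$. Data satisfying the identity and constant shares will therefore ‘fit’ an aggregate Cobb-Douglas regardless of the underlying technology.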

As I noted in my letter to the FT, Robinson’s neoclassical opponents conceded the argument on capital reversing and reswitching: Kay’s assertion that Solow ‘won easily’ is inaccurate. In purely logical terms Robinson was the victor, as Samuelson acknowledged when he wrote, ‘If all this causes headaches for those nostalgic for the parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life.’

What matters, as Geoff Harcourt correctly points out, is that the conceptual implications of the debates remain unresolved. Neoclassical authors, such as Cohen and Harcourt’s co-editor, Christopher Bliss, argue that the logical results,  while correct in themselves, do not undermine marginalist theory to the extent claimed by (some) critics. In particular, he argues, the focus on capital aggregation is mistaken. One may instead, for example, drop Solow’s assumption that capital goods and consumer goods are interchangeable: ‘Allowing capital to be different from other output, particularly consumption, alters conclusions radically.’ (p. xviii). Developing models on the basis of disaggregated optimising agents will likewise produce very different, and less deterministic, results.

But Bliss also notes that this wasn’t the direction that macroeconomics chose. Instead, ‘Interest has shifted from general equilibrium style (high-dimension) models to simple, mainly one-good models … the representative agent is now usually the model’s driver.’ Solow himself characterised this trend as ‘dumb and dumber in macroeconomics’. As the great David Laidler – like Robinson, no Marxist –  observes, the now unquestioned use of representative agents and aggregate production functions means that ‘largely undiscussed problems of capital theory still plague much modern macroeconomics’.

It should by now be clear that the claim of ‘mathiness’ is a bizarre one to level against Joan Robinson: she won a theoretical debate at the level of pure logic, even if the broader implications remain controversial. Why then does Paul Romer single her out as the villain of the piece? – ‘Where would we be now if Solow’s math had been swamped by Joan Robinson’s mathiness?’

One can only speculate, but it may not be coincidence that Romer has spent his career constructing models based on aggregate production functions – the so-called ‘neoclassical endogenous growth models’ that Ed Balls once claimed to be so enamoured with. Romer has repeatedly been tipped for the Nobel Prize, despite the fact that his work doesn’t appear to explain very much about the real world. In Krugman’s words, ‘too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.’ So much for those tight connections between theoretical and empirical claims.

So where does this leave macroeconomics? Bliss is correct that the results of the Controversy do not undermine the standard toolkit of methodological individualism: marginalism, optimisation and equilibrium. Robinson and her colleagues demonstrated that one specific tool in the box – the aggregate production function – suffers from deep internal logical flaws. But the Controversy is only one example of the tensions generated when one insists on modelling social structures as the outcome of adversarial interactions between  individuals. Other examples include the Sonnenschein-Mantel-Debreu results and Arrow’s Impossibility Theorem.

As Ben Fine has pointed out, there are well-established results from the philosophy of mathematics and science that suggest deep problems for those who insist on methodological individualism as the only way to understand social structures. Trying to conceptualise a phenomenon such as money on the basis of aggregation over self-interested individuals is a dead end. But economists are not interested in philosophy or methodology. They no longer even enter into debates on the subject – instead, the laziest dismissals suffice.

But where does methodological individualism stop? What about language, for example? Can this be explained as a way for self-interested individuals to overcome transaction costs? The result of this myopia, Fine argues, is that economists ‘work with notions of mathematics and science that have been rejected by mathematicians and scientists themselves for a hundred years and more.’

This brings us back to ‘mathiness’. DeLong characterises this as ‘restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.’ What is very rarely discussed, however, is the insistence that microfounded models are the only acceptable form of economic theory. But the New Classical revolution in economics, which ushered in the era of microfounded macroeconomics, was itself a political project. As its leading light, Nobel-prize winner Robert Lucas, put it, ‘If these developments succeed, the term “macroeconomic” will simply disappear from use and the modifier “micro” will become superfluous.’ The statement is not greatly different in intent and meaning from Thatcher’s famous claim that ‘there is no such thing as society’. Lucas never tried particularly hard to hide his political leanings: in 2004 he declared, ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ (He also declared, five years before the crisis of 2008, that the ‘central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.’)

As a result of Lucas’ revolution, the academic economics profession purged those who dared to argue that some economic phenomena cannot be explained by competition between selfish individuals. Abstract microfounded theory replaced empirically-based macroeconomic models, despite generating results which are of little relevance for real-world policy-making. As Simon Wren-Lewis puts it, ‘students are taught that [non-microfounded] methods of analysing the economy are fatally flawed, and that simulating DSGE models is the only proper way of doing policy analysis. This is simply wrong.’

I leave the reader to decide where the line between science and politics should be drawn.

2015: Private Debt and the UK Housing Market

This report is taken from the EREP’s Review of the UK Economy in 2015.

In his 2015 Autumn Statement, Chancellor George Osborne gave a bravura performance. He congratulated himself on record growth and employment, falling public debt, surging business investment and a narrowing trade deficit. He announced projections of continuous growth and falling public debt over the next parliament.

While much of this was a straightforward misrepresentation of the facts – capital investment has yet to recover from the 2008 crisis and the current account deficit continues to widen – other sound bites came courtesy of the Office for Budget Responsibility. The OBR delivered the Chancellor an early Christmas present in the form of a set of revised projections showing better-than-expected public finances over the next five years.

When, previously, the OBR inconveniently delivered negative revisions, the Chancellor responded by pushing back the date he claims he will achieve a budget surplus. In response to the OBR’s gift, however, he chose instead to spend the windfall.  This is a risky strategy because any negative shock to the economy means he will miss his current fiscal targets – targets he has already missed repeatedly since coming to office.

As it turns out, these negative shocks have materialised rather quickly. Since the Chancellor made his statement a month ago, UK GDP growth has been revised down, the trade deficit has widened and estimates of borrowing for the current year have increased.

[Figure: OBR current account forecasts]

In reality, the OBR projections never looked plausible. The UK’s current account deficit – the amount borrowed each year from the rest of the world – is at an all-time high of around 5% of GDP. Every six months for the last three years, the OBR forecast that the deficit would start to close within a year; every time they were proved wrong (see figure above). Their current assertion – that the trend will be broken in 2016 and the deficit will steadily narrow to around 2% of GDP in 2020 – must be taken with a large pinch of salt.

The current account deficit measures the combined overseas borrowing of the UK public and private sectors. In the unlikely event that George Osborne was to achieve his stated aim of a budget surplus, the whole of this foreign borrowing would be accounted for by the private sector. This is exactly what the OBR is projecting. Specifically, they predict that the household sector will run a deficit of around 2% per year for the next five years. They note that “this persistent and relatively large household deficit would be unprecedented”.

This projection has been the basis of recent stories in the press which have declared that the Chancellor has set the economy on a path to almost-certain financial meltdown within the current parliament. This is too simplistic an analysis. Financial imbalances can persist for a long time. The last UK financial crisis originated not in the UK lending markets but in UK banks’ exposure to overseas lending.

But the Chancellor’s strategy entails serious financial risks. Even though there is no real chance of achieving a surplus by 2020, further cuts to government spending will squeeze spending out of the economy, placing ever more of the burden on household consumption spending to maintain growth.

The figure below shows the annual growth in lending to households. While total credit growth remains subdued, unsecured lending has, in the words of Andy Haldane, chief economist at the Bank of England, been “picking up at a rate of knots”.

[Figure: annual growth in lending to UK households]

Moderate growth in the mortgage market may conceal deeper problems: household debt-to-income ratios have fallen since the crisis but, at around 140%, remain high both in historical terms and compared to other advanced nations. The majority of new mortgage lending since 2008 has been extended to buy-to-let landlords. These speculative buyers now face the prospect of rising interest rates and tax changes that will take a large chunk out of their property income. Many non-buy-to-let borrowers are badly exposed: a sixth of mortgage debt is held by those who have less than £200 a month left after spending on essentials.

The Financial Policy Committee has noted that these trends “… could pose direct risks to the resilience of the UK banking system, and indirect risks via its impact on economic stability”.

What is often left out of the more apocalyptic visions of a coming credit meltdown is that underlying all this is an unprecedented housing crisis in which an entire generation are locked out of home ownership. Instead of tackling this crisis, Osborne is using the housing market as a casino in the hope of keeping economic growth on track during another five years of austerity. It is a high-risk strategy. His luck may soon run out.

The report’s authors include:

John Weeks on fiscal policy

Ann Pettifor on monetary policy

Richard Murphy on taxation

Özlem Onaran on inequality and wage stagnation

Jeremy Smith on labour productivity

Andrew Simms on climate change and energy

Jo Michell on private debt

The full report can be downloaded here.

Information on EREP is available here.

Happy Christmas from the Office for Budget Responsibility


The sectoral balances approach to economic forecasting has come under scrutiny recently. It is certainly the case that when used carelessly, projections based on accounting identities have the potential to be either meaningless or misleading. This will be the case if accounting identities are mistakenly taken to imply causal relationships, if projections are presented without a clear statement of the assumptions about what drives the system or if changes taking place in ‘invisible’ variables such as the rate of growth of GDP are not identified (balances are usually presented as percentages of GDP).

Used with care, however (or luck, depending on your perspective), the approach is not without its merits – as I have argued previously. If nothing else, the fact that in a closed macroeconomic system lending must equal borrowing imposes logical restrictions on any projection of the future paths of sectoral borrowing.
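Formally, with $S$ private saving, $I$ private investment, $T$ taxes, $G$ government spending, and $X$ and $M$ exports and imports, the national accounts imply

$$(S - I) + (T - G) + (M - X) \equiv 0 ,$$

so the private, government and foreign financial balances must sum to zero. The OBR’s four-balance presentation simply splits the private balance into household and corporate components; fix any three balances and the fourth follows as a residual.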

Which brings us to the Chancellor’s Autumn Statement and the OBR’s rather helpful projections. As Duncan Weldon notes, the OBR are likely to receive a rather warmly written card from the Chancellor’s office this Christmas. While it is true that the OBR have, in the past, been less than helpful to the Chancellor, one can’t help but wonder about the justification for announcing the OBR projections at the same time as the Chancellor’s statements. Why are the OBR projections not made known to the public at the same time that they are made available to the Chancellor?

But back to sectoral balances. The model used by the OBR produces projections which comply with sectoral balance accounting identities. Four are used: those of the public sector, the household sector, the corporate sector and the rest of the world. The most closely watched is of course the public sector balance. The headline result of the OBR forecasts is that the public sector will run a surplus by 2019. What has so far received less attention (at least since Frances Coppola examined the projections from the March 2015 OBR forecasts) is the implication of this for the other three balances. The most recent OBR projections are shown below.

[Figure 1: OBR sectoral balance projections, November 2015]

Since the government is projected to run a small surplus from mid-2019, the other three sectors must collectively run a deficit of equal size. The OBR projects that the current account deficit will fall from its current level of around five per cent of GDP to around two per cent of GDP. The UK private sector must be in deficit. Interesting details lie in both the distribution of this deficit between the household and corporate sectors, and in the changes in figures since the last OBR reports in March and July.

In order to show how the numbers have changed since the previous forecasts, I have collected the data series from all three releases into individual charts.

The OBR series from these three releases for the public sector financial balance are shown below. Other than postponing the date at which the government achieves a surplus (and some revisions to the historical data) there is little difference between the three releases.

[Figure 2: public sector financial balance]

Changes to the projections for the current account deficit are more significant. The latest projections include improvements in the projected deficit of between 0.5% and 1% of GDP, compared with the July predictions. With the current account deficit at record levels in excess of 5% of GDP, I think it is fair to say the projections look optimistic. I note that in each of the three OBR series, the deficit starts to close in the first projected quarter. Put another way, the inflection point has been postponed three times out of three.

[Figure 3: rest of world financial balance (current account)]

Things start to get interesting when we turn to the corporate sector. Here the projections have changed rather more significantly. Whereas the previous two data series showed the corporate sector reversing its decade-long surplus in 2014 and finally returning to where many think the corporate sector should be – borrowing to invest – the new series contains significant revisions to the historical data. As it turns out, the corporate sector has remained in surplus, lending one per cent of GDP in Q2 2015. The corporate sector is not now projected to return to deficit until Q3 2018.

[Figure 4: corporate sector financial balance]

Since the net financial balance for any sector is the difference between ex post saving – profits in the case of the corporate sector – and investment, these revisions imply either falling corporate investment, rising profits, or both.

The data series for corporate investment are shown below. The historical data have been revised down significantly. Investment in Q2 2015 is 1% of GDP lower than previously recorded. (This is hard to square with Osborne’s statement that ‘business investment has grown more than twice as fast as consumption’.) The reduction compared to previous forecasts widens in the projection out to 2020. Nonetheless, it is hard to escape the conclusion that the projections are extremely optimistic. By 2020, business investment is expected to reach twelve per cent of GDP, higher than any year back to 1980.

[Figure 5: business investment]

What of business profits? These are shown in the table below, taken from the OBR report. It seems that corporate profit grew at 10% year-on-year in 2014-15, despite GDP growth of around 2.5%. While projected growth rates decline, corporate profit is expected to grow at over 4% annually in every year of the projection out to 2021 (in a context of steady 2.5% GDP growth). There is not much sign of Goodhart–Nangle in these projections.

[Table: OBR corporate profit projections]

So, to recap: by 2020 we have government running a surplus just under 1% of GDP, a current account deficit of 2% of GDP and a corporate sector deficit around 1% of GDP. Those with a facility for mental arithmetic will have already arrived at the punchline – the household sector will be running a deficit of around 2% of GDP. In fact, given data revisions, the household sector appears to be already running a deficit close to 2% of GDP – a deficit which is projected to remain until 2021 (see figure below).
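Spelling out that arithmetic, with surpluses positive and everything in per cent of GDP (a 2% current account deficit is a 2% rest-of-world surplus):

$$\underbrace{1}_{\text{government}} \;+\; \underbrace{(-1)}_{\text{corporate}} \;+\; \underbrace{2}_{\text{rest of world}} \;+\; h \;=\; 0 \qquad\Rightarrow\qquad h = -2 ,$$

where $h$ is the household balance.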

[Figure 7: household sector financial balance]

As a comparison, note that in the period preceding the 2008 crisis, the household sector ran a deficit of not much over 1% of GDP, and for a shorter period than currently projected.

The OBR has this to say on its projections:

Recent data revisions have increased the size of the household deficit in 2014 and we expect little change in the household net position over the forecast period, with gradual increases in household saving offset by ongoing growth of household investment. Available historical data suggest that this persistent and relatively large household deficit would be unprecedented. This may be consistent with the unprecedented scale of the ongoing fiscal consolidation and market expectations for monetary policy to remain extremely accommodative over the next five years, but it also illustrates how the adjustment to fiscal consolidation assumed in our central forecast is subject to considerable uncertainty.  (p. 81)

Perhaps there is something to the sectoral balances approach after all. One can only wonder what Godley would make of all this.

Jo Michell

What if Reinhart and Rogoff had adopted a more Keynesian perspective?

Illustration by Ingram Pinn (Financial Times)

In two very influential papers, Reinhart and Rogoff (2010) and Reinhart et al. (2012) investigated the relationship between public debt and economic growth. By classifying the annual observations of their data set into public debt categories (low debt, medium debt, high debt, very high debt) and identifying public debt overhang episodes, they argued that higher public debt-to-GDP ratios are associated with lower economic growth. They also emphasised that this relationship is non-linear: the debt-growth correlation is weak below a 90 per cent debt-to-GDP threshold, but becomes much stronger above it. As is well known, these results were invoked by many policy makers in support of the austerity policies implemented in various countries in recent years.

In their widely discussed critique, Herndon et al. (2013, 2014) called the results of Reinhart and Rogoff into question. They pointed out three problems: (i) coding errors; (ii) selective exclusion of available data; and (iii) inappropriate weighting of summary statistics. They showed that once these problems are addressed, economic growth does not fall dramatically when the public debt-to-GDP ratio passes the 90 per cent threshold. Reinhart and Rogoff (2013) responded by acknowledging the coding errors in their estimations, but denied that their weighting method was inappropriate or that they had selectively excluded data. They presented corrected estimations of their own, according to which the negative relationship between growth and debt remains, but no longer strengthens above the 90 per cent threshold.

An interesting perspective on this debate is that the whole discussion about the relationship between public debt and economic growth would have been completely different if Reinhart and Rogoff had decided to focus on the adverse effects of low growth on public indebtedness rather than on the adverse effects of high public indebtedness on growth; in other words, if they had analysed their data set using a more Keynesian perspective that emphasises the role of automatic stabilisers and the direct favourable impact of higher GDP on the debt-to-GDP ratio. In a note that I recently published (Dafermos, 2015) I show what their results would be in that case. Using the same descriptive statistics techniques that Reinhart and Rogoff utilised in their papers, I classify the annual observations of their data set into economic growth categories (low growth, medium growth, high growth, very high growth) and show that the public debt-to-GDP ratio increases as economic growth declines. I also identify low growth episodes and show that in most countries these episodes are characterised by higher public indebtedness. Therefore, if Reinhart and Rogoff had decided to present their data in this way, the main implication of their analysis would have been that policy makers need to adopt growth policies in order to avoid high public indebtedness; and not that they need to focus on the reduction of public debt in order to avoid low growth.
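Mechanically, the reversal is trivial: the same summary statistics with the conditioning variable swapped. A sketch of both directions is below, using synthetic stand-in data – only the 90 per cent threshold comes from the original papers; the other bin edges and all variable names are illustrative:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a panel of annual country observations.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "debt_gdp": rng.uniform(0, 150, 500),  # public debt, % of GDP
    "growth": rng.normal(2.5, 2.0, 500),   # real GDP growth, %
})

# Reinhart-Rogoff direction: bucket observations by public debt,
# then report average growth within each debt category.
df["debt_cat"] = pd.cut(df["debt_gdp"], [0, 30, 60, 90, np.inf],
                        labels=["low", "medium", "high", "very high"])
print(df.groupby("debt_cat", observed=True)["growth"].mean())

# Dafermos direction: bucket the same observations by growth,
# then report average public debt within each growth category.
df["growth_cat"] = pd.cut(df["growth"], [-np.inf, 1, 3, 5, np.inf],
                          labels=["low", "medium", "high", "very high"])
print(df.groupby("growth_cat", observed=True)["debt_gdp"].mean())
```

Nothing in the data dictates which of the two tables to report; that choice is the researcher’s, and it framed the ensuing policy debate.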

Of course, Reinhart and Rogoff are careful about this issue: they clearly state that their analysis does not capture causality. However, by classifying their data set into public debt categories and identifying debt overhang episodes they unavoidably concentrated on the growth-reducing effects of high debt, relegating the debt-increasing effects of low growth to the sidelines. Conversely, had they adopted a more Keynesian perspective, they could have focused on the debt-increasing effects of low growth. In that case, their conclusions, which informed the policy debate, would have been completely different.

It is also important that the econometric research that followed the publication of their papers was substantially shaped by Reinhart and Rogoff’s decision to focus on the growth-reducing effects of high public debt: most researchers have examined the adverse effects of high debt on growth and not the other way round. Interestingly, the literature has not so far provided strong support for causality running from public debt to economic growth (see footnote 1 in my note). This implies that empirical research needs to investigate the debt-increasing effects of low growth in greater depth – as would probably have been the case if Reinhart and Rogoff had decided to analyse their dataset from a more Keynesian perspective, or if they had explicitly presented both ‘halves’ of the public debt-economic growth relationship.

Yannis Dafermos

Models, maths and macro: A defence of Godley

To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are however dismissed because—in a nutshell—there’s nothing better out there.

This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—in fact undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

In response to such a demand, I conceded ground by noting that the sectoral balances approach, most closely associated with the work of Wynne Godley, was one example of mathematical formalism in heterodox economics. I highlighted Godley’s famous 1999 paper in which, on the basis of simulations from a formal macro model, he produces a remarkably prescient prediction of the 2008 financial crisis:

…Moreover, if, per impossibile, the growth in net lending and the growth in money supply growth were to continue for another eight years, the implied indebtedness of the private sector would then be so extremely large that a sensational day of reckoning could then be at hand.

This prediction was based on simulations of the private sector debt-to-income ratio in a system of equations constructed around the well-known identity that the financial balances of the private, public and foreign sector must sum to zero. Godley’s assertion was that, at some point, the growth of private sector debt relative to income must come to an end, triggering a deflationary deleveraging cycle—and so it turned out.
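The mechanics behind that warning are easy to reproduce in a toy calculation (mine, not Godley’s model). A sector that persistently runs a financial deficit equal to a fraction d of its income sees its debt-to-income ratio ratchet up towards d(1+g)/g, where g is income growth – a ceiling that balloons as growth slows:

```python
# Toy illustration: a sector running a constant financial deficit of
# d (share of income) with income growing at rate g. The debt ratio
# follows ratio_t = ratio_{t-1}/(1+g) + d, converging to d*(1+g)/g.
d, g = 0.02, 0.025           # illustrative deficit share and growth rate
debt, income = 0.0, 100.0
for year in range(1, 51):
    income *= 1 + g
    debt += d * income       # each year's deficit adds to the debt stock
    if year % 10 == 0:
        print(f"year {year}: debt/income = {debt/income:.2f}")
print(f"limit: d*(1+g)/g = {d * (1 + g) / g:.2f}")
```

Godley’s point was the converse: the private debt ratios observed in the late 1990s could only persist if the underlying flows – themselves unsustainable – continued indefinitely.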

Despite these predictions being generated on the basis of a fully-specified mathematical model, they are dismissed by Smith as “chartblogging” (see the quote above). If “chartblogging” refers to constructing an argument by highlighting trends in graphical representations of macroeconomic data, this seems an entirely admissible approach to macroeconomic analysis. Academics and policy-makers in the 2000s could certainly have done worse than to examine a chart of the household debt-to-income ratio. This would undoubtedly have proved more instructive than adding another mathematical trill to one of the polynomials of their beloved DSGE models—models, it must be emphasised, once again, in which money, banks and debt are, at best, an afterthought.

But the “chartblogging” slur is not even half-way accurate. The macroeconomic model used by Godley grew out of research at the Cambridge Economic Policy Group in the 1970s when Godley and his colleagues Francis Cripps and Nicholas Kaldor were advisors to the Treasury. It is essentially an old-style macroeconometric model combined with financial and monetary stock-flow accounting. The stock-flow modelling methodology has subsequently developed in a number of directions and detailed expositions are to be found in a wide range of publications including the well-known textbook by Lavoie and Godley—a book which surely contains enough equations to satisfy even Smith. Other well-known macroeconometric models include the model used by the UK Office for Budget Responsibility, the Fair model in the US, and MOSES in Scandinavia, alongside similar models in Norway and Denmark. Closer in spirit to DSGE are the NIESR model and the IMF quarterly forecasting model. On the other hand, there is the CVAR method of Johansen and Juselius and similar approaches of Pesaran et al. These are only a selection of examples—and there is an equally wide range of more theoretically oriented work.

All this demonstrates the mainstream’s total ignorance of the range and vibrancy of theoretical and empirical research and debate taking place outside the realm of microfounded general equilibrium modelling. The increasing defensiveness exhibited by neoclassical economists when faced with criticism suggests, moreover, an uncomfortable awareness that all is not well with the orthodoxy. Instead of acknowledging the existence of a formal literature outside the myopia of mainstream academia, the reaction is to try and shut down discussion with inaccurate blanket dismissals.

I conclude by noting that Smith isn’t Godley’s highest-profile detractor. A few years after he died—Godley, that is—Krugman wrote an unsympathetic review of his approach to economics, deriding him—oddly for someone as wedded to the IS-LM system as Krugman—for his “hydraulic Keynesianism”. In Krugman’s view, Godley’s method has been superseded by superior microfounded optimising-agent models:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers—it’s at the core of what we’re supposed to know—so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb. But there were also some notable predictive failures of hydraulic macro, failures that it seemed could have been avoided by thinking more in maximizing terms.

Predictive failures? Of all the accusations that could be levelled against Godley, that one takes some chutzpah.

Jo Michell

Response to Tony Yates’ critique of Teaching Economics After the Crash

Tony Yates has written a critical rejoinder to Aditya Chakrabortty’s Radio 4 documentary on student demands for changes to university teaching of economics. Yates’ contribution is welcome as a rare example of a mainstream economist publicly engaging with the issues raised by dissatisfied students. For too long, the response of the mainstream has been to ignore criticism. Yates’ willingness to enter into dialogue – even if motivated by unhappiness with the content of the programme – is encouraging. Further, it clarifies the view of (some) mainstream economists on the teaching debate.

Yates’ first complaint is that the programme is an opinion piece rather than a report in which equal space is given to each side. It is true that the bulk of the programme focused on the grievances raised by the student movement – this was after all the subject of the piece – and provided only brief slots for dissenting voices. Criticising the programme on this basis ignores the bigger picture of total dominance by mainstream economics – not only in academia but also in the media and public debate. The number of critical economists who appear regularly on television and radio can be counted on one hand. Chakrabortty’s programme and the student movement that pushed it onto the agenda are welcome, yet remain a drop in the ocean.

Yates might reflect on the following question: were a programme broadcast that portrayed economics as he believes it to be – a rigorous scientific discipline systematically discovering objective truths and discarding past mistakes – would he object to such an equally one-sided narrative? For decades, this narrative has dominated to the extent that, until recently, there was no publicly audible debate. It is to the enormous credit of student groups that they have raised the volume of critical voices such that Chakrabortty’s programme could be made.

The more substantive criticisms made by Yates relate to what he regards as manifold factual inaccuracies peddled by interviewees and allowed to go unchallenged – in particular, inaccuracies about the assumptions of mainstream economics.

There are two important problems with Yates’ argument. First, Chakrabortty’s programme was explicitly concerned with the teaching of economics, specifically at undergraduate level. Yates’ response is mainly concerned with academic and professional economics in general and, in particular, with the higher reaches of contemporary research programmes. Second, and more importantly, Yates condenses students’ calls for increased methodological pluralism into a debate between rational choice theory and its (neoclassical) alternatives. One of the first students interviewed by Chakrabortty complains about a “lack of alternative perspectives, lack of history or context, that could include politics . . . lack of critical thinking, and lack of real world application” in undergraduate degrees. Yates’ response entirely fails to address this key issue.

The “caricatures” of mainstream economics at which Yates takes offence include rational choice, rational expectations, perfect markets, quantifiable risk, and an ignorance of money, banking and finance. Yates argues that this characterisation fails to take account of recent innovations such as bounded rationality, asymmetric information, monopolistic competition, learning effects, uncertainty, sticky prices, credit frictions, and so on. Moreover, he has previously argued that a course based on these types of models could adequately replace the course on Bubbles, Panics and Crashes which Manchester University cancelled.

Putting aside, for the moment, issues of methodological pluralism and historical context, does Yates really believe that Farmer’s multiple equilibrium models, internal rationality in intertemporal optimisation, or search models of money and credit should be taught in undergraduate degrees? One of us (Jump) took an MSc on which John Hardman Moore taught. Even there, the “collected works of the Kiyotaki-Moore collaboration” didn’t make it onto the syllabus. One can hardly criticise a programme about teaching economics – and, by extension, those involved with the various student movements – for ignoring papers that most PhD students find difficult to follow.  Regardless of the validity of the approach, “crunching exotic nonlinear ordinary differential equations” is unlikely to become part of the undergraduate economics syllabus any time soon.

A squabble over the exact models taught is not, however, the real issue. It is true that, since the heyday of real business cycle models, the mainstream has pulled back from the most egregious extremes of asserting a world of continuous full employment and total policy ineffectiveness. But the subsequent modifications to general equilibrium models – sticky prices in place of instant price adjustment, internal rationality in place of rational expectations, asymmetric information in place of full information – are always framed as “frictions” and “imperfections”: deviations from some socially optimal baseline. Arguing about which specific unrealistic assumption has been dropped in this or that model misses the wood for the trees. The students want to be allowed to engage with different methodological approaches to economics – not to be told that if they study for another two years they can learn the Bernanke-Gertler financial accelerator model instead of the Woodford version with “perfect capital markets”.

The methodological approach of neoclassical economics – equilibria derived from optimisation problems couched in ever-more complicated mathematical settings – is highly restrictive, ideologically loaded, and universally imposed on undergraduates. The result of the complete elimination of any other approach from the curriculum is that students spend all their time learning how to manipulate abstract mathematical models which appear to hold little relevance for the real-world problems they are interested in addressing – as is made clear from the interviews conducted by Chakrabortty.

An important consequence of this methodological narrowing has been the almost complete eradication of economic history and the history of economic thought from the undergraduate curriculum. This point is conceded by Karl Whelan, who argues, in his response to Chakrabortty’s programme, that mixing the formal neoclassical syllabus with “broader knowledge” would produce more rounded students – a conclusion also reached by the RES steering group on teaching economics.

Yates admits that he doesn’t believe that “any of the monetary policymakers I worked for or read believed much of [the workhorse NK model].  They worked off hunches, gut instinct, practical experiences.” (This is ironic given that Gali and Gertler – key architects and advocates of the models Yates claims policy-makers weren’t using – believe the models were introduced because previous versions were so inaccurate that “monetary policymakers turned to a combination of instinct, judgment, and raw hunches to assess the implications of different policy paths for the economy”.) What are such hunches and instincts based upon?  Aside from personal experience, one imagines that historical knowledge of previous crises played a part here (e.g., Ben Bernanke). Re-introducing this type of material into economics teaching would, as Whelan argues, produce more capable graduates.  Moreover, knowledge of the way that theory has evolved alongside economic events would provide valuable context for the “exotic non-linear equations” – but it would also cultivate an awareness of the dramatic methodological narrowing within the subject.

One of us (Michell) put this point to Yates on Twitter – admittedly not the ideal medium for careful debate. His response was approximately the following: economic history and history of economic thought are irrelevant – at best, a fun diversion for bath-time reading. This is because economics continually progresses so that the history of the discipline only reveals things “either discarded or whose husks were bettered and extracted”. As an example: “I don’t need to read Keynes to understand the liquidity trap … Wallace and Woodford suffices”.

At this point, one arrives at the inevitable argument that, whilst increasing methodological pluralism in undergraduate degrees may be a good thing, “heterodox economics” is best consigned to optional modules, or discarded altogether. This misses a point of considerable importance: academic heterodoxy in economics is, more often than not, associated with methodological disagreement. This is clearest in the further reaches of Post Keynesian and Austrian economics – Shackle and Lachmann, for example – and in Marxian political economy, where historical analysis is central.

If, for example, one wanted to teach the economics of financial crises, surely the history of financial crises and inductive theory are the correct places to start? Kindleberger and Minsky are the obvious candidates – after which more formal models could be considered. This is not to say that the various heterodox approaches do not have their problems, but they are useful springboards to a deeper understanding of economic phenomena. Such empirically based study would surely be a better starting point than learning Euler equations – the standard consumption Euler equation is, after all, known to fail miserably when taken to the data – or the standard model of a representative firm’s investment decision, given the ongoing failure of econometricians to find a robust relationship between short-run capital investment and the real interest rate.
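For readers who have not encountered it, the consumption Euler equation in question can be sketched as follows – a minimal textbook version, assuming time-separable CRRA utility with relative risk aversion \(\sigma\) and discount factor \(\beta\) (the notation is ours, not drawn from any particular syllabus):

\[
c_t^{-\sigma} = \beta \, \mathbb{E}_t\!\left[ (1 + r_{t+1}) \, c_{t+1}^{-\sigma} \right]
\]

In words: the household equates the marginal utility of consuming today with the discounted, return-adjusted expected marginal utility of consuming tomorrow, tying consumption growth tightly to the real interest rate – precisely the relationship that performs so poorly in the data.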

Let us finish by returning to Yates’ Whig-historical view of the liquidity trap – a view which encapsulates much of the problem with mainstream economics. In modern neoclassical parlance, the liquidity trap refers to a situation in which nominal interest rates are equal to zero and quantitative easing is ineffective because changes in the quantity of (base) money have no effect on the (rational expectations) equilibrium future inflation path. As a result, the central bank is unable to reduce the real rate of interest and stimulate spending. All this matters because the economy fails to bring itself back to equilibrium in a timely fashion due to slow price adjustment.
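Schematically – a stripped-down sketch in standard New Keynesian notation, ours rather than any particular paper’s – the mechanism runs through the Fisher relation between the nominal rate \(i_t\), expected inflation and the ex-ante real rate \(r_t\):

\[
r_t = i_t - \mathbb{E}_t[\pi_{t+1}], \qquad i_t \geq 0 \;\Rightarrow\; r_t \geq -\mathbb{E}_t[\pi_{t+1}]
\]

Once \(i_t\) reaches zero, the only way to push the real rate lower is to raise expected inflation – something which, on the rational expectations logic just described, injections of base money alone cannot achieve.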

This is unrecognisable to any serious scholar of Keynes, for whom the liquidity trap refers to a situation in which fundamental uncertainty about the future leads people to hoard cash in preference to other financial assets, no matter how cheap those assets become. At the same time, uncertainty means firms may not commit to investment even if interest rates fall to a point that would previously have stimulated spending. The stickiness or otherwise of prices and wages is irrelevant, because changes in output and employment provide the mechanism by which saving and investment are brought into equilibrium.

This brings other contentious topics to the fore, such as uncertainty, animal spirits and the neoclassical treatment of money. Each of these, Yates argues, is used in the programme to attack mainstream economics unfairly – although he does concede that money as a veil over barter is, for the most part, a fair description of the mainstream treatment.

Recall the definition of uncertainty emphasised by Knight and Keynes: a situation in which the future simply cannot be predicted, in contrast to a ‘risky’ situation in which all possible events are known, along with the probability of each. This distinction is fairly basic, and has been textbook material in game theory since (at least) Luce and Raiffa. Now consider one example of Yates’ favoured approach to modelling uncertainty in macroeconomics: the central bank, unable to determine which of its three Phillips Curve models is correct, uses Bayesian inference to decide which model to use. This is almost beyond parody – a branding exercise which conceals the fact that the model has nothing whatsoever to do with the true meaning of the concept, as the sketch below makes explicit. Other “Keynesian” features of modern neoclassical economics highlighted by Yates are similarly grotesque caricatures of the original concepts.
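To see why this has nothing to do with Knightian uncertainty, here is a minimal sketch of the kind of Bayesian model-weighting exercise described – entirely illustrative, with invented model names, parameters and data rather than anything drawn from the literature:

import numpy as np

# Three rival Phillips curve 'models', each predicting inflation from
# unemployment with Gaussian errors. Slopes and noise are invented for
# illustration only.
SLOPES = {"model_A": -0.5, "model_B": -0.2, "model_C": 0.0}
SIGMA = 1.0

def likelihood(model, inflation, unemployment):
    """Density of the observed inflation rate under one candidate model."""
    predicted = 2.0 + SLOPES[model] * unemployment
    z = (inflation - predicted) / SIGMA
    return np.exp(-0.5 * z ** 2) / (SIGMA * np.sqrt(2.0 * np.pi))

models = list(SLOPES)
posterior = {m: 1.0 / len(models) for m in models}  # flat prior over a *known* model set

# Update posterior model probabilities by Bayes' rule as data arrive.
observations = [(2.1, 4.0), (1.8, 5.0), (2.4, 3.5)]  # (inflation, unemployment) pairs, invented
for inflation, unemployment in observations:
    weights = {m: posterior[m] * likelihood(m, inflation, unemployment) for m in models}
    total = sum(weights.values())
    posterior = {m: w / total for m, w in weights.items()}

print(posterior)  # a complete probability distribution over a closed set of models

Every ingredient – the set of candidate models, their functional forms, the error distributions, the prior – is enumerated and quantified in advance, and the output is a probability distribution over a closed list of possibilities. That is precisely Knight’s ‘risk’; genuine uncertainty, in which the relevant possibilities cannot even be listed, never enters the exercise.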

By not studying Keynes in the original – or any other important economist from more than forty years ago – students are prevented from discovering such inconsistencies and are forced to take at face value the distortions and misrepresentations of mainstream economics. They are prevented from understanding how historical circumstance plays a role in the development and acceptance of economic theory: the Great Depression for Keynes and the stagflation of the 1970s for Friedman, for example. They also – crucially – fail to appreciate that economic and political power matters: mainstream economic theory is “history as written by those perceived to have been the intellectual victors of key debates”.

Yates describes Aditya Chakrabortty’s Radio 4 documentary as “a distorting dramatisation, on account of allowing multiple silly, uninformed critiques to go unchallenged in the program. Yet presented as a reasonable, impartial take on what is going on in economics.” This is unfair to the students involved in the reform movement and misses the main point of the programme. While we would not defend every claim made in the programme, we strongly support the call for a widening of the economics curriculum.

Given the role of the profession in contributing to the 2008 crisis, and in justifying the inexcusable policy packages imposed in response to the post-crisis expansion of sovereign debt, we might – at the very least – display some humility when addressing the inevitable public backlash. Beyond this, we must act on student demands and address past failings by implementing a fundamental overhaul of the economics curriculum.


Rob Jump
Jo Michell