macroeconomics

[Chart: measures of CO2 emissions for high-income countries]

Do economists need to talk about consumption?

This post was originally published here, as part of a series titled Demanding change by changing demand, produced by environmental charity Global Action Plan. Some similar themes are explored in more technical detail in the context of lower- and middle-income countries in a recent working paper for the ILO, co-authored with Adam Aboobaker.

For much of the last thirty years or so, progressive economists have argued that macroeconomic policy is too tight. In simpler terms, this means that some combination of higher government spending, lower taxation, and lower interest rates will lead to more jobs and higher incomes.

Such arguments are sometimes presented as part of advocacy for initiatives responding to the environmental crisis, such as the Green New Deal. For the most part, however, the environmental implications of higher near-term economic activity in rich countries do not attract much attention – it is taken as given that higher economic activity, as measured by gross domestic product (GDP), is unequivocally positive.

The current situation of high inflation, driven by energy shortages, war and climate change, serves as a sharp reminder that there is something missing in analysis which sees higher growth as an entirely free lunch. Almost all economic activity depletes scarce physical resources and generates carbon emissions. Higher employment usually comes at the cost of higher emissions. Furthermore, it is possible that we are now also reaching the end of the historic period in which physical resources were usually immediately available – so that economic activity could quickly rise in response to higher overall spending. The era of the Keynesian free lunch may be ending, replaced by a regime characterised by recurring inflationary episodes.

This puts progressive economists, like myself, who believe that the economies of rich countries are predominantly demand driven – meaning that higher overall spending means more jobs and higher incomes – in an uncomfortable position.

This view relies partly on the idea of the “multiplier”. This is the claim, which is well supported by empirical evidence, that every pound of new spending in the economy will generate additional income and spending over and above the initial pound spent. The mechanism works, to a large extent, by stimulating consumption spending: if a new government investment project is initiated – to provide additional green energy, for example – the money spent on the project – on wages, transport and materials – will be received by individuals and businesses as income. Some of this additional income will be spent on consumption, generating a second round of additional new incomes.
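The arithmetic can be summarised in a simple formula (the marginal propensity to consume used below is purely illustrative). If a fraction $c$ of each pound of new income is spent on consumption, an initial injection of spending $\Delta G$ generates successive rounds of income:

\[
\Delta Y = \Delta G\,(1 + c + c^{2} + \cdots) = \frac{\Delta G}{1 - c}
\]

With $c = 0.6$, for example, £1bn of new investment spending would ultimately raise total income by around £2.5bn.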

Similarly, the argument that redistribution from those on higher incomes to those on lower incomes is good for growth relies on the fact that those on lower incomes spend a greater proportion of their incomes on consumption goods – redistribution from rich to poor thus raises total consumption expenditure and economic activity.

How are progressive economists to respond to the now inescapable fact that current resource use greatly exceeds planetary limits, and “decoupling” – the trend for energy and resource use per dollar of spending to fall as GDP rises – will not be sufficient to stay within planetary boundaries if steady GDP growth continues?

There is no single answer to this question – the appropriate response will require action on many fronts simultaneously. However, economists are beginning to consider whether we need to introduce constraints on consumption, at least for those on higher incomes in rich countries, as part of the solution. Rather than relying only on voluntary consumer choice and natural shifts in consumption patterns – such as the trend towards lower consumption of meat in some rich countries – it may be that state intervention is required to influence both overall levels of consumption and its distribution.

There are two main arguments in favour of constraining consumption. The first is straightforward: all consumption, whether of food, transport, clothing or shelter, involves carbon emissions and resource depletion. Reduced consumption growth should translate directly into lower emissions growth.

The other relates to the need to reallocate current resources, including labour, towards the investment needed to fundamentally reshape our economic systems. Lower consumption means fewer people working in industries which provide for consumption spending, and fewer raw materials devoted to the production of consumption goods. This frees up resources for green investment: people and materials can be re-deployed towards the investment projects which are urgently needed.

This raises some thorny questions: what policy tools can be used to shift the composition and scale of consumption? Which groups should face incentives – or compulsion – to reduce consumption and what form should these measures take? How will voluntary shifts in consumption interact with more direct measures to reduce consumption? And crucially, how can jobs and incomes be protected without relying on consumption as a key driver of macroeconomic dynamism?

Much of this comes down to issues of distribution. Statistics on poverty make clear that large numbers of people in rich nations are unable to consume sufficient basic necessities. Basic justice dictates that the average incomes and consumption of those in lower income countries be allowed to catch up with those of richer countries. The need for redistribution of income within countries, and income catch-up across countries, is undeniable – yet such redistribution, if it were to occur without other changes, would lead to increased overall consumption and emissions.

It is therefore hard to avoid the conclusion that taxation and regulation will be required to limit some part of the energy-intensive consumption of those on higher incomes in rich countries, particularly consumption which can be considered “luxury” consumption.

One plausible response to such suggestions is to claim that voluntary shifts in the kinds of things produced and consumed will naturally lead to reduced emissions, even while “consumption”, as measured by the national accounts, continues to grow. This kind of voluntary behavioural and consumption change – buying fewer cheap clothes, holidaying by rail rather than plane, switching to electric cars – will have a part to play in the transition to a low carbon economy, alongside reorientation from goods consumption to a more services-driven “foundational” economy. It is unlikely, however, that such changes will be sufficient.

The politics of consumption constraints are daunting. Managing competing distributional claims in the face of opposition from increasingly concentrated wealth and power is hard enough when the overall pie is growing. As we move towards a world of potential genuine scarcity, the politics of redistribution will become even more malign. This only emphasises the importance of getting the economics right.

Any successful response to the climate crisis will inevitably involve action and change at all levels – from local organising and “organic” shifts in consumption to reform of financial systems and action to tame corporate power and concentrated wealth. Constraints on the consumption of the relatively well off should be part of such a response. A debate about the economics and politics of these constraints is overdue.


Fiscal silly season

We are entering fiscal silly season. As the budget approaches, we should brace ourselves for breathless reporting of context-free statistics about inflation, interest rates and government debt.

The story is likely to go something like this. Inflation is rising. This raises costs on government debt because some of it (index-linked bonds) pays an interest rate linked to inflation. Costs associated with quantitative easing (QE) will also increase because QE is financed by central bank reserves which pay Bank Rate (the Bank of England’s policy rate of interest). Since inflation is rising the Bank will have to raise interest rates to control it. This will increase the financing costs of QE and the cost of issuing new debt for the Treasury.

The conclusion — sometimes implied, sometimes explicit — is usually some version of “the situation is unsustainable therefore the government will have to make cuts”.

While each part of the story is technically correct in isolation, the overall narrative — debt is out of control and the situation is going to get worse because of inflation — doesn’t stand up to scrutiny.

These stories are rarely presented with sufficient context. Instead, journalists tend to rely on statistical soundbites such as “public debt is the highest since … ”. Such soundbites are rarely, if ever, accompanied by any acknowledgement that the debt-to-GDP ratio is, on its own, a fairly meaningless number.

The problems associated with government debt essentially boil down to the fact that debt involves redistribution. In the case of the government this means redistribution in the form of transfers from taxpayers to bondholders. This is politically difficult. (This is also why “but currency issuer …” responses to these issues are largely beside the point — the problems of debt management are ultimately political, not technical.)

The ratio of debt to GDP tells us very little about the current political difficulties arising from debt servicing. Instead, the relevant magnitudes are total interest payments and tax revenues.

Total interest payments are equal to the debt stock multiplied by the effective interest rate on government debt. Focusing on the debt stock in isolation is thus equivalent to representing the area of a rectangle by the length of one side.

A better indicator of the risks associated with public debt is the ratio of government interest payments to tax revenues, as plotted in the figure below.

[Chart: government interest payments as a share of tax revenues (source: macroflow)]

Interest payments on government debt have indeed risen recently. A spike in June triggered media articles about the highest interest payments on record. Put in context, however, such statistics are close to meaningless. Interest payments have risen to around 6% of tax revenues over a four-quarter period, compared with all-time lows of about 5.3%. (Calculated on a rolling 12-month basis, this rises to around 6.5%.) It is hard to see signs that the sky is falling.

In fact, this indicator overstates current interest costs. This is because much of the interest paid by the Treasury is paid to the Bank of England which holds a substantial chunk (currently around 37%) of UK government debt as a result of QE (see chart below). Most of this interest is returned directly to the Treasury. Since the start of QE, this has saved the Treasury over £100bn in interest costs.

[Chart: Bank of England holdings of UK government debt (source: macroflow)]

Adjusting for this reduction in interest payments produces the figure below: net interest payments sum to around 4.7% of tax revenues over the last four quarters (or 5.2% on a rolling 12-month basis).

[Chart: net interest payments as a share of tax revenues (source: macroflow)]
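To see how these ratios fit together, here is a rough sketch of the calculation; every number in it is an invented placeholder rather than one of the actual figures behind the charts above.

```python
# Illustrative calculation of gross and net government interest burdens.
# All inputs are hypothetical round numbers, not actual UK statistics.

debt_stock = 2_200        # gross government debt, £bn
effective_rate = 0.02     # average effective interest rate on that debt
tax_revenue = 750         # annual tax receipts, £bn

boe_share = 0.37          # share of the debt held by the Bank of England under QE
bank_rate = 0.001         # Bank Rate paid on the reserves that finance QE

gross_interest = debt_stock * effective_rate
# Interest on gilts held by the Bank is remitted back to the Treasury,
# net of the Bank Rate paid on the reserves created to buy them.
remitted = debt_stock * boe_share * (effective_rate - bank_rate)
net_interest = gross_interest - remitted

print(f"gross interest / tax revenue: {gross_interest / tax_revenue:.1%}")
print(f"net interest / tax revenue:   {net_interest / tax_revenue:.1%}")
```

The point of the exercise is that the burden depends on the product of the debt stock and the effective interest rate, scaled by tax revenues, not on the debt stock alone.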

What of the dangers ahead? It is true that if inflation rises, then interest costs will rise, all else equal. But the scale of these rises is not predetermined, and will be affected by policy.

First, persistent inflation is far from a certainty. Even if inflation does persist in the short term, the Bank does not need to raise interest rates. Hikes in response to price pressures caused by pandemic reopening and supply-side bottlenecks will do more harm than good — instead the Bank should wait until the economic recovery is clearly underway. In that context, interest rate increases would likely be a good sign, and would be offset by rising tax revenues. Further, the Bank could introduce a “tiered reserve” system which would serve to hold down the rate paid on a substantial proportion of outstanding debt. Short-term and index-linked debt can be rolled over at longer maturities, delaying the point at which higher rates would feed into higher interest payments.

In summary, simple claims such as “a one percentage point rise in interest rates and inflation could cost the Treasury about £25bn a year” are not useful without context and explanation of the long list of assumptions required to produce such a figure. The policy conclusions derived from such claims should be taken with a large pinch of salt.

Season’s Greetings and enjoy the festive period!

What is the Loanable Funds theory?

I had another stimulating discussion with Noah Smith last week. This time the topic was the ‘loanable funds’ theory of the rate of interest. The discussion was triggered by my suggestion that the ‘safe asset shortage’ and associated ‘reach for yield’ are in part caused by rising wealth concentration. The logic is straightforward: since the rich spend less of their income than the poor, wealth concentration tends to increase the rate of saving out of income. This means an increase in desired savings chasing the available stock of financial assets, pushing up the price and lowering the yield.

Noah viewed this as a plausible hypothesis but suggested it relies on the loanable funds model. My view was the opposite – I think this mechanism is incompatible with the loanable funds theory. Such disagreements are often enlightening – either one of us misunderstood the mechanisms under discussion, or we were using different definitions. My instinct was that it was the latter: we meant something different by ‘loanable funds theory’ (LFT hereafter).

To try and clear this up, Noah suggested Mankiw’s textbook as a starting point – and found a set of slides which set out the LFT clearly. The model described was exactly the one I had in mind – but despite agreeing that Mankiw’s exposition of the LFT was accurate it was clear we still didn’t agree about the original point of discussion.

The reason seems to be that Noah understands the LFT to describe any market for loans: there are some people willing to lend and some who wish to borrow. As the rate of interest rises, the volume of available lending increases but the volume of desired borrowing falls. In equilibrium, the rate of interest will settle at r* – the market-clearing rate.

What’s wrong with this? – It certainly sounds like a market for ‘loanable funds’. The problem is that LFT is not a theory of loan market clearing per se. It’s a theory of macroeconomic equilibrium. It’s not a model of any old loan market: it’s a model of one very specific market – the market which intermediates total (net) saving with total capital investment in a closed economic system.

OK, but saving equals investment by definition in macroeconomic terms: the famous S = I identity. How can there be a market which operates to ensure equality between two identically equal magnitudes?

The issue – as Keynes explained in the General Theory – is that in a modern capitalist economy, the person who saves and the person who undertakes fixed capital investment are not usually the same. Some mechanism needs to be in place to ensure that a decision to ‘not consume’ somewhere in the system – to save – is always matched by a decision to invest – to build a new machine, road or building – somewhere else in the economy.

To see the issue more clearly consider the ‘corn economy’ used in many standard macro models: one good – corn – is produced. This good can either be consumed or invested (by planting in the ground or storing corn for later consumption). The decision to plant or store corn is simultaneously both a decision to ‘not consume’ and to ‘invest’ (the rate of return on investment will depend on the mix of stored to planted corn). In this simple economy S = I because it can’t be any other way. A market for loanable funds is not required.

But this isn’t how modern capitalism works. Decisions to ‘not consume’ and decisions to invest are distributed throughout the economic system. How can we be sure that these decisions will lead to identical intended saving and investment – what ensures that S and I are equal? The loanable funds theory provides one possible answer to this question.

The theory states that decisions to save (i.e. to not consume) are decisive – investment adjusts automatically to accommodate any change in consumption behaviour. To see how this works, we need to recall how the model is derived. The diagram below shows the basic system (I’ve borrowed the figure from Nick Rowe).

[Figure: the loanable funds diagram – desired saving and investment schedules]

The upward sloping ‘desired saving’ curve is derived on the assumption that people are ‘impatient’ – they prefer current consumption to future consumption. In order to induce people to save,  a return needs to be paid on their savings. As the return paid on savings increases, consumers are collectively willing to forgo a greater volume of current consumption in return for a future payoff.

The downward sloping investment curve is derived on standard neoclassical marginalist principles. ‘Factors of production’ (i.e. labour and capital) receive ‘what they are worth’ in competitive markets. The real wage is equal to the marginal productivity of labour and the return on ‘capital’ is likewise equal to the marginal productivity of capital. As the ‘quantity’ of capital increases, the marginal product – and thus the rate of return – falls.

So the S and I curves depict how much saving and investment would take place at each possible rate of interest. As long as the S and I curves are well-defined and ‘monotonic’ (a strong assumption), there is only one rate of interest at which the amount people wish to lend is equal to the amount (other) people would like to borrow. This is r*, the point of intersection between the curves. This rate of interest is often referred to as the Wicksellian ‘natural rate’.

Now, consider what happens if the collective impatience of society decreases. At any rate of interest, consumption as a share of income will be lower and desired saving correspondingly higher – the S curve moves to the right. As the S curve shifts to the right – assuming no change in the technology determining the slope and position of the I curve – a greater share of national income is ‘not consumed’. But by pushing down the rate of interest in the loanable funds market, reduced consumption – somewhat miraculously – leads to an automatic increase in investment. An outward shift in the S curve is accompanied by a shift along the I curve.
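A minimal numerical sketch of this mechanism, using made-up linear saving and investment schedules, is below; shifting the S curve outwards lowers the equilibrium rate and raises investment.

```python
# Toy loanable funds market with made-up linear schedules.
# Desired saving rises with the interest rate; desired investment falls.

def desired_saving(r, shift=0.0):
    return 10 + 200 * r + shift      # upward-sloping S curve

def desired_investment(r):
    return 50 - 300 * r              # downward-sloping I curve

def natural_rate(shift=0.0):
    # Solve S(r) = I(r) for r (both schedules are linear).
    return (50 - 10 - shift) / 500

for shift in (0.0, 10.0):            # shift = 10 mimics a fall in 'impatience'
    r_star = natural_rate(shift)
    # check that the schedules really do cross at r_star
    assert abs(desired_saving(r_star, shift) - desired_investment(r_star)) < 1e-9
    print(f"S-curve shift {shift:>4}: r* = {r_star:.2f}, "
          f"S = I = {desired_investment(r_star):.0f}")
```

With these numbers, the outward shift in desired saving pushes the 'natural rate' from 8% to 6% and investment rises to absorb the extra saving, exactly as the theory requires.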

Consider what this means for macroeconomic aggregates. Assuming a closed system, income is, by definition, equal to consumption plus investment: Y = C + I. The LFT says that in freely adjusting markets, reductions in C due to shifts in preferences are automatically offset by increases in I. Y will remain at the ‘full employment’ level of output at all times.

The LFT therefore underpins ‘Say’s Law’ – summarised by Keynes as ‘supply creates its own demand’. It was thus a key target for Keynes’ attack on the ‘Law’ in his General Theory. Keynes argued against the notion that saving decisions are strongly influenced by the rate of interest. Instead, he argued consumption is mostly determined by income. If individuals consume a fixed proportion of their income, the S curve in the diagram is no longer well defined – at any given level of output, S is vertical, but the position of the curve shifts with output. This is quite different to the LFT which regards the position of the two curves as determined by the ‘deep’ structural parameters of the system – technology and preferences.

How then is the rate of interest determined in Keynes’ theory? – the answer is ‘liquidity preference’. Rather than desired saving determining the rate of interest, what matters is the composition of financial assets people use to hold their savings. Keynes simplifies the story by assuming only two assets: ‘money’ which pays no interest and ‘bonds’ which do pay interest. It is the interaction of supply and demand in the bond market – not the ‘loanable funds’ market – which determines the rate of interest.

There are two key points here: the first is that saving is a residual – it is determined by output and investment. As such, there is no mechanism to ensure that desired saving and desired investment will be equalised. This means that output, not the rate of interest, will adjust to ensure that saving is equal to investment. There is no mechanism which ensures that output is maintained at full employment levels. The second is that interest rates can move without any change in either desired saving or desired investment. If there is an increase in ‘liquidity preference’ – a desire to hold lower-yielding but safer assets – this will cause an increase in the rate of interest on riskier assets.

How can the original question be framed using these two models? – What is the implication of increasing wealth concentration on yields and macro variables?

I think Noah is right that one can think of the mechanism in a loanable funds world. If redistribution towards the rich increases the average propensity to save, this will shift the S curve to the right – as in the example above – reducing the ‘natural’ rate of interest. This is the standard ‘secular stagnation’ story – a ‘global savings glut’ has pushed the natural rate below zero. However, in a loanable funds world this should – all else being equal – lead to an increase in investment. This doesn’t seem to fit the stylised facts: capital investment has been falling as a share of GDP in most advanced nations. (Critics will point out that I’m skirting the issue of the zero lower bound – I’ll have to save that for another time).

My non-LFT interpretation is the following. Firstly, I’d go further than Keynes and argue that the rate of interest is not only relatively unimportant for determining S – it also has little effect on I. There is evidence to suggest that firms’ investment decisions are fairly interest-inelastic. This means that both curves in the diagram above have a steep slope – and they shift as output changes. There is no ‘natural rate’ of interest which brings the macroeconomic system into equilibrium.

In terms of the S = I identity, this means that investment decisions are more important for the determination of macro variables than saving decisions. If total desired saving as a share of income increases – due to wealth concentration, for example – this will have little effect on investment. The volume of realised saving, however, is determined by (and identically equal to) the volume of capital investment. An increase in desired saving manifests itself not as a rise in investment – but as a fall in consumption and output.
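The contrast with the loanable funds story can be sketched with a toy income-expenditure calculation (the numbers are invented): if investment is fixed and interest-inelastic, a higher desired saving rate leaves realised saving unchanged and lowers output instead.

```python
# Simple Keynesian income-expenditure sketch: realised saving is pinned
# down by (exogenous, interest-inelastic) investment; output adjusts.

investment = 20                      # fixed capital investment

def equilibrium(saving_rate):
    # C = (1 - s) * Y and Y = C + I  imply  Y = I / s
    output = investment / saving_rate
    realised_saving = output - (1 - saving_rate) * output
    return output, realised_saving

for s in (0.10, 0.15):               # desired saving rate rises
    y, s_real = equilibrium(s)
    print(f"saving rate {s:.0%}: output = {y:.0f}, realised saving = {s_real:.0f}")
```

Realised saving equals investment in both cases; what changes is output, which falls as the desired saving rate rises.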

In such a scenario – in which a higher share of nominal income is saved – the result will be weak demand for goods but strong demand for financial assets – leading to deflation in the goods market and inflation in the market for financial assets. Strong demand for financial assets will reduce rates of return – but only on financial assets: if investment is inelastic to interest rates, there is no reason to believe there will be any shift in investment or in the return on fixed capital investment.

In order to explain the relative rates of return on equity and bonds, a re-working of Keynes’ liquidity preference theory is required. Instead of a choice between ‘money’ and ‘bonds’, the choice faced by investors can be characterised as a choice between risky equity and less-risky bonds. Liquidity preference will then make itself felt as an increase in the price of bonds relative to equity – and a corresponding movement in the yields on each asset. On the other hand, an increase in total nominal saving will increase the price of all financial assets and thus reduce yields across the board. Given that portfolio managers are likely to have minimum target rates of return, this will induce a shift into higher-risk assets.

Consistent modelling and inconsistent terminology


Simon Wren-Lewis has a couple of recent posts up on heterodox macro, and stock-flow consistent modelling in particular. His posts are constructive and engaging. I want to respond to some of the points raised.

Simon discusses the modelling approach originating with Wynne Godley, Francis Cripps and others at the Cambridge Economic Policy Group in the 1970s. More recently this approach is associated with the work of Marc Lavoie who co-wrote the key textbook on the topic with Godley.

The term ‘stock-flow consistent’ was coined by Claudio Dos Santos in his PhD thesis, ‘Three essays in stock flow consistent modelling’ and has been a source of misunderstanding ever since. Simon writes, ‘it is inferred that mainstream models fail to impose stock flow consistency.’ As I tried to emphasise  in the blog which Simon links to, this is not the intention: ‘any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.’ (There is an important caveat here:  this consistency won’t be maintained after log-linearisation – a standard step in DSGE solution – and the further a linearised model gets from the steady state, the worse this inconsistency will become.)[1]

Marc Lavoie has emphasised that he regrets adopting the name, precisely because of the implication that consistency is not maintained in other modelling traditions. Instead, the term refers to a subset of models characterised by a number of specific features. These include the following: aggregate behavioural macro relationships informed by both empirical evidence and post-Keynesian theory; detailed, institutionally-specific modelling of the monetary and financial sector; and explicit feedback effects from financial balance sheets to economic behaviour and the stability of the macro system both in the short run and the long run.

A distinctive feature of these models is their rejection of the loanable funds theory of banking and money – a position endorsed in a recent Bank of England Quarterly Bulletin and Working Paper. Partially as a result of this view of the importance of money and money-values in the decision-making process, these models are usually specified in nominal magnitudes. As a result, they map more directly onto the national accounts than real-sector models which require complex transformations of data series using price deflators.

Since the behavioural features of these models are informed by a well-developed theoretical tradition, Simon’s assertion that SFC modelling is ‘accounting, not economics’ is inaccurate. Accounting is one important element in a broader methodological approach. Imposing detailed financial accounting alongside behavioural assumptions about how financial stocks and flows evolve imposes constraints across the entire system. Rather like trying to squeeze the air out of one part of a balloon, only to find another part inflating, chasing assets and liabilities around a closed system of linked balance sheets can be an informative exercise – because where leverage eventually turns up is not always clear at the outset. Likewise, SFC models may include detailed modelling of inventories, pricing and profits, or of changes in net worth due to asset price revaluation and price inflation. For such processes, even the accounting is non-trivial. Taking accounting seriously allows modellers to incorporate institutional complexity – something of increasing importance in today’s world.

The inclusion of detailed financial modelling allows the models to capture Godley’s view that agents aim to achieve certain stock-flow norms. These may include household debt-to-income ratios, inventories-to-sales ratios for firms and leverage ratios for banks. Many of the functional forms used implicitly capture these stock-flow ratios. This is the case for the simple consumption function used in the BoE paper discussed by Simon, as shown here. Of course, other functional specifications are possible, as in this model, for example, which includes a direct interest rate effect on consumption.
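To make the stock-flow norm idea concrete, here is a minimal sketch in the spirit of the simplest model in Godley and Lavoie's textbook (the 'SIM' model), with purely illustrative parameter values: households consume out of disposable income and accumulated wealth, and the wealth-to-income ratio converges to the norm implied by the consumption function.

```python
# Minimal stock-flow consistent sketch, in the spirit of Godley & Lavoie's
# SIM model. Parameters are illustrative only.

alpha1, alpha2 = 0.6, 0.4   # propensities to consume out of income and wealth
theta = 0.2                 # tax rate
G = 20                      # government spending

H = 0.0                     # household wealth (equal to government liabilities)
for t in range(60):
    # Within each period: Y = C + G, YD = (1 - theta) * Y,
    # C = alpha1 * YD + alpha2 * H_lagged
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    YD = (1 - theta) * Y
    C = alpha1 * YD + alpha2 * H
    H = H + YD - C          # household saving accumulates as wealth

print(f"steady-state output ~ {Y:.1f}  (analytically G/theta = {G/theta:.1f})")
print(f"wealth/income ratio ~ {H/YD:.2f}  (norm (1-alpha1)/alpha2 = {(1 - alpha1)/alpha2:.2f})")
```

Every flow in the model accumulates into a stock held by some sector, and the system settles down only when households have reached their implicit wealth target and the government budget is balanced.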

Simon notes that adding basic financial accounting to standard models is trivial but ‘in most mainstream models these balances are of no consequence’. This is an important point, and should set alarm bells ringing. Simon identifies one reason for the neutrality of finance in standard models: ‘the simplicity of the dominant mainstream model of intertemporal consumption’.

There are deeper reasons why the financial sector has little role in standard macro. In the majority of standard DSGE macro models, the system automatically tends towards some long-run, supply-side-determined full-employment equilibrium – in other words, the models incorporate Milton Friedman’s long-run vertical Phillips Curve. Further, in most DSGE models, income distribution has no long-run effect on macroeconomic outcomes.

Post-Keynesian economics, which provides much of the underlying theoretical structure of SFC models, takes issue with these assumptions. Instead, it is argued, Keynes was correct in his assertion that demand deficiency can lead economies to become stuck in equilibria characterised by under-employment or stagnation.

Now, if the economic system is always in the process of returning to the flexible-price full-employment equilibrium, then financial stocks will be, at most, of transitory significance. They may serve to amplify macroeconomic fluctuations, as in the Bernanke-Gertler-Gilchrist models, but they will have no long-run effects. This is the reason that DSGE models which do attempt to incorporate financial leverage also require additional ‘ad-hoc’ adjustments to the deeper model assumptions – for example this model by Kumhof and Ranciere imposes an assumption of non-negative subsistence consumption for households. As a result, when income falls, households are unable to reduce consumption but instead run up debt. For similar reasons, if one tries to abandon the loanable funds theory in DSGE models – one of the key reasons for the insistence on accounting in SFC models – this likewise raises non-trivial issues, as shown in this paper by Benes and Kumhof  (to my knowledge the only attempt so far to produce such a model).

Non-PK-SFC models, such as the UK’s OBR model, can therefore incorporate modelling of sectoral balances and leverage ratios – but these stocks have little effect on the real outcomes of the model.

On the contrary, if long-run disequilibrium is considered a plausible outcome, financial stocks may persist and feedbacks from these stocks to the real economy will have non-trivial effects. In such a situation, attempts by individuals or sectors to achieve some stock-flow ratio can alter the long-run behaviour of the system. If a balance-sheet recession persists, it will have persistent effects on the real economy – such hysteresis effects are increasingly acknowledged in the profession.

This relates to an earlier point made in Simon’s post: ‘the fact that leverage was allowed to increase substantially before the crisis was not something that most macroeconomists were even aware of … it just wasn’t their field’. I’m surprised this is presented as evidence for the defence of mainstream macro.

The central point made by economists like Minsky and Godley was that financial dynamics should be part of our field. The fact that by 2007 it wasn’t illustrates how badly mainstream macroeconomics went wrong. Between Real Business Cycle models, Rational Expectations, the Efficient Markets Hypothesis and CAPM, economists convinced themselves – and, more importantly, policy-makers – that the financial system was none of their business. The fact that economists forgot to look at leverage ratios wasn’t an absent-minded oversight. As Olivier Blanchard argues:

 ‘… mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.’

This is partially acknowledged by Simon when he argues that the ‘microfoundations revolution’ lies behind economists’ myopia on the financial system. Where I, of course, agree with Simon is that ‘had the microfoundations revolution been more tolerant of other methodologies … macroeconomics may well have done more to integrate the financial sector into their models before the crisis’. Putting aside the point that, for the most part, the microfoundations revolution didn’t actually lead to microfounded models, ‘integrating the financial sector’ into models is exactly what people like Godley, Lavoie and others were doing.

Simon also makes an important point in highlighting the lack of acknowledgement of antecedents by PK-SFC authors and, as a result, a lack of continuity between PK-SFC models and the earlier structural econometric models (SEMs) which were eventually killed off by the shift to microfounded models. There is a rich seam of work here – heterodox economists should both acknowledge this and draw on it in their own work. In many respects, I see the PK-SFC approach as a continuation of the SEM tradition – I was therefore pleased to read this paper in which Simon argues for a return to the use of SEMs alongside DSGE and VAR techniques.

To my mind, this is what is attempted in the Bank of England paper criticised by Simon – the authors develop a non-DSGE, econometrically estimated, structural model of the UK economy in which the financial system is taken seriously. Simon is right, however, that the theoretical justifications for the behavioural specifications and the connections to previous literature could have been spelled out more clearly.

The new Bank of England model is one of a relatively small group of empirically-oriented SFC models. Others include the Levy Institute model of the US, originally developed by Wynne Godley and now maintained by Gennaro Zezza, the UNCTAD Global Policy model, developed in collaboration with Godley’s old colleague Francis Cripps, and the Gudgin and Coutts model of the UK economy (the last of these is not yet fully stock-flow consistent but shares much of its theoretical structure with the other models).

One important area for improvement in these models lies with their econometric specification. The models tend to have large numbers of parameters, making them difficult to estimate other than through individual OLS regressions of behavioural relationships. PK-SFC authors can certainly learn from the older SEM tradition in this area.

I find another point of agreement in Simon’s statement that ‘heterodox economists need to stop being heterodox’. I wouldn’t state this so strongly – I think heterodox economists need to become less heterodox. They should identify and more explicitly acknowledge those areas in which there is common ground with mainstream economics.  In those areas where disagreement persists, they should try to explain more clearly why this is the case. Hopefully this will lead to more fruitful engagement in the future, rather than the negativity which has characterised some recent exchanges.

[1] Simon goes on to argue that stock-flow consistency is not ‘unique to Godley. When I was a young economist at the Treasury in the 1970s, their UK model was ‘stock-flow consistent’, and forecasts routinely looked at sector balances.’  During the 1970s, there was sustained debate between the Treasury and Godley’s Cambridge team, who were, aside from Milton Friedman’s monetarism, the most prominent critics of the Keynesian conventional wisdom of the time – there is an excellent history here. I don’t know the details but I wonder if the awareness of sectoral balances at the Treasury was partly due to Godley’s influence?

The Fable of the Ants, or Why the Representative Agent is No Such Thing


Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making process of one type of agent as a whole (for example consumers or firms) is reduced to that of a single ‘representative’ individual – and is taken to be identical to that assumed to characterise the behaviour of actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way that an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces some emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base. This alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
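For readers who want to see this mechanism in miniature, the sketch below is a crude version of the classic 'double bridge' experiment, with invented parameters: each ant follows one simple probabilistic rule, yet the colony's pheromone ends up concentrated on the shorter path.

```python
import random

# Toy 'double bridge' sketch: two paths to a food source. Each ant picks a
# path with probability proportional to its pheromone level; returning ants
# deposit pheromone in inverse proportion to path length; pheromone
# evaporates. Parameters are invented for illustration.

lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}   # start unbiased
evaporation = 0.02

random.seed(1)
for trip in range(2000):
    total = sum(pheromone.values())
    path = "short" if random.random() < pheromone["short"] / total else "long"
    for p in pheromone:
        pheromone[p] *= (1 - evaporation)     # evaporation
    pheromone[path] += 1.0 / lengths[path]    # shorter trips reinforce faster

share_short = pheromone["short"] / sum(pheromone.values())
print(f"share of pheromone on the short path: {share_short:.0%}")
```

No individual ant knows anything about path lengths; the shortest-path behaviour exists only at the level of the colony.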

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have known for a long time that the opposite is the case. Formal proofs demonstrate that a population of agents, each represented using a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’ critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models would not be invariant to changes in government policy and therefore should not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimisation.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals consume normal and inferior goods in fixed proportions, regardless of their income – a supermarket checkout worker will continue to consume baked beans and Swiss watches in unchanged proportions after she wins the lottery.

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
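In the standard formulation this corresponds to a utility function of the constant relative risk aversion (CRRA) form:

\[
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad -\frac{c\,u''(c)}{u'(c)} = \gamma,
\]

so the coefficient of relative risk aversion is the same constant $\gamma$ at every level of consumption, however rich or poor the household.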

Further, and unfortunately for Lucas, income distribution turns out to matter: if all individuals do not behave identically, then as income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but income distribution is not allowed to change.
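A minimal sketch of why distribution matters (the individual consumption rule below is invented purely for illustration): give two households the same non-linear rule, hold total income fixed, and aggregate consumption still changes when income is redistributed, so no single 'representative' rule can stand in for the pair.

```python
# Two households share the same non-linear (concave) consumption rule.
# Total income is held fixed at 100; only its distribution changes.

def consumption(income):
    return 10 + 0.9 * income ** 0.8   # concave, i.e. non-linear 'Engel curve'

for incomes in ((50, 50), (10, 90)):  # equal versus unequal split
    aggregate = sum(consumption(y) for y in incomes)
    print(f"income split {incomes}: aggregate consumption = {aggregate:.1f}")
```

Only if every household's rule were linear (and identical) would the aggregate be independent of the distribution, which is exactly the restriction described above.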

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts. The composition of consumption shifts as income increases. It is hard to believe these restrictive special cases provide a sufficient basis on which to construct macro models which can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but models which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that old-fashioned Keynesian structural macro behavioural assumptions can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. Like the ant-colony, it can be shown that under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.

Economics: science or politics? A reply to Kay and Romer

Romer’s article on ‘mathiness’ triggered a debate in the economics blogs last year. I didn’t pay a great deal of attention at the time; that economists were using relatively trivial yet abstruse mathematics to disguise their political leanings didn’t seem a particularly penetrating insight.

Later in the year, I read a comment piece by John Kay on the same subject in the Financial Times. Kay’s article, published under the headline ‘Economists should keep to the facts, not feelings’, was sufficiently cavalier with the facts that I felt compelled to respond. I was not the only one – Geoff Harcourt wrote a letter supporting my defence of Joan Robinson and correcting Kay’s inaccurate description of her as a Marxist.

After writing the letter, I found myself wondering why a serious writer like Kay would publish such carelessly inaccurate statements. Following a suggestion from Matheus Grasselli, I turned to Romer’s original paper:

Economists usually stick to science. Robert Solow was engaged in science when he developed his mathematical theory of growth. But they can get drawn into academic politics. Joan Robinson was engaged in academic politics when she waged her campaign against capital and the aggregate production function …

Solow’s mathematical theory of growth mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly. The tight connection between the word and the equations gave the word a precise meaning that facilitated equally tight connections between theoretical and empirical claims. Gary Becker’s mathematical theory of wages gave the words ‘human capital’ the same precision …

Once again, the facts appear to have fallen by the wayside. The issue at the heart of the debates involving Joan Robinson, Robert Solow and others is whether it is valid to  represent a complex macroeconomic system (such as a country) with a single ‘aggregate’ production function. Solow had been working on the assumption that the macroeconomic system could be represented by the same microeconomic mathematical function used to model individual firms. In particular, Solow and his neoclassical colleagues assumed that a key property of the microeconomic version – that labour will be smoothly substituted for capital as the rate of interest rises – would also hold at the aggregate level. It would then be reasonable to produce simple macroeconomic models by assuming a single production function for the whole economy, as Solow did in his famous growth model.

Joan Robinson and her UK Cambridge colleagues showed this was not true. They demonstrated cases (capital reversing and reswitching) which contradicted the neoclassical conclusions about the relationship between the choice of technique and the rate of interest. One may accept the assumption that individual firms can be represented as neoclassical production functions, but concluding that the economy can then also be represented by such a function is a logical error.

One important reason is that the capital goods which enter production functions as inputs are not identical, but instead have specific properties. These differences make it all but impossible to find a way to measure the ‘size’ of any collection of capital goods. Further, in Solow’s model, the distinction between capital goods and consumption goods is entirely dissolved – the production function simply generates ‘output’ which may either be consumed or accumulated. What Robinson demonstrated was that it was impossible to accurately measure capital independently of prices and income distribution. But since, in an aggregate production function, income distribution is determined by marginal productivity – which in turn depends on quantities – it is impossible to avoid arguing in a circle. Romer’s assertion of a ‘tight connection between the word and the equations’ is a straightforward misrepresentation of the facts.

The assertion of ‘equally tight connections between theoretical and empirical claims’ is likewise misplaced. As Anwar Shaikh showed in 1974, it is straightforward to demonstrate that Solow’s ‘evidence’ for the aggregate production function is no such thing. In fact, what Solow and others were testing turned out to be national accounting identities. Shaikh demonstrated that, as long as labour and capital shares are roughly constant – the ‘Kaldor facts’ – any structure of production will produce empirical results consistent with an aggregate Cobb-Douglas production function. The aggregate production function is therefore ‘not even wrong: it is not a behavioral relationship capable of being statistically refuted’.
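The algebra behind Shaikh's point can be sketched as follows. Start from the income accounting identity

\[
Y \equiv wL + rK ,
\]

differentiate with respect to time and divide through by $Y$, holding the labour share $wL/Y = 1-\alpha$ and the capital share $rK/Y = \alpha$ constant:

\[
\frac{\dot Y}{Y} = (1-\alpha)\left(\frac{\dot w}{w} + \frac{\dot L}{L}\right) + \alpha\left(\frac{\dot r}{r} + \frac{\dot K}{K}\right).
\]

Integrating gives

\[
Y = B(t)\,L^{1-\alpha}K^{\alpha}, \qquad \text{where } \frac{\dot B}{B} = (1-\alpha)\frac{\dot w}{w} + \alpha\frac{\dot r}{r},
\]

which looks exactly like a Cobb-Douglas production function with a 'technical progress' term, even though nothing has been assumed about production technology: it follows from the accounting identity and constant shares alone.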

As I noted in my letter to the FT, Robinson’s neoclassical opponents conceded the argument on capital reversing and reswitching: Kay’s assertion that Solow ‘won easily’ is inaccurate. In purely logical terms Robinson was the victor, as Samuelson acknowledged when he wrote, ‘If all this causes headaches for those nostalgic for the parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life.’

What matters, as Geoff Harcourt correctly points out, is that the conceptual implications of the debates remain unresolved. Neoclassical authors, such as Cohen and Harcourt’s co-editor, Christopher Bliss, argue that the logical results,  while correct in themselves, do not undermine marginalist theory to the extent claimed by (some) critics. In particular, he argues, the focus on capital aggregation is mistaken. One may instead, for example, drop Solow’s assumption that capital goods and consumer goods are interchangeable: ‘Allowing capital to be different from other output, particularly consumption, alters conclusions radically.’ (p. xviii). Developing models on the basis of disaggregated optimising agents will likewise produce very different, and less deterministic, results.

But Bliss also notes that this wasn’t the direction that macroeconomics chose. Instead, ‘Interest has shifted from general equilibrium style (high-dimension) models to simple, mainly one-good models … the representative agent is now usually the model’s driver.’ Solow himself characterised this trend as ‘dumb and dumber in macroeconomics’. As the great David Laidler – like Robinson, no Marxist –  observes, the now unquestioned use of representative agents and aggregate production functions means that ‘largely undiscussed problems of capital theory still plague much modern macroeconomics’.

It should by now be clear that the claim of ‘mathiness’ is a bizarre one to level against Joan Robinson: she won a theoretical debate at the level of pure logic, even if the broader implications remain controversial. Why then does Paul Romer single her out as the villain of the piece? – ‘Where would we be now if Solow’s math had been swamped by Joan Robinson’s mathiness?’

One can only speculate, but it may not be coincidence that Romer has spent his career constructing models based on aggregate production functions – the so called ‘neoclassical endogenous growth models’ that Ed Balls once claimed to be so enamoured with. Romer has repeatedly been tipped for the Nobel Prize, despite the fact that his work doesn’t appear to explain very much about the real world. In Krugman’s words ‘too much of it involved making assumptions about how unmeasurable things affected other unmeasurable things.’ So much for those tight connections between theoretical and empirical claims.

So where does this leave macroeconomics? Bliss is correct that the results of the Controversy do not undermine the standard toolkit of methodological individualism: marginalism, optimisation and equilibrium. Robinson and her colleagues demonstrated that one specific tool in the box – the aggregate production function – suffers from deep internal logical flaws. But the Controversy is only one example of the tensions generated when one insists on modelling social structures as the outcome of adversarial interactions between  individuals. Other examples include the Sonnenschein-Mantel-Debreu results and Arrow’s Impossibility Theorem.

As Ben Fine has pointed out, there are well-established results from the philosophy of mathematics and science that suggest deep problems for those who insist on methodological individualism as the only way to understand social structures. Trying to conceptualise a phenomenon such as money on the basis of aggregation over self-interested individuals is a dead end. But economists are not interested in philosophy or methodology. They no longer even enter into debates on the subject – instead, the laziest dismissals suffice.

But where does methodological individualism stop? What about language, for example? Can this be explained as a way for self-interested individuals to overcome transaction costs? The result of this myopia, Fine argues, is that economists ‘work with notions of mathematics and science that have been rejected by mathematicians and scientists themselves for a hundred years and more.’

This brings us back to ‘mathiness’. DeLong characterises this as ‘restricting your microfoundations in advance to guarantee a particular political result and hiding what you are doing in a blizzard of irrelevant and ungrounded algebra.’ What is very rarely discussed, however, is the insistence that microfounded models are the only acceptable form of economic theory. But the New Classical revolution in economics, which ushered in the era of microfounded macroeconomics was itself a political project. As its leading light, Nobel-prize winner Robert Lucas, put it, ‘If these developments succeed, the term “macroeconomic” will simply disappear from use and the modifier “micro” will become superfluous.’ The statement is not greatly different in intent and meaning from Thatcher’s famous claim that ‘there is no such thing as society’. Lucas never tried particularly hard to hide his political leanings: in 2004 he declared, ‘Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution.’ (He also declared, five years before the crisis of 2008, that the ‘central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.’)

As a result of Lucas’ revolution, the academic economics profession purged those who dared to argue that some economic phenomena cannot be explained by competition between selfish individuals. Abstract microfounded theory replaced empirically-based macroeconomic models, despite generating results which are of little relevance for real-world policy-making. As Simon Wren-Lewis puts it, ‘students are taught that [non-microfounded] methods of analysing the economy are fatally flawed, and that simulating DSGE models is the only proper way of doing policy analysis. This is simply wrong.’

I leave the reader to decide where the line between science and politics should be drawn.

Corbyn and the People’s Bank of England

Jeremy Corbyn’s proposal for ‘People’s Quantitative Easing’ – public investment paid for using money printed by the Bank of England – has provoked criticism, including an intervention by Labour’s shadow Chancellor Chris Leslie. It seems the anti-Corbyn wing of the Labour party has finally decided to engage with Corbyn’s policy agenda after several weeks of simply dismissing him out of hand.

Critics of the plan make two main points: that the policy will be inflationary and that it dissolves the boundary between fiscal policy and monetary policy. It would therefore, they claim, fatally undermine the independence of the Bank of England.

The first point is inevitably followed by the observation that inflation and the policy response to inflation – interest rate hikes and recession – hurt the poor. As ever, the first line of attack on economic policies proposed by the left is to claim they will hurt the very people they aim to help. Leslie falls back on the old trope that the state must ‘live within its means’. It is well known that this government-as-household analogy is nonsense. But what of the monetary argument?

Inflation is not caused by printing money per se. It is instead the result of a combination of factors: wage increases, supply not keeping pace with demand, and shortages of commodities, many of which are imported.

By these measures, inflationary pressure is currently low – official CPI inflation is around zero. Since this measure tends to over-estimate true inflation, the UK is probably in deflation. There is finally evidence of rising wages – but this comes after both a sharp drop in real wages following the financial crisis and an extended period in which wages have grown more slowly than output. The pound is strong, reducing price pressure from imports.

More importantly, the purpose of investment is to increase productive capacity and raise labour productivity. Discussion of monetary policy usually revolves around the ‘output gap’ – the difference between the demand for goods and services and the potential supply. Putting to one side the problems with this unobservable metric, the point is that investment spending increases potential output as well as stimulating demand, so the medium-run effect on the output gap cannot be determined a priori.
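To make the point concrete, here is the arithmetic in its simplest form (an illustrative sketch, not a claim about any particular model). Write the output gap as

\[
\mathrm{gap}_t = \frac{Y_t - Y^{*}_t}{Y^{*}_t},
\]

where \(Y_t\) is actual output (demand) and \(Y^{*}_t\) is potential output (supply capacity). A programme of public investment raises \(Y_t\) immediately, via the multiplier, but it also adds to productive capacity and so raises \(Y^{*}_t\) over time. The gap, and with it any inflationary pressure, can therefore move in either direction depending on which effect dominates – which is the sense in which the outcome cannot be determined in advance.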

The issue of central bank independence is more subtle – certainly more subtle than the binary choice presented by Corbyn’s critics. That central banks should be free from the malign influence of democratically elected policy-makers has been an article of faith since 1997 when the Labour government granted the Bank of England operational independence. But, as Frances Coppola has argued, central bank independence is an illusion. The Bank’s mandate and inflation target are set by the government. In extremis, the government can choose to revoke ‘independence’.

More relevant to the current debate is the fact that the post-crisis period has already seen significant blurring of the distinction between monetary and fiscal policy. In using its balance sheet to purchase £375bn of securities – mostly government bonds – the Bank of England has, to all intents and purposes, funded the government deficit. The assertion that the barrier is maintained by allowing debt to be purchased only in the secondary market is sleight of hand: while the government was selling new bonds to private financial institutions, the Bank was simultaneously buying previously issued government bonds from much the same institutions.
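A stylised example, with purely hypothetical numbers, illustrates the point. Suppose the Treasury sells £1bn of newly issued gilts to a commercial bank and, in the same week, the Bank of England buys £1bn of previously issued gilts from the same bank under QE. The bank ends up holding the same quantity of gilts as before, plus £1bn of newly created central bank reserves, while the Treasury has raised £1bn of new funding. For all practical purposes the combined outcome is the same as if the Bank had financed the deficit directly; the detour through the secondary market changes the paperwork rather than the economics.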

At this point, critics will object that the Bank was operating within its mandate: QE was enacted in an attempt to hit the inflation target. This is most likely true, although during the inflation spike in 2011 there were suggestions that the Bank was deliberately under-forecasting inflation in order to be able to run looser policy; as it turned out, the Bank’s forecasts over-estimated inflation.

None of this alters the fact that quantitative easing both increases the ability of the government to finance deficit spending and has distributional consequences: QE reduced the interest rate on government bonds while increasing the wealth of the already wealthy. Crucially, there won’t be a return to ‘conventional’ monetary policy any time soon. At a panel discussion at the FT’s Alphaville conference on ‘Central Banking After the Crisis’ featuring George Magnus and Claudio Borio among others, there was consensus that we have entered a new era in which the distinction between monetary and fiscal policy holds little relevance; there will be no return to the ‘haven of familiar monetary practice’ in which steering of short-term interest rates is the primary mechanism of macroeconomic control.

The issue which has triggered this debate is the long-term decline in UK capital expenditure – both public and private. An increase in investment is desperately needed. Corbyn isn’t the first to suggest ‘QE for the people’ – a number of respectable economic commentators have recently called for such measures in letters to the Financial Times and Guardian. Martin Wolf, chief economics commentator at the FT, recently argued that ‘the case for using the state’s power to create credit and money in support of public spending is strong’. Former Chairman of the Financial Services Authority, Adair Turner, has made similar proposals.

I agree, however, with the view that it makes more sense to fund public investment the old-fashioned way – using bonds issued by the Treasury. Where I disagree with Corbyn’s critics is on the sanctity of ‘independent’ monetary policy; the Bank should stand ready to ensure that these bonds can be issued at an affordable rate of interest.

Why has Corbyn – supposedly a throwback to the 1980s – proposed this new-fangled monetary mechanism? Rather than some sort of populist gesture, I suspect this reflects a status quo which has elevated monetary policy while downgrading fiscal policy. This, in turn, reflects the belief that the government can’t be trusted to make decisions about the direction of the economy; only the private sector has the correct incentive structures in place to guide us to an optimal equilibrium. Monetary policy is the macroeconomic tool of choice because it respects the primacy of the market.

Given that the boundary between fiscal and monetary policy has broken down at least semi-permanently, that status quo no longer holds. It is now time for a serious discussion about the correct approach to macroeconomic stabilisation, the state’s role in directing and financing investment and the distributional implications of monetary policy. It is to Corbyn’s credit that these issues are at last being debated.

Models, maths and macro: A defence of Godley

To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are, however, dismissed because—in a nutshell—there’s nothing better out there.

This argument is deficient in two respects. First, it is self-evidently flawed to hold that a tool should be retained, despite overwhelming and damning evidence that it is faulty—and dangerously so—simply because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—in fact undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

In response to such a demand, I conceded ground by noting that the sectoral balances approach, most closely associated with the work of Wynne Godley, was one example of mathematical formalism in heterodox economics. I highlighted Godley’s famous 1999 paper in which, on the basis of simulations from a formal macro model, he produces a remarkably prescient prediction of the 2008 financial crisis:

…Moreover, if, per impossibile, the growth in net lending and the growth in money supply growth were to continue for another eight years, the implied indebtedness of the private sector would then be so extremely large that a sensational day of reckoning could then be at hand.

This prediction was based on simulations of the private sector debt-to-income ratio in a system of equations constructed around the well-known identity that the financial balances of the private, public and foreign sectors must sum to zero. Godley’s assertion was that, at some point, the growth of private sector debt relative to income must come to an end, triggering a deflationary deleveraging cycle—and so it turned out.
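For readers who want the algebra, a minimal sketch of the logic (an illustration only, not Godley’s full model) runs as follows. The identity can be written

\[
(S - I) + (T - G) + (M - X) \equiv 0,
\]

where \(S - I\) is the private sector’s financial balance, \(T - G\) the government’s, and \(M - X\) the foreign sector’s. If the private sector runs a financial deficit of \(\delta_t\) as a share of income, its debt stock grows by that amount each period, and with nominal income growing at rate \(g\) the debt-to-income ratio \(d_t\) evolves approximately as

\[
d_{t+1} \approx \frac{d_t + \delta_t}{1 + g}.
\]

The ratio rises whenever \(\delta_t > g\,d_t\), so a deficit that keeps growing relative to income pushes \(d\) ever higher: this is the arithmetic behind the ‘sensational day of reckoning’.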

Despite these predictions being generated on the basis of a fully-specified mathematical model, they are dismissed by Smith as “chartblogging” (see the quote above). If “chartblogging” refers to constructing an argument by highlighting trends in graphical representations of macroeconomic data, this seems an entirely admissible approach to macroeconomic analysis. Academics and policy-makers in the 2000s could certainly have done worse than to examine a chart of the household debt-to-income ratio. This would undoubtedly have proved more instructive than adding another mathematical trill to one of the polynomials of their beloved DSGE models—models, it must be emphasised, once again, in which money, banks and debt are, at best, an afterthought.

But the “chartblogging” slur is not even half-way accurate. The macroeconomic model used by Godley grew out of research at the Cambridge Economic Policy Group in the 1970s when Godley and his colleagues Francis Cripps and Nicholas Kaldor were advisors to the Treasury. It is essentially an old-style macroeconometric model combined with financial and monetary stock-flow accounting. The stock-flow modelling methodology has subsequently developed in a number of directions and detailed expositions are to be found in a wide range of publications including the well-known textbook by Lavoie and Godley—a book which surely contains enough equations to satisfy even Smith. Other well-known macroeconometric models include the model used by the UK Office for Budget Responsibility, the Fair model in the US, and MOSES in Scandinavia, alongside similar models in Norway and Denmark. Closer in spirit to DSGE are the NIESR model and the IMF quarterly forecasting model. On the other hand, there is the CVAR method of Johansen and Juselius and similar approaches of Pesaran et al. These are only a selection of examples—and there is an equally wide range of more theoretically oriented work.

This demonstrates the mainstream’s total ignorance of the range and vibrancy of theoretical and empirical research and debate taking place outside the realm of microfounded general equilibrium modelling. The increasing defensiveness exhibited by neoclassical economists when faced with criticism suggests, moreover, an uncomfortable awareness that all is not well with the orthodoxy. Instead of acknowledging the existence of a formal literature outside the myopia of mainstream academia, the reaction is to try to shut down discussion with inaccurate blanket dismissals.

I conclude by noting that Smith isn’t Godley’s highest-profile detractor. A few years after he died—Godley, that is—Krugman wrote an unsympathetic review of his approach to economics, deriding him—oddly for someone as wedded to the IS-LM system as Krugman—for his “hydraulic Keynesianism”. In Krugman’s view, Godley’s method has been superseded by superior microfounded optimising-agent models:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers—it’s at the core of what we’re supposed to know—so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb. But there were also some notable predictive failures of hydraulic macro, failures that it seemed could have been avoided by thinking more in maximizing terms.

Predictive failures? Of all the accusations that could be levelled against Godley, that one takes some chutzpah.

Jo Michell