
Consistent modelling and inconsistent terminology


Simon Wren-Lewis has a couple of recent posts up on heterodox macro, and stock-flow consistent modelling in particular. His posts are constructive and engaging. I want to respond to some of the points raised.

Simon discusses the modelling approach originating with Wynne Godley, Francis Cripps and others at the Cambridge Economic Policy Group in the 1970s. More recently this approach is associated with the work of Marc Lavoie who co-wrote the key textbook on the topic with Godley.

The term ‘stock-flow consistent’ was coined by Claudio Dos Santos in his PhD thesis, ‘Three essays in stock flow consistent modelling’ and has been a source of misunderstanding ever since. Simon writes, ‘it is inferred that mainstream models fail to impose stock flow consistency.’ As I tried to emphasise in the blog which Simon links to, this is not the intention: ‘any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.’ (There is an important caveat here: this consistency won’t be maintained after log-linearisation – a standard step in DSGE solution – and the further a linearised model gets from the steady state, the worse this inconsistency will become.)[1]
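
To make the caveat concrete, here is a minimal illustration (my own example, not one from Simon’s post): the national accounting identity holds exactly in levels, but only approximately once log-linearised.

$$Y_t = C_t + I_t + G_t \quad \text{(exact)}, \qquad \hat{y}_t \approx \frac{\bar{C}}{\bar{Y}}\hat{c}_t + \frac{\bar{I}}{\bar{Y}}\hat{i}_t + \frac{\bar{G}}{\bar{Y}}\hat{g}_t, \qquad \hat{x}_t \equiv \log(X_t/\bar{X})$$

The approximation error is second-order in the deviations, so the further the linearised model moves from the steady state $(\bar{Y}, \bar{C}, \bar{I}, \bar{G})$, the more its ‘accounts’ fail to add up.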

Marc Lavoie has emphasised that he regrets adopting the name, precisely because of the implication that consistency is not maintained in other modelling traditions. Instead, the term refers to a subset of models characterised by a number of specific features. These include the following: aggregate behavioural macro relationships informed by both empirical evidence and post-Keynesian theory; detailed, institutionally-specific modelling of the monetary and financial sector; and explicit feedback effects from financial balance sheets to economic behaviour and the stability of the macro system both in the short run and the long run.

A distinctive feature of these models is their rejection of the loanable funds theory of banking and money – a position endorsed in a recent Bank of England Quarterly Bulletin and Working Paper. Partly as a result of this view of the importance of money and money-values in the decision-making process, these models are usually specified in nominal magnitudes. As a result, they map more directly onto the national accounts than real-sector models, which require complex transformations of data series using price deflators.

Since the behavioural features of these models are informed by a well-developed theoretical tradition, Simon’s assertion that SFC modelling is ‘accounting, not economics’ is inaccurate. Accounting is one important element in a broader methodological approach. Imposing detailed financial accounting alongside behavioural assumptions about how financial stocks and flows evolve imposes constraints across the entire system. Rather like trying to squeeze the air out of one part of a balloon, only to find another part inflating, chasing assets and liabilities around a closed system of linked balance sheets can be an informative exercise – because where leverage eventually turns up is not always clear at the outset. Likewise, SFC models may include detailed modelling of inventories, pricing and profits, or of changes in net worth due to asset price revaluation and price inflation. For such processes, even the accounting is non-trivial. Taking accounting seriously allows modellers to incorporate institutional complexity – something of increasing importance in today’s world.
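
As a sketch of the accounting discipline involved (standard national-accounting identities, stated in my own notation): in a closed system every financial asset is some other sector’s liability, so both stocks and flows must sum to zero across sectors.

$$\sum_{s} NFA_{s,t} = 0, \qquad \sum_{s} \big(S_{s,t} - I_{s,t}\big) = 0$$

where $NFA_{s,t}$ is the net financial assets of sector $s$ and $S_{s,t} - I_{s,t}$ is its financial balance. One sector can only deleverage if the counterpart adjustment shows up somewhere else in the system – the ‘balloon’ of the analogy above.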

The inclusion of detailed financial modelling allows the models to capture Godley’s view that agents aim to achieve certain stock-flow norms. These may include household debt-to-income ratios, inventories-to-sales ratios for firms and leverage ratios for banks. Many of the functional forms used implicitly capture these stock-flow ratios. This is the case for the simple consumption function used in the BoE paper discussed by Simon, as shown here. Of course, other functional specifications are possible, as in this model, for example, which includes a direct interest rate effect on consumption.
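
For illustration, a consumption function of the form standard in the Godley-Lavoie textbook tradition (I am not claiming this is the BoE paper’s exact specification) makes the implicit stock-flow norm easy to see:

$$C_t = \alpha_1 YD_t + \alpha_2 V_{t-1}, \qquad 0 < \alpha_2 < \alpha_1 < 1$$

where $YD$ is disposable income and $V$ is net wealth. In a steady state, saving is zero, so $C = YD$ and

$$YD = \alpha_1 YD + \alpha_2 V^* \;\;\Rightarrow\;\; \frac{V^*}{YD} = \frac{1 - \alpha_1}{\alpha_2}$$

i.e. the two propensities jointly imply a target wealth-to-income ratio towards which households adjust.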

Simon notes that adding basic financial accounting to standard models is trivial but ‘in most mainstream models these balances are of no consequence’. This is an important point, and should set alarm bells ringing. Simon identifies one reason for the neutrality of finance in standard models: ‘the simplicity of the dominant mainstream model of intertemporal consumption’.

There are deeper reasons why the financial sector has little role in standard macro. In the majority of standard DSGE macro models, the system automatically tends towards some long-run, supply-side-determined full-employment equilibrium – in other words, the models incorporate Milton Friedman’s long-run vertical Phillips Curve. Further, in most DSGE models, income distribution has no long-run effect on macroeconomic outcomes.

Post-Keynesian economics, which provides much of the underlying theoretical structure of SFC models, takes issue with these assumptions. Instead, it is argued, Keynes was correct in his assertion that demand deficiency can lead economies to become stuck in equilibria characterised by under-employment or stagnation.

Now, if the economic system is always in the process of returning to the flexible-price full-employment equilibrium, then financial stocks will be, at most, of transitory significance. They may serve to amplify macroeconomic fluctuations, as in the Bernanke-Gertler-Gilchrist models, but they will have no long-run effects. This is the reason that DSGE models which do attempt to incorporate financial leverage also require additional ‘ad-hoc’ adjustments to the deeper model assumptions – for example, this model by Kumhof and Ranciere imposes an assumption of non-negative subsistence consumption for households. As a result, when income falls, households are unable to reduce consumption and instead run up debt. For similar reasons, if one tries to abandon the loanable funds theory in DSGE models – one of the key reasons for the insistence on accounting in SFC models – this likewise raises non-trivial issues, as shown in this paper by Benes and Kumhof (to my knowledge the only attempt so far to produce such a model).

Non-PK-SFC models, such as the UK’s OBR model, can therefore incorporate modelling of sectoral balances and leverage ratios – but these stocks have little effect on the real outcomes of the model.

Conversely, if long-run disequilibrium is considered a plausible outcome, financial stocks may persist and feedbacks from these stocks to the real economy will have non-trivial effects. In such a situation, attempts by individuals or sectors to achieve some stock-flow ratio can alter the long-run behaviour of the system. If a balance-sheet recession persists, it will have persistent effects on the real economy – such hysteresis effects are increasingly acknowledged in the profession.

This relates to an earlier point made in Simon’s post: ‘the fact that leverage was allowed to increase substantially before the crisis was not something that most macroeconomists were even aware of … it just wasn’t their field’. I’m surprised this is presented as evidence for the defence of mainstream macro.

The central point made by economists like Minsky and Godley was that financial dynamics should be part of our field. The fact that by 2007 it wasn’t illustrates how badly mainstream macroeconomics went wrong. Between Real Business Cycle models, Rational Expectations, the Efficient Markets Hypothesis and CAPM, economists convinced themselves – and, more importantly, policy-makers – that the financial system was none of their business. The fact that economists forgot to look at leverage ratios wasn’t an absent-minded oversight. As Olivier Blanchard argues:

 ‘… mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.’

This is partially acknowledged by Simon when he argues that the ‘microfoundations revolution’ lies behind economists’ myopia on the financial system. Where I, of course, agree with Simon is that ‘had the microfoundations revolution been more tolerant of other methodologies … macroeconomics may well have done more to integrate the financial sector into their models before the crisis’. Putting aside the point that, for the most part, the microfoundations revolution didn’t actually lead to microfounded models, ‘integrating the financial sector’ into models is exactly what people like Godley, Lavoie and others were doing.

Simon also makes an important point in highlighting the lack of acknowledgement of antecedents by PK-SFC authors and, as a result, a lack of continuity between PK-SFC models and the earlier structural econometric models (SEMs) which were eventually killed off by the shift to microfounded models. There is a rich seam of work here – heterodox economists should both acknowledge this and draw on it in their own work. In many respects, I see the PK-SFC approach as a continuation of the SEM tradition – I was therefore pleased to read this paper in which Simon argues for a return to the use of SEMs alongside DSGE and VAR techniques.

To my mind, this is what is attempted in the Bank of England paper criticised by Simon – the authors develop a non-DSGE, econometrically estimated, structural model of the UK economy in which the financial system is taken seriously. Simon is right, however, that the theoretical justifications for the behavioural specifications and the connections to previous literature could have been spelled out more clearly.

The new Bank of England model is one of a relatively small group of empirically-oriented SFC models. Others include the Levy Institute model of the US, originally developed by Wynne Godley and now maintained by Gennaro Zezza, the UNCTAD Global Policy model, developed in collaboration with Godley’s old colleague Francis Cripps, and the Gudgin and Coutts model of the UK economy (the last of these is not yet fully stock-flow consistent but shares much of its theoretical structure with the other models).

One important area for improvement in these models lies with their econometric specification. The models tend to have large numbers of parameters, making them difficult to estimate other than through individual OLS regressions of behavioural relationships. PK-SFC authors can certainly learn from the older SEM tradition in this area.
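
As a concrete (and deliberately simplified) sketch of what ‘individual OLS regressions of behavioural relationships’ means in practice, a consumption function like the one above could be estimated equation-by-equation on its own – here on synthetic data, with made-up parameter values:

```python
# Sketch: equation-by-equation OLS estimation of a single behavioural
# relationship, C_t = a1*YD_t + a2*V_{t-1}. Data and 'true' parameters
# are synthetic and purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
T = 200
yd = 100 + rng.normal(0, 5, T)                      # disposable income
v_lag = 300 + rng.normal(0, 20, T)                  # lagged net wealth
c = 0.7 * yd + 0.05 * v_lag + rng.normal(0, 2, T)   # consumption

X = sm.add_constant(np.column_stack([yd, v_lag]))
print(sm.OLS(c, X).fit().params)   # roughly [0, 0.7, 0.05]
```

Each equation is estimated in isolation, so cross-equation restrictions and system-wide error structure are ignored – which is precisely the limitation at issue.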

I find another point of agreement in Simon’s statement that ‘heterodox economists need to stop being heterodox’. I wouldn’t state this so strongly – I think heterodox economists need to become less heterodox. They should identify and more explicitly acknowledge those areas in which there is common ground with mainstream economics. In those areas where disagreement persists, they should try to explain more clearly why this is the case. Hopefully this will lead to more fruitful engagement in the future, rather than the negativity which has characterised some recent exchanges.

[1] Simon goes on to argue that stock-flow consistency is not ‘unique to Godley. When I was a young economist at the Treasury in the 1970s, their UK model was ‘stock-flow consistent’, and forecasts routinely looked at sector balances.’ During the 1970s, there was sustained debate between the Treasury and Godley’s Cambridge team, who were – Milton Friedman’s monetarism aside – the most prominent critics of the Keynesian conventional wisdom of the time – there is an excellent history here. I don’t know the details, but I wonder if the awareness of sectoral balances at the Treasury was partly due to Godley’s influence?


The Fable of the Ants, or Why the Representative Agent is No Such Thing


Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making process of one class of agents (for example consumers or firms) is reduced to that of a single ‘representative’ individual, whose behaviour is taken to be identical to that assumed to characterise actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.
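
Concretely (the standard textbook set-up, included here for readers who haven’t seen it): the whole economy is assumed to solve a single intertemporal problem of the form

$$\max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t) \quad \text{s.t.} \quad a_{t+1} = (1 + r_t)\, a_t + y_t - c_t$$

whose first-order condition is the consumption Euler equation

$$u'(c_t) = \beta\, \mathbb{E}_t\!\left[(1 + r_{t+1})\, u'(c_{t+1})\right]$$

so the aggregate consumption path of the entire economy is required to satisfy one individual’s optimality condition.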

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way that an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces some emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base. This alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
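
A minimal sketch of the mechanism in code (the classic ‘two path’ set-up; all parameter values are my own illustrative choices, not drawn from any particular paper):

```python
# Two paths from nest to food. Ants choose a path with probability
# proportional to its pheromone level; shorter paths are completed faster,
# so they receive more pheromone per trip; trails also evaporate.
import random

LENGTHS = {"short": 1.0, "long": 2.0}    # relative path lengths (assumed)
EVAPORATION = 0.05                       # fraction of pheromone lost per round
N_ANTS, N_ROUNDS = 50, 200

pheromone = {"short": 1.0, "long": 1.0}  # no initial preference

for _ in range(N_ROUNDS):
    total = pheromone["short"] + pheromone["long"]
    # Each ant independently picks a path, biased by existing trails.
    choices = ["short" if random.random() < pheromone["short"] / total
               else "long" for _ in range(N_ANTS)]
    # Reinforcement: deposit is inversely proportional to path length.
    for path in choices:
        pheromone[path] += 1.0 / LENGTHS[path]
    # Evaporation lets the colony 'forget' the weaker trail.
    for path in pheromone:
        pheromone[path] *= 1.0 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on short path: {share:.2f}")   # tends towards 1
```

No individual ant computes anything about path lengths; the shortest-path behaviour emerges from the interaction of simple rules – which is precisely the point of the analogy.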

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have known for a long time that the opposite is the case. Formal results – the Sonnenschein-Mantel-Debreu theorems – demonstrate that a population of agents, each represented using a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a single ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’ critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models need not be invariant to changes in government policy, and that they should therefore not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimisation.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals consume normal and inferior goods in fixed proportions, regardless of their income – a supermarket checkout worker will continue to consume baked beans and Swiss watches in unchanged proportions after she wins the lottery.
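
The formal condition behind this restriction is Gorman’s aggregation result (stated here from the standard micro literature, not from Simon’s post): a representative consumer exists only if every individual’s indirect utility function takes the ‘Gorman polar form’

$$v_i(p, m_i) = a_i(p) + b(p)\, m_i$$

with the income coefficient $b(p)$ common to all consumers, so that everyone’s Engel curves are linear with the same slope, and aggregate demand depends only on total income $\sum_i m_i$, not on its distribution.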

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
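
In symbols (the standard textbook form): the constant relative risk aversion utility function is

$$u(c) = \frac{c^{1-\sigma}}{1-\sigma}, \qquad -\frac{c\, u''(c)}{u'(c)} = \sigma \;\text{ for all } c$$

so the coefficient of relative risk aversion is the same constant $\sigma$ whether the household is rich or poor – exactly the restriction just described.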

Further, and unfortunately for Lucas, income distribution turns out to matter: if individuals do not all behave identically, then as the income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but the income distribution is not allowed to change.
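
A toy numerical example (my own, purely illustrative): give each household the same concave Engel curve, say demand x(m) = √m, and aggregate demand changes when a fixed total income is redistributed, even though no individual’s behaviour has changed:

```python
# With a nonlinear Engel curve x(m) = sqrt(m), aggregate demand depends on
# the distribution of a fixed total income across households.
from math import sqrt

def demand(income: float) -> float:
    """Hypothetical concave Engel curve for one household."""
    return sqrt(income)

equal = [100.0, 100.0]      # total income 200, shared equally
unequal = [150.0, 50.0]     # same total, unequally distributed

print(sum(demand(m) for m in equal))    # 20.0
print(sum(demand(m) for m in unequal))  # ~19.32: distribution matters
```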

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts: the composition of consumption shifts as income increases. It is hard to believe these restrictive special cases provide a sufficient basis on which to construct macro models which can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as a whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but ones which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that the behavioural assumptions of old-fashioned Keynesian structural macro models can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. Like the ant colony, it can be shown that, under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.
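
As a sketch of that claim (an illustrative simulation of my own, not a formal derivation): populate the economy with rule-of-thumb households whose propensities to consume vary widely, and the aggregate consumption function that emerges is stable across different micro-level configurations:

```python
# Heterogeneous rule-of-thumb consumers yield a robust aggregate Keynesian
# consumption function: different micro draws, near-identical macro slope.
import random

def aggregate_mpc(n: int, seed: int) -> float:
    """Income-weighted average propensity to consume of n households."""
    rng = random.Random(seed)
    mpcs = [rng.uniform(0.5, 0.9) for _ in range(n)]       # assumed spread
    incomes = [rng.uniform(10.0, 100.0) for _ in range(n)]
    return sum(m * y for m, y in zip(mpcs, incomes)) / sum(incomes)

for seed in range(3):
    print(round(aggregate_mpc(10_000, seed), 3))   # all roughly 0.70
```

The aggregate relationship C ≈ 0.7·Y survives arbitrary reshuffling of the individual propensities – macro-level robustness without any restriction to identical agents.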

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’.

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.