The Fable of the Ants, or Why the Representative Agent is No Such Thing


Earlier in the summer, I had a discussion on Twitter with Tony Yates, Israel Arroyo and others on the use of the representative agent in macro modelling.

The starting point for representative agent macro is an insistence that all economic models must be ‘microfounded’. This means that model behaviour must be derived from the optimising behaviour of individuals – even when the object of study is aggregates such as employment, national output or the price level. But given the difficulty – more likely the impossibility – of building an individual-by-individual model of the entire economic system, a convenient short-cut is taken. The decision-making process of one type of agent as a whole (for example consumers or firms) is reduced to that of a single ‘representative’ individual, whose behaviour is taken to be identical to that assumed to characterise actual individuals.

For example, in the simple textbook DSGE models taught to macro students, the entire economic system is assumed to behave like a single consumer with fixed and externally imposed preferences over how much they wish to consume in the present relative to the future.

I triggered the Twitter debate by noting that this is equivalent to attempting to model the behaviour of a colony of ants by constructing a model of one large ‘average’ ant. The obvious issue illustrated by the analogy is that ants are relatively simple organisms with a limited range of behaviours – but the aggregate behaviour of an ant colony is both more complex and qualitatively different to that of an individual ant.

This is a well-known topic in computer science: a class of optimisation algorithms was developed by writing code which mimics the way that an ant colony collectively locates food. These algorithms are a sub-group of a broader class of ‘swarm intelligence’ algorithms. The common feature is that interaction between ‘agents’ in a population, where the behaviour of each individual is specified as a simple set of rules, produces some emergent ‘intelligent’ behaviour at the population level.

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base. This alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located. It turns out this simple rules-based system produces a highly efficient colony-level algorithm for locating the shortest paths to food supplies.
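The pheromone mechanism described above can be sketched in a few lines of code. This is a deliberately minimal toy – not any canonical ant colony optimisation algorithm – with two fixed paths and illustrative parameter values; all names and numbers are my own assumptions:

```python
import random

def simulate_colony(n_ants=100, n_rounds=50, evaporation=0.5, seed=0):
    """Toy two-path ant colony: each ant picks a path with probability
    proportional to its pheromone level; returning ants deposit pheromone
    in inverse proportion to path length; trails evaporate each round."""
    random.seed(seed)
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}  # start with no preference
    for _ in range(n_rounds):
        deposits = {"short": 0.0, "long": 0.0}
        total = pheromone["short"] + pheromone["long"]
        for _ in range(n_ants):
            path = "short" if random.random() < pheromone["short"] / total else "long"
            # a shorter round trip means more pheromone laid per unit time
            deposits[path] += 1.0 / lengths[path]
        for path in pheromone:
            pheromone[path] = (1 - evaporation) * pheromone[path] + deposits[path]
    return pheromone

trails = simulate_colony()
```

After a few rounds the colony concentrates on the shorter path, even though no individual ant ever compares path lengths – the ‘shortest path’ computation exists only at the level of the colony.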

The key point about these algorithms is that the emergent behaviour is qualitatively different from that of individual agents – and is typically robust to changes at the micro level: a reasonably wide degree of variation in ant behaviour at the individual level is possible without disruption to the behaviour of the colony. Further, these emergent properties cannot usually be identified by analysing a single agent in isolation – they will only occur as a result of the interaction between agents (and between agents and their environment).

But this is not how representative agent macro works. Instead, it is assumed that the aggregate behaviour is simply identical to that of individual agents. To take another analogy, it is like a physicist modelling the behaviour of a gas in a room by starting with the assumption of one room-sized molecule.

Presumably economists have good reason to believe that, in the case of economics, this simplifying assumption is valid?

On the contrary, microeconomists have known for a long time that the opposite is the case. Formal proofs demonstrate that a population of agents, each represented using a standard neoclassical inter-temporal utility function, will not produce behaviour at the aggregate level which is consistent with a ‘representative’ utility function. In other words, such a system has emergent properties. As Kirman puts it:

“… there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour.”

Although the idea of the representative agent isn’t new – it appears in Edgeworth’s 1881 tract on ‘Mathematical Psychics’ – it attained its current dominance as a result of Robert Lucas’ critique of Keynesian structural macroeconomic models. Lucas argued that the behavioural relationships underpinning these models are not invariant to changes in government policy and therefore should not be used to inform such policy. The conclusion drawn – involving a significant logical leap of faith – was that all macroeconomic models should be based on explicit microeconomic optimisation.

This turned out to be rather difficult in practice. In order to produce models which are ‘well-behaved’ at the macro level, one has to impose highly implausible restrictions on individual agents.

A key restriction needed to ensure that microeconomic optimisation behaviour is preserved at the macro level is that of linear ‘Engel curves’. In cross-sectional analysis, this means individuals consume normal and inferior goods in fixed proportions, regardless of their income – a supermarket checkout worker will continue to consume baked beans and Swiss watches in unchanged proportions after she wins the lottery.

In an inter-temporal setting – i.e. in macroeconomic models – this translates to an assumption of constant relative risk aversion. This imposes the constraint that any individual’s aversion to losing a fixed proportion of her income remains constant even as her income changes.
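For readers who want the formalism, the standard constant relative risk aversion (CRRA) utility function takes the form:

```latex
u(c) = \frac{c^{1-\sigma}}{1-\sigma}, \qquad \sigma > 0,\ \sigma \neq 1
```

for which the Arrow–Pratt coefficient of relative risk aversion,

```latex
R(c) \equiv -\frac{c\, u''(c)}{u'(c)} = \sigma,
```

is the same constant $\sigma$ at every level of consumption – which is precisely the restriction described above: aversion to proportional losses does not vary with income.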

Further, and unfortunately for Lucas, income distribution turns out to matter: if all individuals do not behave identically, then as income distribution changes, aggregate behaviour will also shift. As a result, aggregate utility functions will only be ‘well-behaved’ if, for example, individuals have identical and linear Engel curves, or if individuals have different linear Engel curves but income distribution is not allowed to change.
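The distributional point is easy to demonstrate with arithmetic. In the illustrative sketch below (all numbers are invented for exposition), two households have linear but *different* Engel curves; holding total income fixed at 100 while redistributing it changes aggregate spending, so no aggregate function of total income alone can describe behaviour:

```python
def consumption(income, intercept, slope):
    """Linear Engel curve: spending on a good as a function of income."""
    return intercept + slope * income

# Two households with different (linear) Engel curves for the same good.
poor = dict(intercept=5.0, slope=0.8)  # high marginal propensity to spend
rich = dict(intercept=5.0, slope=0.2)  # low marginal propensity to spend

def aggregate(income_poor, income_rich):
    return consumption(income_poor, **poor) + consumption(income_rich, **rich)

# Same total income (100), two different distributions:
unequal = aggregate(20, 80)  # 5 + 16 + 5 + 16 = 42
equal   = aggregate(50, 50)  # 5 + 40 + 5 + 10 = 60
```

Only if the two slopes were identical would the distribution wash out – which is exactly the restriction the representative agent requires.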

As well as assuming away any role for, say, income distribution or financial interactions, these assumptions contradict well-established empirical facts. The composition of consumption shifts as income increases. It is hard to believe these restrictive special cases provide a sufficient basis on which to construct macro models which can inform policy decisions – but this is exactly what is done.

Kirman notes that ‘a lot of microeconomists said that this was not very good, but macroeconomists did not take that message on board at all. They simply said that we will just have to simplify things until we get to a situation where we do have uniqueness and stability. And then of course we arrive at the famous representative individual.’

The key point here is that a model in which the population as a whole collectively solves an inter-temporal optimisation problem – identical to that assumed to be solved by individuals – cannot be held to be ‘micro-founded’ in any serious way. Instead, representative agent models are aggregative macroeconomic models – like Keynesian structural econometric models – but models which impose arbitrary and implausible restrictions on the behaviour of individuals. Instead of being ‘micro-founded’, these models are ‘micro-roofed’ (the term originates with Matheus Grasselli).

It can be argued that old-fashioned Keynesian structural macro behavioural assumptions can in fact stake a stronger claim to compatibility with plausible microeconomic behaviour – precisely because arbitrary restrictions on individual behaviour are not imposed. Like the ant colony, it can be shown that, under sensible assumptions, robust aggregate Keynesian consumption and saving functions can be derived from a range of microeconomic behaviours – both optimising and non-optimising.

So what of the Lucas Critique?

Given that representative agent models are not micro-founded but are aggregate macroeconomic representations, Peter Skott argues that ‘the appropriate definition of the agent will itself typically depend on the policy regime. Thus, the representative-agent models are themselves subject to the Lucas critique. In short, the Lucas inspired research program has been a failure.’

This does not mean that microeconomic behaviour doesn’t matter. Nor is it an argument for a return to the simplistic Keynesian macro modelling of the 1970s. As Hoover puts it:

‘This is not to deny the Lucas critique. Rather it is to suggest that its reach may be sufficiently moderated in aggregate data that there are useful macroeconomic relationships to model that are relatively invariant’

Instead, it should be accepted that some aggregate macroeconomic behavioural relationships are likely to be robust, at least in some contexts and over some periods of time. At the same time, we now have much greater scope to investigate the relationships between micro and macro behaviours. In particular, computing power allows for the use of agent-based simulations to analyse the emergent properties of complex social systems.

This seems a more promising line of enquiry than the dead end of representative agent DSGE modelling.

Models, maths and macro: A defence of Godley

To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are however dismissed because—in a nutshell—there’s nothing better out there.

This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—rather undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

In response to such a demand, I conceded ground by noting that the sectoral balances approach, most closely associated with the work of Wynne Godley, was one example of mathematical formalism in heterodox economics. I highlighted Godley’s famous 1999 paper in which, on the basis of simulations from a formal macro model, he produces a remarkably prescient prediction of the 2008 financial crisis:

…Moreover, if, per impossibile, the growth in net lending and the growth in money supply growth were to continue for another eight years, the implied indebtedness of the private sector would then be so extremely large that a sensational day of reckoning could then be at hand.

This prediction was based on simulations of the private sector debt-to-income ratio in a system of equations constructed around the well-known identity that the financial balances of the private, public and foreign sector must sum to zero. Godley’s assertion was that, at some point, the growth of private sector debt relative to income must come to an end, triggering a deflationary deleveraging cycle—and so it turned out.
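The accounting logic behind that warning can be sketched in a few lines. This is not Godley’s model – his was a full macroeconometric system – but a toy with invented parameters showing how the zero-sum balance identity interacts with a persistent private deficit:

```python
def private_debt_path(periods=20, income=100.0, income_growth=0.02,
                      private_balance_ratio=-0.03):
    """Toy sketch: a persistent private-sector deficit (balance < 0)
    accumulates as a stock of debt. When the deficit ratio exceeds what
    income growth can absorb, the debt-to-income ratio keeps rising.
    By accounting identity, the private balance is mirrored one-for-one
    by the combined public and foreign balances."""
    debt, ratios = 0.0, []
    for _ in range(periods):
        private_balance = private_balance_ratio * income
        public_plus_foreign = -private_balance  # the three balances sum to zero
        assert abs(private_balance + public_plus_foreign) < 1e-12
        debt -= private_balance                 # a deficit adds to the debt stock
        ratios.append(debt / income)
        income *= 1 + income_growth
    return ratios

path = private_debt_path()
```

With these illustrative numbers the debt-to-income ratio rises every period – the unsustainable trajectory whose eventual reversal is the ‘sensational day of reckoning’.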

Despite these predictions being generated on the basis of a fully-specified mathematical model, they are dismissed by Smith as “chartblogging” (see the quote above). If “chartblogging” refers to constructing an argument by highlighting trends in graphical representations of macroeconomic data, this seems an entirely admissible approach to macroeconomic analysis. Academics and policy-makers in the 2000s could certainly have done worse than to examine a chart of the household debt-to-income ratio. This would undoubtedly have proved more instructive than adding another mathematical trill to one of the polynomials of their beloved DSGE models—models, it must be emphasised, once again, in which money, banks and debt are, at best, an afterthought.

But the “chartblogging” slur is not even half-way accurate. The macroeconomic model used by Godley grew out of research at the Cambridge Economic Policy Group in the 1970s when Godley and his colleagues Francis Cripps and Nicholas Kaldor were advisors to the Treasury. It is essentially an old-style macroeconometric model combined with financial and monetary stock-flow accounting. The stock-flow modelling methodology has subsequently developed in a number of directions and detailed expositions are to be found in a wide range of publications including the well-known textbook by Lavoie and Godley—a book which surely contains enough equations to satisfy even Smith. Other well-known macroeconometric models include the model used by the UK Office of Budget Responsibility, the Fair model in the US, and MOSES in Scandinavia, alongside similar models in Norway and Denmark. Closer in spirit to DSGE are the NIESR model and the IMF quarterly forecasting model. On the other hand, there is the CVAR method of Johansen and Juselius and similar approaches of Pesaran et al. These are only a selection of examples—and there is an equally wide range of more theoretically oriented work.

This demonstrates the mainstream’s total ignorance of the range and vibrancy of theoretical and empirical research and debate taking place outside the realm of microfounded general equilibrium modelling. The increasing defensiveness exhibited by neoclassical economists when faced with criticism suggests, moreover, an uncomfortable awareness that all is not well with the orthodoxy. Instead of acknowledging the existence of a formal literature outside the myopia of mainstream academia, the reaction is to try and shut down discussion with inaccurate blanket dismissals.

I conclude by noting that Smith isn’t Godley’s highest-profile detractor. A few years after he died—Godley, that is—Krugman wrote an unsympathetic review of his approach to economics, deriding him—oddly for someone as wedded to the IS-LM system as Krugman—for his “hydraulic Keynesianism”. In Krugman’s view, Godley’s method has been superseded by superior microfounded optimising-agent models:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers—it’s at the core of what we’re supposed to know—so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb. But there were also some notable predictive failures of hydraulic macro, failures that it seemed could have been avoided by thinking more in maximizing terms.

Predictive failures? Of all the accusations that could be levelled against Godley, that one takes some chutzpah.

Jo Michell