
What is the Loanable Funds theory?

I had another stimulating discussion with Noah Smith last week. This time the topic was the ‘loanable funds’ theory of the rate of interest. The discussion was triggered by my suggestion that the ‘safe asset shortage’ and associated ‘reach for yield’ are in part caused by rising wealth concentration. The logic is straightforward: since the rich spend less of their income than the poor, wealth concentration tends to increase the rate of saving out of income. This means an increase in desired savings chasing the available stock of financial assets, pushing up the price and lowering the yield.

Noah viewed this as a plausible hypothesis but suggested it relies on the loanable funds model. My view was the opposite – I think this mechanism is incompatible with the loanable funds theory. Such disagreements are often enlightening – either one of us misunderstood the mechanisms under discussion, or we were using different definitions. My instinct was that it was the latter: we meant something different by ‘loanable funds theory’ (LFT hereafter).

To try and clear this up, Noah suggested Mankiw’s textbook as a starting point – and found a set of slides which set out the LFT clearly. The model described was exactly the one I had in mind – but despite agreeing that Mankiw’s exposition of the LFT was accurate, it was clear we still didn’t agree about the original point of discussion.

The reason seems to be that Noah understands the LFT to describe any market for loans: there are some people willing to lend and some who wish to borrow. As the rate of interest rises, the volume of available lending increases but the volume of desired borrowing falls. In equilibrium, the rate of interest will settle at r* – the market-clearing rate.

What’s wrong with this? – It certainly sounds like a market for ‘loanable funds’. The problem is that the LFT is not a theory of loan market clearing per se. It’s a theory of macroeconomic equilibrium. It’s not a model of any old loan market: it’s a model of one very specific market – the market which intermediates total (net) saving with total capital investment in a closed economic system.

OK, but saving equals investment by definition in macroeconomic terms: the famous S = I identity. How can there be a market which operates to ensure equality between two identically equal magnitudes?

The issue – as Keynes explained in the General Theory – is that in a modern capitalist economy, the person who saves and the person who undertakes fixed capital investment are not usually the same. Some mechanism needs to be in place to ensure that a decision to ‘not consume’ somewhere in the system – to save – is always matched by a decision to invest – to build a new machine, road or building – somewhere else in the economy.

To see the issue more clearly consider the ‘corn economy’ used in many standard macro models: one good – corn – is produced. This good can either be consumed or invested (by planting in the ground or storing corn for later consumption). The decision to plant or store corn is simultaneously both a decision to ‘not consume’ and to ‘invest’ (the rate of return on investment will depend on the mix of stored to planted corn). In this simple economy S = I because it can’t be any other way. A market for loanable funds is not required.
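The accounting in the corn economy can be made explicit with a toy sketch (the numbers are purely illustrative):

```python
# Toy 'corn economy' accounting: with one good, the act of saving and
# the act of investing are the same act, so S = I holds by construction.
# All quantities are illustrative assumptions.

def corn_economy(output, consumed, planted, stored):
    """Return (saving, investment) for a one-good economy."""
    assert consumed + planted + stored == output, "all corn must go somewhere"
    saving = output - consumed        # the decision to 'not consume'
    investment = planted + stored     # the decision to invest
    return saving, investment

s, i = corn_economy(output=100, consumed=70, planted=20, stored=10)
# saving and investment are identical by construction: both equal 30
```

There is no way to write the third line differently from the fourth: in a one-good world the identity needs no market to enforce it.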

But this isn’t how modern capitalism works. Decisions to ‘not consume’ and decisions to invest are distributed throughout the economic system. How can we be sure that these decisions will lead to identical intended saving and investment – what ensures that S and I are equal? The loanable funds theory provides one possible answer to this question.

The theory states that decisions to save (i.e. to not consume) are decisive – investment adjusts automatically to accommodate any change in consumption behaviour. To see how this works, we need to recall how the model is derived. The diagram below shows the basic system (I’ve borrowed the figure from Nick Rowe).


The upward sloping ‘desired saving’ curve is derived on the assumption that people are ‘impatient’ – they prefer current consumption to future consumption. In order to induce people to save, a return needs to be paid on their savings. As the return paid on savings increases, consumers are collectively willing to forgo a greater volume of current consumption in return for a future payoff.

The downward sloping investment curve is derived on standard neoclassical marginalist principles. ‘Factors of production’ (i.e. labour and capital) receive ‘what they are worth’ in competitive markets. The real wage is equal to the marginal productivity of labour and the return on ‘capital’ is likewise equal to the marginal productivity of capital. As the ‘quantity’ of capital increases, the marginal product – and thus the rate of return – falls.

So the S and I curves depict how much saving and investment would take place at each possible rate of interest. As long as the S and I curves are well-defined and ‘monotonic’ (a strong assumption), there is only one rate of interest at which the amount people wish to lend is equal to the amount (other) people would like to borrow. This is r*, the point of intersection between the curves. This rate of interest is often referred to as the Wicksellian ‘natural rate’.

Now, consider what happens if the collective impatience of society decreases. At any rate of interest, consumption as a share of income will be lower and desired saving correspondingly higher – the S curve moves to the right. As the S curve shifts to the right – assuming no change in the technology determining the slope and position of the I curve – a greater share of national income is ‘not consumed’. But by pushing down the rate of interest in the loanable funds market, reduced consumption – somewhat miraculously – leads to an automatic increase in investment. An outward shift in the S curve is accompanied by a shift along the I curve.
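The mechanics of the diagram can be sketched numerically. Assuming illustrative linear S and I schedules (the slopes and intercepts below are made up for the example, not estimates):

```python
# Loanable-funds sketch with linear schedules (illustrative parameters):
#   S(r) = s_intercept + s_slope * r   (upward sloping, s_slope > 0)
#   I(r) = i_intercept - i_slope * r   (downward sloping, i_slope > 0)

def natural_rate(s_intercept, s_slope, i_intercept, i_slope):
    """Solve S(r) = I(r) for the market-clearing ('natural') rate r*."""
    return (i_intercept - s_intercept) / (s_slope + i_slope)

r_star = natural_rate(s_intercept=10, s_slope=2, i_intercept=40, i_slope=3)
# baseline: r* = (40 - 10) / 5 = 6

# A fall in impatience shifts S to the right (a higher intercept):
# r* falls and, reading along the unchanged I curve, investment rises.
r_new = natural_rate(s_intercept=15, s_slope=2, i_intercept=40, i_slope=3)
investment_old = 40 - 3 * r_star   # 22
investment_new = 40 - 3 * r_new    # 25
```

The outward shift in S lowers r* from 6 to 5 and raises investment from 22 to 25: reduced consumption is automatically converted into higher investment, which is exactly the adjustment the LFT asserts.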

Consider what this means for macroeconomic aggregates. Assuming a closed system, income is, by definition, equal to consumption plus investment: Y = C + I. The LFT says that in freely adjusting markets, reductions in C due to shifts in preferences are automatically offset by increases in I. Y will remain at the ‘full employment’ rate of output at all times.

The LFT therefore underpins ‘Say’s Law’ – summarised by Keynes as ‘supply creates its own demand’. It was thus a key target for Keynes’ attack on the ‘Law’ in his General Theory. Keynes argued against the notion that saving decisions are strongly influenced by the rate of interest. Instead, he argued consumption is mostly determined by income. If individuals consume a fixed proportion of their income, the S curve in the diagram is no longer well defined – at any given level of output, S is vertical, but the position of the curve shifts with output. This is quite different to the LFT which regards the position of the two curves as determined by the ‘deep’ structural parameters of the system – technology and preferences.

How then is the rate of interest determined in Keynes’ theory? – the answer is ‘liquidity preference’. Rather than desired saving determining the rate of interest, what matters is the composition of financial assets people use to hold their savings. Keynes simplifies the story by assuming only two assets: ‘money’ which pays no interest and ‘bonds’ which do pay interest. It is the interaction of supply and demand in the bond market – not the ‘loanable funds’ market – which determines the rate of interest.
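The inverse relation between asset prices and yields that drives the liquidity-preference story can be seen with the simplest interest-bearing asset, a perpetual bond (consol). The coupon and prices below are illustrative:

```python
# Price-yield arithmetic for a consol (perpetuity paying a fixed coupon).
# Stronger demand for bonds bids the price up and the yield down -
# the channel through which liquidity preference moves interest rates.
# Numbers are illustrative assumptions.

def consol_yield(coupon, price):
    """Yield on a perpetuity: r = coupon / price."""
    return coupon / price

low_demand_yield = consol_yield(coupon=5, price=100)   # 0.05, i.e. 5%
high_demand_yield = consol_yield(coupon=5, price=125)  # 0.04, i.e. 4%
```

A shift in portfolio demand between money and bonds therefore moves the rate of interest without any change in desired saving or desired investment.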

There are two key points here: the first is that saving is a residual – it is determined by output and investment. As such, there is no mechanism to ensure that desired saving and desired investment will be equalised. This means that output, not the rate of interest, will adjust to ensure that saving is equal to investment. There is no mechanism which ensures that output is maintained at full employment levels. The second is that interest rates can move without any change in either desired saving or desired investment. If there is an increase in ‘liquidity preference’ – a desire to hold lower-yielding but safer assets – this will cause an increase in the rate of interest on riskier assets.

How can the original question be framed using these two models? – What are the implications of increasing wealth concentration for yields and macro variables?

I think Noah is right that one can think of the mechanism in a loanable funds world. If redistribution towards the rich increases the average propensity to save, this will shift the S curve to the right – as in the example above – reducing the ‘natural’ rate of interest. This is the standard ‘secular stagnation’ story – a ‘global savings glut’ has pushed the natural rate below zero. However, in a loanable funds world this should – all else being equal – lead to an increase in investment. This doesn’t seem to fit the stylised facts: capital investment has been falling as a share of GDP in most advanced nations. (Critics will point out that I’m skirting the issue of the zero lower bound – I’ll have to save that for another time).

My non-LFT interpretation is the following. Firstly, I’d go further than Keynes and argue that the rate of interest is not only relatively unimportant for determining S – it also has little effect on I. There is evidence to suggest that firms’ investment decisions are fairly interest-inelastic. This means that both curves in the diagram above have a steep slope – and they shift as output changes. There is no ‘natural rate’ of interest which brings the macroeconomic system into equilibrium.

In terms of the S = I identity, this means that investment decisions are more important for the determination of macro variables than saving decisions. If total desired saving as a share of income increases – due to wealth concentration, for example – this will have little effect on investment. The volume of realised saving, however, is determined by (and identically equal to) the volume of capital investment. An increase in desired saving manifests itself not as a rise in investment – but as a fall in consumption and output.
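This causality can be sketched with a textbook Keynesian cross, in which investment is given and output does the adjusting. The parameter values are illustrative:

```python
# Keynesian-cross sketch: investment I is given, consumption depends on
# income, and output (not the interest rate) adjusts until realised
# saving equals investment. Parameter values are illustrative.

def equilibrium(investment, propensity_to_save):
    """Solve Y = C + I with C = (1 - s) * Y, which gives Y = I / s."""
    output = investment / propensity_to_save
    saving = propensity_to_save * output   # realised S is identically I
    return output, saving

y0, s0 = equilibrium(investment=20, propensity_to_save=0.2)   # Y = 100
# Wealth concentration raises the desired saving rate; with I unchanged,
# realised saving cannot rise - instead output falls (paradox of thrift).
y1, s1 = equilibrium(investment=20, propensity_to_save=0.25)  # Y = 80
```

Realised saving is 20 in both cases, pinned down by investment; the higher desired saving rate shows up entirely as lower output.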

In such a scenario – in which a higher share of nominal income is saved – the result will be weak demand for goods but strong demand for financial assets – leading to deflation in the goods market and inflation in the market for financial assets. Strong demand for financial assets will reduce rates of return – but only on financial assets: if investment is inelastic to the interest rate there is no reason to believe there will be any shift in investment or in the return on fixed capital investment.

In order to explain the relative rates of return on equity and bonds, a re-working of Keynes’ liquidity preference theory is required. Instead of a choice between ‘money’ and ‘bonds’, the choice faced by investors can be characterised as a choice between risky equity and less-risky bonds. Liquidity preference will then make itself felt as an increase in the price of bonds relative to equity – and a corresponding movement in the yields on each asset. On the other hand, an increase in total nominal saving will increase the price of all financial assets and thus reduce yields across the board. Given that it is likely that portfolio managers will have minimum target rates of return, this will induce a shift into higher-risk assets.

On ‘heterodox’ macroeconomics


Noah Smith has a new post on the failure of mainstream macroeconomics and what he perceives as the lack of ‘heterodox’ alternatives. Noah is correct about the failure of mainstream macroeconomics, particularly the dominant DSGE modelling approach. This failure is increasingly – if reluctantly – accepted within the economics discipline. As Brad Delong puts it, DSGE macro has ‘… proven a degenerating research program and a catastrophic failure: thirty years of work have produced no tools for useful forecasting or policy analysis.’

I disagree with Noah, however, when he argues that ‘heterodox’ economics has little to offer as an alternative to the failed mainstream.

The term ‘heterodox economics’ is a difficult one. I dislike it and resisted adopting it for some time: I would much rather be ‘an economist’ than ‘a heterodox economist’. But it is clear that unless you accept – pretty much without criticism – the assumptions and methodology of the mainstream, you will not be accepted as ‘an economist’. This was not the case when Joan Robinson debated with Solow and Samuelson, or Kaldor debated with Hayek. But it is the case today.

The problem with ‘heterodox economics’ is that it is self-definition in terms of the other. It says ‘we are not them’ – but says nothing about what we are. This is because it includes everything outside of the mainstream, from reasonably well-defined and coherent schools of thought such as Post Keynesians, Marxists and Austrians, to much more nebulous and ill-defined discontents of all hues. To put it bluntly, a broad definition of ‘people who disagree with mainstream economics’ is going to include a lot of cranks. People will place the boundary between serious non-mainstream economists and cranks differently, depending on their perspective.

Another problem is that these schools of thought have fundamental differences. Aside from rejecting standard neoclassical economics, the Marxists and the Austrians don’t have a great deal in common.

Noah seems to define heterodox economics as ‘non-mathematical’ economics. This is inaccurate. There is much formal modelling outside of the mainstream. The difference lies with the starting assumptions. Mainstream macro starts from the assumption of inter-temporal optimisation and a system which returns to the supply-side-determined full-employment equilibrium in the long run. Non-mainstream economists reject these in favour of assumptions which they regard as more empirically plausible.

It is true that there are some heterodox economists, for example Tony Lawson and Ben Fine, who take the position that maths is an inappropriate tool for economics and should be rejected. (Incidentally, both were originally mathematicians.) This is a minority position, and one I disagree with. The view is influential, however. The highest-ranked heterodox economics journal, the Cambridge Journal of Economics, has recently changed its editorial policy to explicitly discourage the use of mathematics. This is a serious mistake in my opinion.

So Noah’s claim about mathematics is a straw man. He implicitly acknowledges this by discussing one class of mathematical Post Keynesian models, the so-called ‘stock-flow consistent’ models (SFC). He rightly notes that the name is confusing – any correctly specified closed mathematical macro model should be internally consistent and therefore stock-flow consistent. This is certainly true of DSGE models.

SFC refers to a narrower set of models which incorporate detailed modelling of the ‘plumbing’ of the financial system alongside traditional macro Keynesian behavioural assumptions – and reject the standard inter-temporal optimising assumptions of DSGE macro. Marc Lavoie, who originally came up with the name, admits it is misleading and, with hindsight, a more appropriate name should have been chosen. But names stick, so SFC joins a long tradition of badly-named concepts in economics such as ‘real business cycles’ and ‘rational expectations’.
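To give a flavour of what an SFC model involves, here is a sketch in the spirit of the simplest model in the Godley and Lavoie textbook (often called ‘SIM’), with illustrative parameter values of my own choosing:

```python
# Minimal stock-flow consistent sketch in the spirit of Godley and
# Lavoie's simplest economy: government spending G, a flat tax rate
# theta, and consumption out of current disposable income (alpha1) and
# out of accumulated money balances (alpha2). Parameters illustrative.

def simulate_sim(G=20.0, theta=0.2, alpha1=0.6, alpha2=0.4, periods=200):
    """Iterate the model forward; return the path of output Y."""
    H = 0.0        # household money balances - the model's only stock
    path = []
    for _ in range(periods):
        # Within-period system: Y = G + C, YD = (1 - theta) * Y,
        # C = alpha1 * YD + alpha2 * H  ->  solve the linear system for Y.
        Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
        YD = (1 - theta) * Y
        C = alpha1 * YD + alpha2 * H
        H += YD - C    # household saving accumulates as money, and
        path.append(Y) # mirrors the government deficit, period by period
    return path

path = simulate_sim()
# Output converges to the steady state Y* = G / theta = 100, at which
# the budget balances and all stocks stop changing.
```

The ‘consistency’ is in the last two lines of the loop: every flow accumulates into a stock, and the household’s accumulation of money is the mirror image of the government’s deficit, so nothing leaks out of the accounting.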

Noah claims that ‘vague ideas can’t be tested against the data and rejected’.  While the characterisation of all heterodox economics as ‘vague ideas’ is another straw man, the falsifiability point is important. As Noah points out, ‘One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models.’ He also notes that big SFC models have so many parameters that they are essentially impossible to fit to the data.

This raises an important question about what we want economic models to do, and what the criteria should be for acceptance or rejection. The belief that models should provide quantitative predictions of the future has been much too strongly held. Economists need to come to terms with the reality that the future is unknowable – no model will reliably predict the future. For a while, DSGE models seemed to do a reasonable job. With hindsight, this was largely because enough degrees of freedom were added when converting them to econometric equations that they could do a reasonably good job of projecting past trends forward, along with some mean reversion.  This predictive power collapsed totally with the crisis of 2008.

Models then should be seen as ways to gain insight over the mechanisms at work and to test the implications of combining assumptions. I agree with Narayana Kocherlakota when he argues that we need to return to smaller ‘toy models’ to think through economic mechanisms. Larger econometrically estimated models are useful for sketching out future scenarios – but the predictive power assigned to such models needs to be downplayed.

So the question is then – what are the correct assumptions to make when constructing formal macro models? Noah argues that Post Keynesian models ‘don’t take human behaviour into account – the equations are typically all in terms of macroeconomic aggregates – there’s a good chance that the models could fail if policy changes make consumers and companies act differently than expected’.

This is of course Robert Lucas’s critique of structural econometric modelling. This critique was a key element in the ‘microfoundations revolution’ which ushered in the so-called Real Business Cycle models which form the core of the disastrous DSGE research programme.

The critique is misguided, however. Aggregate behavioural relationships do have a basis in individual behaviour. As Bob Solow puts it:

The original impulse to look for better or more explicit micro foundations was probably reasonable. It overlooked the fact that macroeconomics as practiced by Keynes and Pigou was full of informal microfoundations. … Generalizations about aggregative consumption-saving patterns, investment patterns, money-holding patterns were always rationalized by plausible statements about individual – and, to some extent, market-behavior.

In many ways, aggregate behavioural specifications can make a stronger claim to be based in microeconomic behaviour than the representative agent DSGE models which came to dominate mainstream macro. (I will expand on this point in a separate blog.)

Mainstream macro has reached the point that only two extremes are admitted: formal, internally consistent DSGE models, and atheoretical testing of the data using VAR models. Anything in between – such as structural econometric modelling – is rejected. As Simon Wren-Lewis has argued, this theoretical extremism cannot be justified.

Crucial issues and ideas emphasised by heterodox economists were rejected for decades by the mainstream while it was in thrall to representative-agent DSGE models. These ideas included the role of income distribution, the importance of money, credit and financial structure, the possibility of long-term stagnation due to demand-side shortfalls, the inadequacy of reliance on monetary policy alone for demand management, and the possibility of demand affecting the supply side. All of these ideas are, to a greater or lesser extent, now gradually becoming accepted and absorbed by the mainstream – in many cases with no acknowledgement of the traditions which continued to discuss and study them even as the mainstream dismissed them.

Does this mean that there is a fully-fledged ‘heterodox economics’ waiting in the wings, ready to take over from mainstream macro? It depends what is meant – is there a complete model of the economy sitting in a computer waiting for someone to turn it on? No – but there never will be, either within the mainstream or outside it. But Lavoie argues,

if by any bad luck neoclassical economics were to disappear completely from the surface of the Earth, this would leave economics utterly unaffected because heterodox economics has its own agenda, or agendas, and its own methodological approaches and models.

I think this conclusion is too strong – partly because I don’t think the boundary between neoclassical economics and heterodox economics is as clear as some claim. But it highlights the rich tradition of ideas and models outside of the mainstream – many of which have stood the test of time much better than DSGE macro. It is time these ideas were acknowledged.

Models, maths and macro: A defence of Godley

To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.

The quote is, of course, from Piketty’s Capital in the 21st Century. Judging by Noah Smith’s recent blog entry, there is still progress to be made.

Smith observes that the performance of DSGE models is dependably poor in predicting future macroeconomic outcomes—precisely the task for which they are widely deployed. Critics of DSGE are however dismissed because—in a nutshell—there’s nothing better out there.

This argument is deficient in two respects. First, there is a self-evident flaw in a belief that, despite overwhelming and damning evidence that a particular tool is faulty—and dangerously so—that tool should not be abandoned because there is no obvious replacement.

The second deficiency relates to the claim that there is no alternative way to approach macroeconomics:

When I ask angry “heterodox” people “what better alternative models are there?”, they usually either mention some models but fail to provide links and then quickly change the subject, or they link me to reports that are basically just chartblogging.

Although Smith is too polite to accuse me directly, this refers to a Twitter exchange from a few days earlier. This was triggered when I took offence at a previous post of his in which he argues that the triumph of New Keynesian sticky-price models over their Real Business Cycle predecessors was proof that “if you just keep pounding away with theory and evidence, even the toughest orthodoxy in a mean, confrontational field like macroeconomics will eventually have to give you some respect”.

When I put it to him that, rather than supporting his point, the failure of the New Keynesian model to be displaced—despite sustained and substantiated criticism—rather undermined it, he responded—predictably—by asking what should replace it.

The short answer is that there is no single model that will adequately tell you all you need to know about a macroeconomic system. A longer answer requires a discussion of methodology and the way that we, as economists, think about the economy. To diehard supporters of the ailing DSGE tradition, “a model” means a collection of dynamic simultaneous equations constructed on the basis of a narrow set of assumptions around what individual “agents” do—essentially some kind of optimisation problem. Heterodox economists argue for a much broader approach to understanding the economic system in which mathematical models are just one tool to aid us in thinking about economic processes.

What all this means is that it is very difficult to have a discussion with people for whom the only way to view the economy is through the lens of mathematical models—and a particularly narrowly defined class of mathematical models—because those individuals can only engage with an argument by demanding to be shown a sheet of equations.

In response to such a demand, I conceded ground by noting that the sectoral balances approach, most closely associated with the work of Wynne Godley, was one example of mathematical formalism in heterodox economics. I highlighted Godley’s famous 1999 paper in which, on the basis of simulations from a formal macro model, he produces a remarkably prescient prediction of the 2008 financial crisis:

…Moreover, if, per impossibile, the growth in net lending and the growth in money supply growth were to continue for another eight years, the implied indebtedness of the private sector would then be so extremely large that a sensational day of reckoning could then be at hand.

This prediction was based on simulations of the private sector debt-to-income ratio in a system of equations constructed around the well-known identity that the financial balances of the private, public and foreign sector must sum to zero. Godley’s assertion was that, at some point, the growth of private sector debt relative to income must come to an end, triggering a deflationary deleveraging cycle—and so it turned out.
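The identity at the heart of that argument is easily sketched. The figures below are illustrative, not Godley’s actual numbers:

```python
# Sectoral-balances sketch: the private, government and foreign sector
# balances must sum to zero, so any two of them pin down the third.
# Convention: foreign_balance is the foreign sector's balance, which is
# positive when the home country runs a current-account deficit.
# Figures are illustrative (shares of GDP), not Godley's numbers.

def private_balance(gov_balance, foreign_balance):
    """private + government + foreign = 0, solved for the private balance."""
    return -(gov_balance + foreign_balance)

# A government surplus of 1% of GDP alongside a current-account deficit
# of 3% of GDP forces the private sector into deficit:
pb = private_balance(gov_balance=0.01, foreign_balance=0.03)
# pb = -0.04: the private sector spends 4% of GDP more than its income,
# so private debt rises relative to income - Godley's warning sign.
```

Because the identity holds by construction, a projection of two of the balances is simultaneously a projection of the third—which is what allowed Godley to read growing private indebtedness out of fiscal and trade trends.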

Despite these predictions being generated on the basis of a fully-specified mathematical model, they are dismissed by Smith as “chartblogging” (see the quote above). If “chartblogging” refers to constructing an argument by highlighting trends in graphical representations of macroeconomic data, this seems an entirely admissible approach to macroeconomic analysis. Academics and policy-makers in the 2000s could certainly have done worse than to examine a chart of the household debt-to-income ratio. This would undoubtedly have proved more instructive than adding another mathematical trill to one of the polynomials of their beloved DSGE models—models, it must be emphasised, once again, in which money, banks and debt are, at best, an afterthought.

But the “chartblogging” slur is not even half-way accurate. The macroeconomic model used by Godley grew out of research at the Cambridge Economic Policy Group in the 1970s when Godley and his colleagues Francis Cripps and Nicholas Kaldor were advisors to the Treasury. It is essentially an old-style macroeconometric model combined with financial and monetary stock-flow accounting. The stock-flow modelling methodology has subsequently developed in a number of directions and detailed expositions are to be found in a wide range of publications including the well-known textbook by Lavoie and Godley—a book which surely contains enough equations to satisfy even Smith. Other well-known macroeconometric models include the model used by the UK Office of Budget Responsibility, the Fair model in the US, and MOSES in Scandinavia, alongside similar models in Norway and Denmark. Closer in spirit to DSGE are the NIESR model and the IMF quarterly forecasting model. On the other hand, there is the CVAR method of Johansen and Juselius and similar approaches of Pesaran et al. These are only a selection of examples—and there is an equally wide range of more theoretically oriented work.

This demonstrates the mainstream’s total ignorance of the range and vibrancy of theoretical and empirical research and debate taking place outside the realm of microfounded general equilibrium modelling. The increasing defensiveness exhibited by neoclassical economists when faced with criticism suggests, moreover, an uncomfortable awareness that all is not well with the orthodoxy. Instead of acknowledging the existence of a formal literature outside the myopia of mainstream academia, the reaction is to try and shut down discussion with inaccurate blanket dismissals.

I conclude by noting that Smith isn’t Godley’s highest-profile detractor. A few years after he died—Godley, that is—Krugman wrote an unsympathetic review of his approach to economics, deriding him—oddly for someone as wedded to the IS-LM system as Krugman—for his “hydraulic Keynesianism”. In Krugman’s view, Godley’s method has been superseded by superior microfounded optimising-agent models:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers—it’s at the core of what we’re supposed to know—so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb. But there were also some notable predictive failures of hydraulic macro, failures that it seemed could have been avoided by thinking more in maximizing terms.

Predictive failures? Of all the accusations that could be levelled against Godley, that one takes some chutzpah.

Jo Michell