The Economic Approach to Human Behaviour

Economics is the study of society. It goes beyond the allocation of scarce resources in markets, examining how agents in society behave and how their behaviour influences their environment. This human factor makes the study of Economics both important and exciting. However, how to model it is an open debate in Economics. In today’s post, I want to look at the new classical Economists’ view of humans. It has been shaped by the works of Chicago Economist and Nobel Laureate Gary Becker. In particular, Becker’s book The Economic Approach to Human Behavior (1976) serves as a cornerstone for how one ought to model human behaviour in the economy and society. It argues for (1) maximising behaviour, (2) market equilibrium, and (3) stable preferences.

Maximising Behaviour

In the new classical view, humans are assumed to be utility maximisers. They act as if they maximise their intrinsic utility or wealth function. However, utility maximisation does not necessarily imply that humans act only in their own interest. Becker (1993, p.386) argues that “individuals maximize welfare as they conceive it, whether they be selfish, altruistic, loyal, spiteful, or masochistic”. Hence utility maximisation is consistent with the view that humans can hold broader preferences while facing a wide range of constraints, such as income, time, imperfect memory and abilities, or the limited resources and opportunities available to them (Becker, 1996).

Market Equilibrium

The second pillar is market equilibrium. Markets are in place to coordinate humans’ actions. Although they vary in their degree of efficiency, markets converge towards equilibrium because humans, as utility-maximising agents, exploit all their profit opportunities.

Stable Preferences

The third pillar is not uncontroversial: it is assumed that humans’ preferences are stable over time and similar among people. These are not preferences regarding goods and services but preferences over the fundamental aspects of life that people have to deal with. Gary Becker (1976) names health, prestige, sensual pleasure, benevolence, and envy as examples. That humans should be modelled as agents with stable preferences over these fundamental aspects of life is explained as follows:

The assumption of stable preferences provides a stable foundation for generating predictions about responses to various changes, and prevents the analyst from succumbing to the temptation of simply postulating the required shift in preferences to “explain” all apparent contradictions to his predictions (Becker, 1976, p. 5).

It should also be noted that Becker changed his view on humans’ preferences over the course of his works. In his early works, he advocates a radical version of preference stability, with time-invariance and similarity of preferences among the poor and the wealthy as well as across different countries and cultures (Becker, 1976). In later works, he adopts a less radical view of preference stability in which preferences are shaped, for example, by parents during childhood, by the media through advertising, or by “imagination” capital (Becker, 1996). This less radical view acknowledges that humans’ preferences are relatively stable but can evolve through practice, habituation and learning, and may therefore be heterogeneous (Heckman, 2015).

In sum, the new classical view argues that “all human behaviour can be viewed as involving participants who maximize their utility from a stable set of preferences and accumulate an optimal amount of information and other inputs in a variety of markets” (Becker, 1976, p.14). Hence utility maximisation, market equilibrium and preference stability serve as the three pillars for how to analyse human behaviour in the economy and society. They ensure that new classical Economists apply the same analytical framework because human behaviour “is not compartmentalized, sometimes based on maximizing, sometimes not, sometimes motivated by stable preferences, sometimes by volatile ones, sometimes resulting in an optimal accumulation of information, sometimes not” (Becker, 1976, p.14).

Lastly, I would like to stress that, while Becker’s works have greatly influenced the profession (especially standard economic theory), his arguments are not uncontroversial. For example, although utility maximisation allows for a broader set of preferences and constraints, it models humans in a very mechanical way. Lucas, a central figure in the new classical approach to macroeconomics, once admitted that “we’re programming robot imitations of people, and there are real limitations on what you can get out of that” (Lucas in Klamer, 1984, p.49). In my opinion, this is one of the main reasons why Behavioural Economics has been so successful in the last couple of years, especially after the global financial crisis of 2008. Behavioural Economics, unlike New Classical Economics, models humans as humans and not as robots. While this may complicate the analysis, it allows for descriptive rather than normative models of human behaviour, which tend to be of superior predictive value.

I hope you enjoyed today’s discourse and many thanks for reading,



Becker, G.S. (1976). The Economic Approach to Human Behavior. Chicago: University of Chicago Press.

Becker, G.S. (1993). Nobel Lecture: The Economic Way of Looking at Behavior. Journal of Political Economy, 101(3), 385–409.

Becker, G.S. (1996). Accounting for Tastes. Cambridge: Harvard University Press.

Heckman, J.J. (2015). Gary Becker: Model Economic Scientist. The American Economic Review, 105(5), 74–79.

Klamer, A. (1984). The New Classical Macroeconomics: Conversations with New Classical Economists and Their Opponents. Brighton: Wheatsheaf.

The Economics of Deception and Manipulation

I recently finished George Akerlof and Robert Shiller’s latest book Phishing For Phools. While I also enjoyed their earlier book Animal Spirits, I have to say that Phishing For Phools is a hidden gem. So I decided to devote today’s post to the book and why every student of Economics should have a copy of it.

What makes Phishing For Phools different?

Phishing for Phools is different because Akerlof and Shiller give the reader a new perspective on Economics. It is not a reiteration of New Behavioural Economics because it addresses:

  1. The Role of Equilibrium in Competitive Markets,
  2. The Difficulties with ‘Revealed Preference’ and
  3. Story Grafting.

First, in their perspective on Economics, Akerlof and Shiller hold that economic systems converge towards a general equilibrium, albeit a phishing equilibrium. In contrast, work in Behavioural Economics tends to centre on shrouded markets and on economic actors having certain weaknesses (e.g. present bias). While these assumptions make phishing undeniable, the results of these studies are not generalisable. Akerlof and Shiller criticise Behavioural Economics in its current form because it misses the generality of phishing for phools in our economy. They describe a range of examples in the book, their favourite probably being Cinnabon® bakeries in airports and shopping malls, to show that when “people have informational or psychological weaknesses that can be profitably exploited” (p.170), then we can be certain that phishing for phools is going to happen. Hence phishing for phools is a general feature of our economy rather than an externality of shrouded markets or of the biases of non-rational economic actors.

A Phishing Game

I think of Akerlof and Shiller’s phishing equilibrium in the Cinnabon® example as the lower-welfare equilibrium in a simple two-player “Phishing Game” with a consumer (C) and a firm (F). The consumer, the row player, has some true preferences and some monkey-on-the-shoulder tastes. Each set of preferences maps into a choice. However, the choice based on the consumer’s true preferences yields a higher payoff for her than the choice based on her monkey-on-the-shoulder tastes (assuming that the firm simultaneously chooses to provide her with that specific good and not the alternative). The column player, the firm, has two profit opportunities: it can open a healthy shop or a sweet & tasty shop in the airport or shopping mall where the consumer can easily be phished for a phool. I have arranged the firm’s and consumer’s payoffs similarly to the Battle of the Sexes game, with the modification that the consumer receives a payoff of 3 rather than 2 in the optimal equilibrium. This allows us to distinguish the two equilibria: one which maximises social welfare (Healthy Shop | True Preferences) and a lower-welfare one, i.e. a phishing equilibrium (Sweet & Tasty Shop | Monkey-on-the-Shoulder Tastes). Both the consumer and the firm want to coordinate in the sense that the consumer wants to consume and the firm wants to sell. However, the firm wants to maximise profits by selling its sweet and tasty products rather than a healthy product (which might allow for a lower mark-up).
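The payoff structure just described can be sketched in a few lines of code. The exact numbers below are my own illustrative assumption (off-diagonal payoffs of zero, consumer payoffs of 3 and 1, firm payoffs of 1 and 2), not figures from the book:

```python
# A minimal sketch of the two-player "Phishing Game" described above.
# payoffs[(row, col)] = (consumer_payoff, firm_payoff); the numbers are
# my own Battle-of-the-Sexes-like assumption, with the consumer earning
# 3 rather than 2 in the welfare-maximising cell.
payoffs = {
    ("true_preferences", "healthy_shop"):     (3, 1),
    ("true_preferences", "sweet_tasty_shop"): (0, 0),
    ("monkey_tastes",    "healthy_shop"):     (0, 0),
    ("monkey_tastes",    "sweet_tasty_shop"): (1, 2),
}

rows = ["true_preferences", "monkey_tastes"]
cols = ["healthy_shop", "sweet_tasty_shop"]

def pure_nash_equilibria():
    """Return all strategy profiles where neither player gains by deviating."""
    equilibria = []
    for r in rows:
        for c in cols:
            u_c, u_f = payoffs[(r, c)]
            best_row = all(payoffs[(r2, c)][0] <= u_c for r2 in rows)
            best_col = all(payoffs[(r, c2)][1] <= u_f for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria())
```

Running this confirms the two pure-strategy equilibria: the welfare-maximising (Healthy Shop | True Preferences) profile and the phishing equilibrium (Sweet & Tasty Shop | Monkey-on-the-Shoulder Tastes).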

Crucially, Akerlof and Shiller argue that such a ‘general equilibrium’ perspective, with phishing for phools as a general feature of the economy, helps answer why economists did not see the financial crisis coming: they did not look for phishes stemming from the informational and psychological weaknesses of economic actors, and for the counterparts that profitably exploited them.

Moving on to the second argument: the book is also not a reiteration of Behavioural Economics because it challenges Revealed Preference. The authors criticise this concept and its general acceptance in Behavioural Economics. As mentioned above, Akerlof and Shiller distinguish between what people really want and what they think they want, i.e. their monkey-on-the-shoulder tastes (hence the book’s subtitle The Economics of Deception and Manipulation). Akerlof and Shiller criticise that both standard economic theory and Behavioural Economics assume that people optimise and therefore make choices which maximise their utility. Both fields tend to assume that people reveal their preferences if they are free to choose and given all the necessary information. This allows for the simple assumption that, in theory and practice, people’s choices reflect their true preferences. However, this is not what we observe: Akerlof and Shiller give plenty of examples in their book which they call the NO-ONE-COULD-POSSIBLY-WANTs. They categorise them into the areas of (1) personal financial security, (2) the stability of the macroeconomy, (3) health, and (4) the quality of government in order to highlight how prevalent they are. The book therefore challenges both standard economic theory and Behavioural Economics for overlooking this subtle but important difference between true preferences and what people think they want.

Third, Story Grafting makes the book different from Behavioural Economics. Akerlof and Shiller make the case for a new variable in Economics: the story that people are telling themselves. While Behavioural Economics has come up with a choice menu of psychological biases to explain non-rational behaviours, it has often neglected the underlying mental frames of decision-making. Daniel Kahneman (1999, in Kahneman and Tversky, 2000, p.xiv) once said that we

apply the label “frame” to descriptions of decisions at two levels: the formulation to which decision makers are exposed is called a frame and so is the interpretation that they construct for themselves.

New Behavioural Economics has very much focused on the latter. It is the frame over which decision-makers have control. In contrast, the frame which decision-makers are exposed to is much broader and, in some sense, out of their control. Akerlof and Shiller’s stories describe these broader frames, which are shaped in great part by the media, our environment and our peers. Rather than offering a choice menu of psychological biases, Akerlof and Shiller argue for recognising these broad mental frames that influence individuals’ decisions. Stories, like phishes, are a general feature of our economy. Economics as a study of society needs to go beyond the analysis of the exchange of scarce resources. It needs to become more inclusive. In particular, Akerlof and Shiller argue that “we should be inclusive of whatever thinking, conscious or subconscious, is the basis for people’s decisions” (p.172).

In my opinion, Akerlof and Shiller have crafted a hidden gem with their book Phishing For Phools because it really offers a new perspective on Economics which goes beyond recent work in New Behavioural Economics. It makes the case for phishes and stories as a general feature of our economy and makes the subtle but important differentiation between true preferences and monkey-on-the-shoulder tastes. This New Economic perspective is more inclusive and much needed to understand how people make decisions.

So I hope that my post today has inspired you to give the book a chance. Many thanks for reading,



Akerlof, G.A., and Shiller, R. (2015). Phishing For Phools: The Economics of Deception and Manipulation. Princeton: Princeton University Press.

Kahneman, D. (1999). Preface. In: Kahneman, D., and Tversky, A., eds. (2000). Choices, Values and Frames. Cambridge: Cambridge University Press, pp. ix-xvii.

Alternative Thinking – What is Econophysics?

Recently I blogged about getting my hands on a copy of Debunking Economics by Steve Keen. At that point I did not really know what to expect, but it seemed to be a good read for challenging my conventional economic training at university. My undergraduate classes – both Macroeconomics and Microeconomics – have so far been dominated by the Neoclassical and New Keynesian schools of thought. An exception to this monopoly on the undergraduate economics curriculum is probably Behavioural Economics, especially Game Theory, which I really enjoyed last semester! However, it is normally the concepts of rational expectations, utility-maximising firms and individuals (constrained optimisation) and equilibrium that dominate the lectures. I am certain it is not just me: most undergraduates are tortured with abstract supply and demand analyses and comparative statics.

As Richard Thaler (2015, p.6) in his book Misbehaving: The Making of Behavioural Economics points out, the core premise of economic theory can simplistically be summarised as follows:

Optimization + Equilibrium = Economics

These are the very basics of economic theory and are almost never challenged in the undergraduate curriculum. To be honest, so far I have followed the standard Economics curriculum sheepishly, because you are hardly encouraged to question its core premises. However, my main takeaway from Keen’s book is not to take equilibrium and optimising behaviour for granted. These should not be core premises of economic theory because, in practice, they just do not hold. We cannot overlook this and work on the premise that our assumptions of optimisation and equilibrium do not matter, as suggested by Milton Friedman’s “paradoxical statement that ‘the more significant the theory, the more unrealistic the assumptions’” (Keen, 2011, p.159). This is because most of the assumptions we make in Economics are not negligibility assumptions but either domain assumptions or heuristic assumptions. In short, assumptions do matter, and therefore modelling economic systems on the flawed premises of optimisation and equilibrium (and a range of other unreasonable assumptions) must also be flawed.

Criticising conventional economic theory is one side of the coin; putting forward promising alternatives is the flipside. This is why Keen devotes the very last chapter of the 2011 edition to the main alternative schools of thought. In particular, he gives a brief overview of Austrian economics, Post-Keynesian economics, Marxian economics, Sraffian economics, Complexity theory and Econophysics, and Evolutionary economics. However, it should be pointed out that all of them have their own weaknesses, and I very much appreciate that Keen discusses both their strengths and weaknesses in the chapter.

According to Keen, one of the promising alternatives is Econophysics – the merger of Economics and Physics – due to its contribution to complexity in economics. The Econophysics approach is empirical, dynamic rather than static, and devoid of equilibrium conditions. This motivated me to devote today’s blog post to Econophysics, giving a short introduction to what Econophysics is about. I am also going to highlight the main areas of application at the moment as well as some interesting articles and books to get started with this multidisciplinary approach.

First, let’s look at what Physics and Economics have in common: they both make use of dense mathematics. Physicists even more so than Economists, because their background is naturally far more mathematical. And this is also where they depart, and where Physics can greatly enhance the current state of the Economics profession. Physicists have the tools to investigate complex systems. Econophysics recognises that statistical physics concepts, such as stochastic dynamics, short- and long-range correlations, self-similarity and scaling, can be applied to understand the global behaviour of economic systems (Mantegna and Stanley, 2000). For Economists this means that they can turn to empirical analysis methods without imposing a priori assumptions. Adopting the theoretical tools of Physics allows Economists to model systems with interacting subsystems, and this is exactly what we need in order to model the Macro-economy. Rather than building Macroeconomics up from Microeconomics, this merger of Economics and Physics allows Macroeconomists to model the Macro-economy as something that is more than the sum of its parts. It allows Macroeconomists to abandon representative-agent models of the economy, which in the past have failed to accurately describe the real economy anyway.

There are two approaches in Physics that Economics can greatly benefit from: complexity theory and chaos theory. While the former is the study of non-deterministic systems, the latter is the study of deterministic systems, which might seem a bit counterintuitive.

Chaotic systems are non-linear and dynamic. For example, two variables which influence each other give constant feedback. Chaotic systems are sensitive to initial conditions, meaning that even a small change in the initial conditions leads to a completely different outcome in the long run, the so-called Butterfly effect (Jacobs, 2006). Therefore, in chaotic systems uncertainty arises because we cannot determine the chaotic system’s initial conditions with perfect precision (Fisher, 2012).
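This sensitivity is easy to see with the logistic map, the textbook example of a deterministic chaotic system. The example is my own illustration, not taken from the sources cited above:

```python
# Sensitivity to initial conditions, illustrated with the logistic map
# x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime (r = 4).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map for a given starting value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a tiny perturbation of the initial condition

early_gap = abs(a[5] - b[5])                                # still indistinguishable
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # completely diverged
print(early_gap, late_gap)
```

After a few steps the two trajectories are still indistinguishable; by step 40 the initial difference of one ten-billionth has been amplified into completely different paths, even though every step is perfectly deterministic.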

In comparison, complex systems are characterised by emergent behaviour. They are made up of agents that interact with and adapt to one another. Complex systems can be robust to small shocks at one point in time but fragile at another. Because complex systems are non-deterministic, the outcome of the interaction of their agents is unpredictable. This gives rise to uncertainty, because even if we knew the initial conditions of the complex system we could not predict the future (Fisher, 2012).
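A toy illustration of this non-determinism (my own sketch, not a model from the literature): agents start from identical initial conditions and repeatedly imitate randomly chosen peers, yet the aggregate outcome differs from run to run:

```python
import random

def run(seed, n_agents=50, steps=2000):
    """Agents repeatedly meet at random and one imitates the other's state."""
    rng = random.Random(seed)
    # identical initial conditions in every run: half in state 0, half in state 1
    state = [0] * (n_agents // 2) + [1] * (n_agents // 2)
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        state[i] = state[j]  # agent i imitates agent j
    return sum(state) / n_agents  # share of agents ending up in state 1

# Same starting point, different random interaction orders, different outcomes:
outcomes = [run(seed) for seed in range(5)]
print(outcomes)
```

Even with perfect knowledge of the initial conditions, the final share of agents in each state depends on the unpredictable order of interactions, which is exactly the source of uncertainty described above.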

What are the main areas of application? In the past, Econophysics centred on financial markets due to the availability of high-frequency data, thanks to electronic trading and financial markets being active 24 hours a day around the world. Financial markets create the vast amount of data which is needed for modelling. One of the most striking applications is risk management, which greatly benefits from Econophysics as a multidisciplinary approach making use of financial mathematics, probability theory, physics and economics (Mantegna and Stanley, 2000). However, the discipline is now moving on to explain more general economic phenomena. Starting out as what Keen calls ‘Finaphysics’, the discipline is now becoming more of an ‘Econophysics’. One example is the Economic Complexity Index developed by Hidalgo and Hausmann, which shows “that countries tend to converge to the level of income dictated by the complexity of their productive structures” and which sees the emergence of complexity as a main factor for generating sustained growth and prosperity (Hidalgo and Hausmann, 2009, p. 10570). The Economic Complexity Index has proven to be more accurate in predicting income growth than the World Bank’s traditional governance measures. In particular, in the Atlas of Economic Complexity, Hausmann, Hidalgo et al. conclude that “the Economic Complexity Index captures significantly more growth-relevant information than the 6 World Governance Indicators” (2011, p.33).
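The machinery behind Hidalgo and Hausmann’s complexity measures, the so-called method of reflections, is simple enough to sketch: starting from a binary country-product export matrix, a country’s diversity and a product’s ubiquity are refined iteratively. The tiny matrix below is invented purely for illustration:

```python
# Method of reflections (Hidalgo and Hausmann, 2009) on a toy export matrix.
# M[c][p] = 1 if country c exports product p competitively.
M = [
    [1, 1, 1],  # diversified country exporting everything
    [1, 1, 0],
    [1, 0, 0],  # country exporting only the most ubiquitous product
]
n_c, n_p = len(M), len(M[0])

def reflections(iterations=4):
    k_c0 = [sum(row) for row in M]                                 # diversity k_{c,0}
    k_p0 = [sum(M[c][p] for c in range(n_c)) for p in range(n_p)]  # ubiquity k_{p,0}
    k_c, k_p = k_c0[:], k_p0[:]
    for _ in range(iterations):
        # k_{c,N} = (1/k_{c,0}) * sum_p M_cp * k_{p,N-1}, symmetrically for products
        k_c_next = [sum(M[c][p] * k_p[p] for p in range(n_p)) / k_c0[c]
                    for c in range(n_c)]
        k_p_next = [sum(M[c][p] * k_c[c] for c in range(n_c)) / k_p0[p]
                    for p in range(n_p)]
        k_c, k_p = k_c_next, k_p_next
    return k_c, k_p

k_c, k_p = reflections()
print(k_c)  # at even iterations, higher values indicate more complex economies
```

On this toy matrix the diversified country ends up with the highest even-iteration score, which is the intuition behind ranking economies by the complexity of their productive structures.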

Some interesting articles are listed on the website of Economics: The Open-Access, Open-Assessment E-Journal. For example, in one of the papers Paul Ormerod applied random matrix theory to the analysis of macro-economic time series data. He examined “the evolution of the convergence of the business cycle between capitalist economies from the late 19th century to 2006” (Ormerod, 2008, p.1). With the help of random matrix theory, Ormerod distinguished true information from noise and showed that there is now such a strong level of synchronisation of business cycles that it is possible to speak of an international business cycle.

In another paper, Chen, Chang and Wen (2014) examined the effect of social networks on macroeconomic stability. They made use of an agent-based, network-based DSGE model and showed that both the non-linear and combined effects of network characteristics and the shape of the degree distribution are significant in determining the effect on economic stability. In a further paper, Challet, Solomon and Yaari deployed a three-parameter equation to model how GDP evolves during recessions and recoveries, arguing that their equation is “the response function of the economy to isolated shocks” (2009, p.1), which can therefore help detect shocks and has predictive power.

The last interesting paper I want to point out was published only recently by Solferino and Solferino (2016). They applied the geometrical model of the Möbius strip to a Corporate Social Responsibility context to allow for the complex interactions that characterise social and economic relationships today. As discussed in the paragraph on complex and chaotic systems above, this paper makes deliberate use of complexity and non-linearity and acknowledges that feedback loops make systems interdependent and interactive with their environment, which, according to Solferino and Solferino, is also at the core of models of Corporate Social Responsibility.
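The noise-versus-information idea underlying Ormerod’s exercise can be sketched as follows: for purely random series, the eigenvalues of the correlation matrix fall (approximately) inside the Marchenko-Pastur band, so empirical eigenvalues outside it carry genuine structure. The simulated data and parameters here are my own toy setup, not the paper’s:

```python
import math
import numpy as np

T, N = 1000, 10  # observations per series, number of series
q = N / T
# Marchenko-Pastur support for the correlation matrix of unit-variance noise
lam_min = (1 - math.sqrt(q)) ** 2
lam_max = (1 + math.sqrt(q)) ** 2

rng = np.random.default_rng(0)
noise = rng.standard_normal((T, N))  # independent series: pure noise
eig_noise = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))

# Adding a common factor (think: an international business cycle) pushes the
# top eigenvalue far outside the noise band.
common = rng.standard_normal((T, 1))
signal = noise + common
eig_signal = np.linalg.eigvalsh(np.corrcoef(signal, rowvar=False))

print(lam_min, lam_max, eig_noise.max(), eig_signal.max())
```

For the pure-noise series, every eigenvalue stays near the theoretical band; once the series share a common driver, the largest eigenvalue jumps well above it, and that outlier is the “true information” that random matrix theory separates from noise.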
For people interested in a comprehensive introduction to the field beyond the Econophysics papers published in journals, Mantegna and Stanley’s book An Introduction to Econophysics: Correlations and Complexity in Finance might be worthwhile to read!

I hope you enjoyed today’s post. Thanks for reading!




Challet, D., Solomon, S., and Yaari, G. (2009). The Universal Shape of Economic Recession and Recovery after a Shock. Economics: The Open-Access, Open-Assessment E-Journal, 3(2009-36), 1-24.

Chen, S.H., Chang, C.L., and Wen, M.C. (2014). Social Networks and Macroeconomic Stability. Economics: The Open-Access, Open-Assessment E-Journal, 8(2014-16), 1-40.

Fisher, G. (2012, July 14). Chaos Versus Complexity. Retrieved from

Hausmann, R., Hidalgo, C.A., Bustos, S., Coscia, M., Chung, S., Jimenez, J., Simoes, A., and Yildirim, M.A. (2011). The Atlas of Economic Complexity: Mapping Paths to Prosperity. Retrieved from

Hidalgo, C.A., and Hausmann, R. (2009). The building blocks of economic complexity. PNAS, 106(26), 10570-10575.

Jacobs, J. (2006, May 7). Chaos theory, game theory and complexity theory. Retrieved from

Keen, S. (2011). Debunking Economics: The Naked Emperor dethroned? London: Zed Books.

Mantegna, R.N., and Stanley, H.E. (2000). An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge, UK: Cambridge University Press.

Ormerod, P. (2008). Random Matrix Theory and Macro-Economic Time-Series: An Illustration Using the Evolution of Business Cycle Synchronisation, 1886–2006. Economics: The Open-Access, Open-Assessment E-Journal, 2(2008-26), 1-10.

Solferino, N., and Solferino, V. (2016). The Corporate Social Responsibility Is just a Twist in a Möbius Strip. Economics Discussion Papers, No 2016-12, Kiel Institute for the World Economy.

Thaler, R. (2015). Misbehaving: The Making of Behavioural Economics. London, UK: Penguin Books.


Daily Reading: Life Among The Econ

Having finished Thaler and Sunstein’s bestseller Nudge recently, I somehow ended up with the second edition of Debunking Economics by Steve Keen (2011). It is more or less an insiders’ tip for its reckoning with economic theory, going through its flaws at both the micro- and macroeconomic level. To be honest, I had not heard about Keen until recently. But then my Macroeconomics lecturer turned to a discourse on the Global Financial Crisis of 2008. Focusing on Fisher’s Debt Deflation Theory and Minsky’s Financial Instability Hypothesis, Steve Keen’s work came into play, and I actually ended up catching up on that week’s lecture material with some of Steve Keen’s numerous YouTube videos.

Being quite impressed by Keen’s online lectures, I decided to broaden my reading list with his post-Keynesian book Debunking Economics. First of all, it is heavy (literally) with more than 450 pages of pure Economics. Second, this is clearly not an easy-going book. I admire his style of writing, but it also makes the book less accessible. Being an Economics undergraduate I can, for example, mostly follow his argumentation in chapter 3 ‘The Calculus of Hedonism’, but I am certain it would be hard to sell to a ‘foreigner’, and I am sure I will be mentally challenged over the course of the book.

While still ploughing through the first part, the very opening of Keen’s fourth chapter “Size Does Matter” caught my interest. It refers to Leijonhufvud’s rather sarcastic 1973 paper Life Among the Econ. Keen refers to this paper because it touches upon Economists’ obsession with (strictly downward-sloping) demand and (strictly upward-sloping) supply analyses to find the one and only equilibrium in an economy or a market. The overarching idea of observing academic Economists through the lens of an anthropologist initially sounded absurd to me. But what followed hit the nail right on the head, and so I decided to have a go at Leijonhufvud’s paper.

First of all, Leijonhufvud’s ‘Life Among the Econ’ (1973) is sarcastic through and through. There is the hypothetical Econ tribe, whose social structure has the two dimensions of caste and status. While caste is the basic division, status follows at the next level, with a network of status relationships for every Econ. What is more, Econs call their castes ‘fields’. Several fields are mentioned by Leijonhufvud (1973): besides the Micro and the Macro, there are also the Math-Econ and the Devlops, but there is no clearly set hierarchy, despite the general observation that the Math-Econ are the priests above all. The Econs work in distinct social units, i.e. the villages or ‘depts’, and almost all castes come together and interact in these depts. The status of an Econ derives from his ability to make ‘modls’ of his field, while the trouble that these modls overall do not seem to have any practical use is widely ignored. In particular, the most basic modls of the Micro and the Macro are called the Totems of the two castes. While the Totem of the Micro is the S-D Model, the Macro’s Totem is the IS-LM Model. Leijonhufvud (1973) points out that both castes adore their Totems to such an extent that intermarriages seem impossible. At the same time, they are collectively firm believers in their Totems, while there is a decreasing number of implementarists who question both castes’ modls. Overall, the future of the Econ is bleak, to put it in Leijonhufvud’s words. The Econ tribe suffers from poverty and high population growth, and there is no reason to hope that the disintegration of Econ culture is about to reverse. The political organisation of the Econ is weakening, while rural-urban migration is increasing and Econ turnover from dept to dept is on the rise, even for the seniors among the Econs. What is more, Leijonhufvud (1973) predicts “alienation, disorientation, and a general loss of spiritual values” (p. 336), which could mark the end of the tribe in the future.

That is only a really brief overview of Leijonhufvud’s paper, but there is much more to it, and I recommend anyone with a good sense of humour and sarcasm to have a go at it! It really made my day!


Keen, S. (2011). Debunking Economics: The Naked Emperor Dethroned? New York, NY: Zed Books.

Leijonhufvud, A. (1973). Life Among The Econ. Western Economic Journal, 11(3), 327-337. Retrieved from

The Revival of the Long Run Phillips Curve (Part 2)

Today I want to take a closer look at several of the high-income OECD countries, following yesterday’s post on the short run and potentially long run trade-off between unemployment and price stability. For this I obtained country statistics on the rate of unemployment as a percentage of the civilian labour force, the annual percentage change in consumer prices, and the NAIRU (the non-accelerating inflation rate of unemployment) from the OECD’s Economic Outlook No 91 of June 2012. All measures are available online from OECD Statistics. In particular, I am looking for the standard short run trade-off between unemployment and inflation (the Phillips curve) as well as a long run trade-off between the NAIRU and inflation below a certain threshold level close to absolute price stability or deflation.
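The informal check run for each country below amounts to asking whether inflation and unemployment move in opposite directions. As a minimal sketch, here it is as a correlation computation on invented toy figures (not the OECD Economic Outlook data used in this post):

```python
# Toy annual series, invented for illustration only: inflation (percent)
# and the unemployment rate (percent of the civilian labour force).
inflation    = [5.1, 4.2, 3.0, 2.1, 1.0, 0.4, 1.5, 2.5]
unemployment = [5.0, 5.6, 6.3, 7.0, 7.9, 8.4, 7.2, 6.1]

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A clearly negative correlation is the short-run Phillips-curve pattern.
print(round(correlation(inflation, unemployment), 2))
```

On real data the picture is of course noisier, which is why each country paragraph below hedges with “seems to hold on average” over particular sub-periods.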

OECD inflation unemployment trade off 1

OECD estimates for the US NAIRU are available from 1970 onwards. The NAIRU is relatively constant over the whole period and ranges from 5.4 to 6.4 percent. Overall, the US has recorded positive levels of inflation (excluding 2009), often above the 2 percent level except for 1986, 1998, 2002 and 2013. The short run trade-off between unemployment and inflation seems to hold on average after 1986, except for a period in the 1990s. There might also be evidence for a long run trade-off, as the US inflation rate dropped to -0.4 percent in 2009 with a spike in unemployment and a small increase in the NAIRU of 0.3 percent. Overall, though, the data remain inconclusive because the US has mostly experienced inflation large enough to ensure wage flexibility.

OECD inflation unemployment trade off 2

Data for Canada is available from 1970 onwards. The country experienced almost a decade of disinflation after the spike of 12.5 percent in 1981. Lately, inflation seems to have fluctuated around 2 percent, and prices were almost perfectly stable in the years 1994 (0.2) and 2009 (0.3). However, this was accompanied by a small rise in the NAIRU of 0.5 percent and 0.3 percent, respectively. So there might be a long run trade-off, but in this case at a very low threshold close to zero. The short run trade-off between unemployment and inflation seems to hold on average after 1989.

OECD inflation unemployment trade off 3

NAIRU estimates for France are available from 1970 onwards. After the two inflationary spikes of 1974 and 1980-81, inflation has come down to levels of around 2 percent recently. Overall, the short run Phillips curve seems to hold after the second inflationary spike. What is more, the NAIRU has more than doubled in line with the unemployment rate since the 1970s, which hints at both a short run and a long run trade-off. There seems to be an increasingly lower inflation threshold of circa 1 or 2 percent at which the NAIRU responds. For example, I would interpret the periods from 1992 to 2000 and 2009 to 2013 as such a long run trade-off, where France achieved price stability at the cost of a rising NAIRU.

OECD inflation unemployment trade off 4

NAIRU estimates for Ireland are available after 1990 and inflation rates from 1977 onwards. Similar to France, Ireland experienced inflationary spikes around the same time but of a greater magnitude. Since then inflation has dropped significantly and exceeded 5 percent only in 2000. Inflation reached its all-time low of -4.5 percent during the GFC in 2009 and has recently recovered to levels close to price stability. The period of disinflation in the 1980s and 1990s was accompanied by high unemployment, exceeding 15 percent for several years. From 2000 to 2007 inflation and unemployment were moderate, with a large fall in the NAIRU to 7.5 percent. However, deflation triggered a sharp rise in both the unemployment rate and the NAIRU in 2009, providing evidence for both an adverse short run and long run impact of deflation in Ireland. If there is a threshold, it would probably be around the zero or 1 percent level.

OECD inflation unemployment trade off 5

NAIRU estimates for Greece are available from 1995 onwards. After a long period of stagflation and a breakdown of the Phillips curve until circa 1990, inflation came down to levels below 4 percent while employment rose by 3.9 percent from 1990 to 1999. In the 2000s, inflation remained relatively stable and unemployment came down again to 7.3 percent in 2008. Meanwhile, the NAIRU remained constant. However, after 2008 the NAIRU rose significantly, from 9.9 to 12.3 percent in 2013. During the same period, inflation dropped to all-time lows of 1.2 percent in 2009 and -0.9 percent in 2013, and unemployment soared to more than 27 percent. Hence, Greece provides some evidence for an inflation threshold of circa 2 percent below which both short run and long run unemployment are affected.

OECD inflation unemployment trade off 6

Data on Italy is available from 1970 onwards. After the inflationary spikes of 1974-77 and 1981, inflation has stayed below 4 percent since 1996. However, the short run Phillips curve does not seem to hold well on average for the period before 2008. Italy’s NAIRU steadily increased until 1998, with even higher actual unemployment rates. Unemployment fell back to 1970s levels in 2007 (6.1 percent) but increased back to 1980s/90s levels in the aftermath of the GFC. Overall, Italy does not provide compelling evidence for a long run trade-off between unemployment and inflation based on OECD NAIRU estimates. This should be treated with caution, though, because the sharp rise in the unemployment rate after 2008 might yet translate into a rise in the NAIRU, as in the case of Greece.

OECD inflation unemployment trade off 7

Data on Portugal’s NAIRU is available from 1980 onwards. Portugal was hit by three massive inflationary spikes, in 1974 (15.3 percent), 1977 (31 percent), and 1984 (28.4 percent). Thereafter, inflation fell rapidly and has been below 4 percent on average since 1996. The all-time low came in 2008 with deflation of 0.8 percent, and in 2013 inflation again dropped to almost zero (almost perfect price stability). The short run Phillips curve seems to hold on average after 1992. The NAIRU fell by 1.2 percentage points over the period from 1980 to 2000. Since 2001, however, it has risen by 5 percentage points to 11 percent in 2013, and unemployment has soared even more, sitting at over 30 percent in 2013. During the same period, inflation was relatively low. However, the period from 2001 to 2008 is comparable to the late 1990s, which did not cause such a massive rise in the NAIRU. Overall, there might be an inflation threshold of around 2 percent below which both short run and long run unemployment soar.

OECD inflation unemployment trade off 8

The last country I want to look at today completes the Southern European panel of Greece, Italy, and Portugal. Spain’s NAIRU is available from 1978 onwards. Starting at 4.8 percent, it rose to 15.8 percent in 1995. After a short recovery until 2005, the NAIRU is now back at an even higher level of 16.5 percent. Similar to other countries, inflation spiked in 1977, at 24.5 percent. After a prolonged period of disinflation, it is now down to levels below 4 percent. In 2009, Spain actually recorded deflation for the first time, and in 2013 inflation was again down to only 1.4 percent. Overall, the Phillips curve does not seem to hold very well for the period until 2007. Since 2008, however, both the NAIRU and the unemployment rate have been rising sharply, with inflation below 2 percent from 2008-10 and in 2013, providing evidence for both a short run and a long run trade-off below a certain level. This finding is comparable to Italy. What is more, the inflation trend line looks a little like the NAIRU mirrored along a slowly decreasing line through the point where the NAIRU and the inflation trend intersect, which would be another hint of a long run trade-off.

In sum, Canada, France, Greece, Portugal and Spain might actually provide evidence for a long run trade-off between unemployment and inflation below a certain inflation threshold, due to impaired wage flexibility as discussed yesterday. A main limitation is that the data is relatively noisy; the US data, for example, is more or less inconclusive because inflation was on average high enough to allow for wage flexibility. Italy, however, does not fit into the model.
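One way to make this threshold claim testable is to split the observations at a candidate threshold and compare the inflation/NAIRU-change correlation in each regime. Below is a minimal Python sketch; the function names are my own, and the data points are made-up illustrative numbers, not the OECD series.

```python
def corr(xs, ys):
    """Pearson correlation coefficient in plain Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def threshold_tradeoff(inflation, d_nairu, threshold=2.0):
    """Split observations at an inflation threshold and report the
    correlation between inflation and the change in the NAIRU in each
    regime. A long run trade-off below the threshold would show up as
    a clearly negative correlation in the low-inflation regime."""
    low = [(p, d) for p, d in zip(inflation, d_nairu) if p < threshold]
    high = [(p, d) for p, d in zip(inflation, d_nairu) if p >= threshold]
    return {"below": corr(*zip(*low)), "above": corr(*zip(*high))}

# Illustrative, made-up observations (NOT the OECD data): below
# 2 percent inflation the NAIRU rises as inflation falls; above it
# there is no clear pattern.
infl = [0.2, 0.5, 1.0, 1.5, 1.8, 2.5, 3.0, 4.0, 5.0, 6.0]
dnairu = [0.6, 0.5, 0.3, 0.2, 0.1, 0.0, 0.1, -0.1, 0.0, 0.1]
print(threshold_tradeoff(infl, dnairu))
```

On real data one would of course also have to worry about noise, lags, and the small number of low-inflation observations per country.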

Thanks for reading!


OECD (2012), “OECD Economic Outlook No. 91 (Edition 2012/1)”, OECD Economic Outlook: Statistics and Projections (database).
(Accessed on 14 April 2016)

OECD (2016), “Labour Force Statistics: Summary tables”, OECD Employment and Labour Market Statistics (database).
(Accessed on 14 April 2016)

OECD (2016), “Prices: Consumer prices”, Main Economic Indicators (database).
(Accessed on 14 April 2016)

The Revival of the Long Run Phillips Curve

I recently read Paul Krugman’s essay A Good Word for Inflation (1998), in which he criticizes monetary policy debates for becoming increasingly black and white: either growth or price stability dominates all other goals. What is even more worrisome, it is often sold as a trade-off: bring back the growth rates of earlier generations or pin down stable prices, but surely you cannot have both! And this is exactly where monetary policy becomes too simplistic, according to Krugman.

The main argument in the essay is that the costs of low inflation are elusive and that they really increase nonlinearly with higher inflation rates. The gains from total price stability are therefore largely overestimated. In short, zero inflation comes at a cost (the pains of disinflation, namely high unemployment and excess capacity) far exceeding its benefits.

What captured my attention, though, is the paragraph on the non-accelerating inflation rate of unemployment (NAIRU). The widely accepted theory is that the Phillips curve, describing the inflation-unemployment trade-off, only holds in the short run. In the long run there is no trade-off between inflation and unemployment, and an economy reverts back to its NAIRU. However, Krugman (1998) points out that “there is some evidence that a push to zero inflation may lead not just to a temporary sacrifice of output but a permanently higher rate of unemployment” (p.118).

One potential cause is wage bargaining, which might break down at low levels of inflation. At positive inflation rates, real wages can stagnate or fall easily without being labelled as such; nominal wage growth must only be lower than the rate of inflation. At zero inflation, however, nominal wages would have to be cut outright for real wages to fall, so real wage cuts would need to be labelled as such rather than remaining implicit in wage bargaining. To be clear, the effects of an equal-sized shortfall in nominal wage growth at positive inflation and a nominal wage cut at zero inflation would be the same, but this “homo economicus” thinking is not really a good assumption for human decision making (Richard Thaler would surely be an expert on this). So what Krugman argues in this paragraph is that there might be a long-run trade-off between very low inflation rates or even deflation and unemployment due to this wage illusion. Krugman concludes the paragraph with the case of Canada as evidence: at the time of his writing, it targeted almost perfect price stability and suffered high unemployment rates as a consequence of its impaired real wage flexibility.
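The wage illusion boils down to simple arithmetic. A minimal sketch (the function and numbers are mine, not Krugman’s) of why the two cases are economically identical but feel very different to workers:

```python
def real_wage_change(nominal_growth, inflation):
    """Approximate real wage growth as nominal wage growth
    minus the inflation rate (both in percent)."""
    return nominal_growth - inflation

# At 2 percent inflation, a 0 percent nominal 'raise' is a quiet
# 2 percent real wage cut that never has to be labelled as one ...
print(real_wage_change(0.0, 2.0))   # -2.0

# ... while at zero inflation the same real cut requires an explicit
# 2 percent nominal wage cut, which workers resist.
print(real_wage_change(-2.0, 0.0))  # -2.0
```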

So what I want to do today is to try to capture this idea in a diagram. The diagram is a modified version of the short run Phillips curve and the NAIRU, based on the theory of rational expectations and the concept that an economy has a natural rate of unemployment, denoted un, in the long run. Normally, the NAIRU spans the complete unemployment rate range from zero onwards in textbook diagrams. But what if we assume that the NAIRU only starts after a certain threshold level? Beyond this point, nominal wages are flexible enough to make the labour market and wage bargaining independent of the inflation rate. This means that beyond the threshold the long run Phillips curve (LRPC) would be perfectly elastic (or perfectly inelastic, depending on how you draw the diagram). So beyond this threshold level nothing changes; we have our short run Phillips curve (SRPC) and our long run NAIRU.

But what happens if inflation falls below our arbitrarily set threshold level? There might now actually be a long run trade-off between unemployment and inflation due to the difficulties in wage bargaining discussed before. Let’s assume that the labour market is no longer flexible enough, as workers become reluctant to accept real wage declines that are labelled as such. Hence, at the threshold there is a ‘turning point’ and the NAIRU bends into an upward-sloping long run Phillips curve (LRPC). The concept of the natural rate of unemployment then breaks down, and any point on the blue LRPC has the potential to become a long run steady state with low or zero inflation at the cost of high unemployment (this is what Krugman warns against, I think). What is more, targeting levels of inflation close to zero or to a positive threshold may be devastating if external factors push inflation any lower. For example, in case of a recession caused by a fall in aggregate demand, an economy likely sees its inflation rate falling (during the GFC some countries even experienced deflation), but this could now prove harmful if the drop in the inflation rate moves the country from the red part onto the blue part of the LRPC-NAIRU curve.
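The threshold idea can be written down as a tiny piecewise function. This is only a sketch of the diagram in code; un, the threshold, and the slope are made-up illustrative numbers, not estimates.

```python
def long_run_unemployment(inflation, u_n=5.0, threshold=2.0, slope=1.5):
    """Hypothetical long run Phillips curve with a threshold: at or
    above the threshold the LRPC is flat at the natural rate u_n
    (the NAIRU); below it, long run unemployment rises as inflation
    falls, because wage bargaining loses the grease of positive
    inflation. All parameters are illustrative assumptions."""
    if inflation >= threshold:
        return u_n
    return u_n + slope * (threshold - inflation)

# Beyond the threshold nothing changes ...
print(long_run_unemployment(4.0))   # 5.0
# ... but below it each point can become a low-inflation,
# high-unemployment steady state:
print(long_run_unemployment(0.0))   # 8.0
print(long_run_unemployment(-1.0))  # 9.5
```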

What does this leave us with? If a threshold level exists, monetary policy should revisit its goals and aim for inflation not too close to this level. It furthermore implies that monitoring unemployment as a goal equally important as price stability is crucial, to avoid curing a country’s inflation at the expense of soaring long run unemployment. One should monitor the economy closely in order to act immediately once the turning point is crossed, because beyond the threshold there would not be any self-correcting long-term mechanism to revert back to un (making a recession even more severe and dampening the recovery from it, not to mention the loss in human capital from long run unemployment).

I have attached both diagrams I produced below. I like the representation with inflation on the x-axis more in this case, because it is easy to extend the range to deflation on the left while showing that the NAIRU is perfectly elastic (flat red line) beyond the threshold. This reminds me a bit of the liquidity trap concept and makes it easier for me to grasp. However, I also drew up the standard representation with inflation on the y-axis in the second diagram.

Philips Curve 1
Long Run Philips Curve Version 1
Philips Curve 2
Long Run Philips Curve Version 2

I hope you like today’s post and the theoretical challenge to the NAIRU concept discussed in Krugman’s essay. The idea of the threshold is only hypothetical so far, but I want to have a look at what the data suggests. In particular, does it provide evidence of a solid negative correlation between unemployment rates and inflation below a certain inflation cut-off? Also, does the NAIRU, as the long-term indicator measured by the OECD, change once a country slips into deflation or close to zero inflation? Are there cross-country differences in thresholds, and which countries fit into the model and which do not? Do they have other policies in place to offset the reluctance of workers to accept real wage declines at zero inflation rates? All of this will be a challenging task for my next blog post if I don’t fall into despair over it…

Thanks for reading!


Krugman, P. (1998). The Accidental Theorist: And Other Dispatches from the Dismal Science. New York, N.Y.: W.W. Norton & Company.

The Hot-dog-and-bun Economy

Today I want to talk about Paul Krugman’s thought experiment of the hot-dog-and-bun economy in The Accidental Theorist (1998, pp.18-23). When I first read his essay, I didn’t realise how elaborate it actually is. That’s why I went back to it and plotted his example as today’s exercise.

Hot dog and bun economy

Krugman’s hypothetical economy produces only hot dogs and buns, with a labour force of 120 million workers. Each worker is able to produce either 0.5 buns or 0.5 hot dogs per day, and Krugman assumes that hot dogs and buns are consumed together. So it is desirable to have 60 million workers in the hot dog industry and 60 million workers in the bun industry. The diagram above depicts the economy’s initial Production Possibilities Frontier (PPF) and its current production levels, which can easily be calculated:

60 million workers * 0.5 buns per worker per day = 30 million buns per day
60 million workers * 0.5 hot dogs per worker per day = 30 million hot dogs per day

Krugman then introduces a productivity increase only in the hot dog industry: now each hot dog worker can produce exactly 1 hot dog per day. This pivots the PPF outwards to PPF’. Recall that we still have the constraint of producing as many buns as hot dogs because we sell them together, so the new production level must be proportionally bigger than the initial level. (Imagine a line through the origin and the initial production level. Continue along it until it crosses the new PPF’. You will end up at (40, 40) without any maths!) The main point here is that this new production level can only be reached through the reallocation of labour from the hot dog industry to the bun industry. In particular, we need the following distribution:

80 million workers * 0.5 buns per worker per day = 40 million buns per day
40 million workers * 1 hot dog per worker per day = 40 million hot dogs per day

What happens to employment? Employment in the hot dog industry falls by one third, whereas employment in the bun industry increases by one third. So the changes within each industry cancel out for the economy as a whole: total employment remains constant, and there is only a reallocation of labour. In comparison, total output increases in both industries by exactly one third. As far as I understand, the output increase in the hot dog industry is driven by the productivity increase, while the output increase in the bun industry is driven by the labour influx.
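The labour reallocation above can be checked with a few lines of code. This is a small sketch of Krugman’s example; the function names and the general balanced-allocation formula are my own.

```python
def output(bun_workers, hotdog_workers, bun_prod=0.5, hotdog_prod=0.5):
    """Daily output in millions, given workers (millions) and
    per-worker productivity in each industry."""
    return bun_workers * bun_prod, hotdog_workers * hotdog_prod

# Initial economy: 120 million workers split evenly, 0.5 units each.
print(output(60, 60))            # (30.0, 30.0)

def balanced_allocation(labour_force, bun_prod, hotdog_prod):
    """Allocate labour so that bun output equals hot dog output:
    bun_workers * bun_prod == hotdog_workers * hotdog_prod."""
    bun_workers = labour_force * hotdog_prod / (bun_prod + hotdog_prod)
    return bun_workers, labour_force - bun_workers

# Hot dog productivity doubles to 1 per day; keep output balanced
# by moving workers into the bun industry.
bw, hw = balanced_allocation(120, 0.5, 1.0)
print(bw, hw)                    # 80.0 40.0
print(output(bw, hw, 0.5, 1.0))  # (40.0, 40.0)
```

Total employment stays at 120 million throughout; only the split between the two industries changes, which is exactly the point of the thought experiment.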

To sum up, let’s look at the context of the example. Krugman published this thought experiment in reaction to William Greider’s book One World, Ready or Not: The Manic Logic of Global Capitalism, in which Greider links productivity growth to job losses. However, Greider is misled by the fallacy of composition, because “the logic of the economy as a whole is not the same as the logic of a single market” (Krugman, 1998, p.22). Krugman nicely uncovers Greider’s faulty line of argument in this thought experiment by showing that productivity growth in one industry (hot dogs) might well reduce employment in that one sector, but total employment in the economy will not be reduced; other sectors will soak up the freed-up labour. Think about this for a minute: what Krugman argues here is that one cannot draw a conclusion about total employment in the economy by looking at just one industry.

In the second part of his essay, Krugman applies his hypothetical example to the manufacturing and services industries in the US from 1970 to 1997. He argues that, in a similar fashion, manufacturing output doubled, driven by productivity growth, while employment declined somewhat. Services output also doubled, but driven by employment growth with stagnant productivity. Krugman shows that it would be plainly wrong to argue that productivity growth in manufacturing has caused job losses in the US economy as a whole. What is more, it actually created jobs in the services sector.

Services and manufacturing industry

The final argument in the essay concerns the link between an increase in production and an increase in income and consumption. As an economy’s total production expands, total income will go up. In normal times, people will then also consume more (the propensity to consume doesn’t change). However, Krugman admits that this natural relationship between income and consumption may break down during a recession. As far as I understand, this would be caused by cash hoarding in the Keynesian framework of money demand. Keynes argues that the demand for holding money comes from:

  • the transactions motive,
  • the precautionary motive,
  • and the speculative motive.

The first and second motives (M1) depend on a person’s income Y, whereas the third (M2) depends on the interest rate r:

Md = M1 + M2 = L1(Y) + L2(r) = L(Y, r)
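To make the money demand function concrete, here is a toy parameterisation; the functional forms and constants are illustrative assumptions of mine, not Keynes’s.

```python
def money_demand(Y, r, k=0.5, a=2.0, b=10.0):
    """Keynesian money demand sketch: transactions and precautionary
    demand L1 proportional to income Y, speculative demand L2 falling
    in the interest rate r. The constants k, a, b are hypothetical,
    chosen only to make the shape visible."""
    L1 = k * Y        # transactions + precautionary motives
    L2 = a + b / r    # speculative motive, decreasing in r
    return L1 + L2

# Money demand rises with income and falls with the interest rate:
print(money_demand(100, 5))  # 54.0
print(money_demand(100, 2))  # 57.0
```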

In a recession, people allocate their income somewhat differently. They want to hold more cash for precautionary motives, but in order to hoard extra cash they cut down on consumption and investment. This causes a shortfall in aggregate demand, and as aggregate demand falls, spare capacity may become an issue. Krugman’s main point here is that an increase in the money supply will be the cure, without causing inflation. Inflation will not be triggered because, due to excess capacity, the money supply will not grow faster than real output (Pettinger, 2011). This can be summarised by the diagram below:

Money market

Printing money will ease the liquidity contraction caused by cash hoarding and serve as a stimulus for consumption and investment, bringing the economy back onto its PPF. Hence, recessions come from a shortfall in aggregate demand, not from excess capacity caused by productivity growth. In short: productivity growth is desirable and will not cause unemployment.

Thanks for reading!


Greider, W. (1998). One World Ready or Not: The Manic Logic of Global Capitalism. New York, N.Y.: Touchstone.

Krugman, P. (1998). The Accidental Theorist: And Other Dispatches from the Dismal Science. New York, N.Y.: W.W. Norton & Company.

Pettinger, T. (2011). The link between Money Supply and Inflation. [online] Available at: [Accessed 04/04/2016].