The Hypothetical World of Econs

One of my readings for my class ‘Macroeconomics in the Global Environment’ is George A. Akerlof’s The Missing Motivation in Macroeconomics (2007). It is assigned for the lecture on business cycles, but it is really intended to give us an idea of the state of the macroeconomics profession today. The reading also serves as a justification for why the class is dominated by New Keynesian rather than New Classical thinking.

What I want to look at today are the consequences of New Classical thinking for the field of macroeconomics. The post is inspired by Akerlof’s paper, which gives an overview of the five neutrality results that derive from New Classical thinking. First I am going to define the New Classical school of thought. Thereafter I am going to look at the implications of this view for the macro-economy before discussing the evidence for and against the five neutrality results in today’s economy.

New Classical Macroeconomics evolved in the 1970s and 1980s. It is the revival of the belief that shifts in the aggregate demand curve only change the aggregate price level, not total output. Krugman and Wells (2009) point out that the return to the Classical view was triggered by two new concepts, namely (1) rational expectations theory and (2) real business cycle theory. The concept of rational expectations, first formulated by John Muth in 1961, came into play in the 1970s. Rational expectations theory argues that individuals and firms are utility maximisers, meaning that economic actors always make optimal decisions and take into account all available information. It rests on the notion of rationality and the assumption that people are forward-looking creatures, striving for optimal decisions. Richard Thaler and Cass Sunstein like to call these hypothetical individuals ‘Econs’, a name that stems from the idea of homo economicus. They describe these individuals as creatures that can “think like Albert Einstein, store as much memory as IBM’s Big Blue, and exercise the willpower of Mahatma Gandhi” (2009, p.6). Akerlof explains the revival of the Classical view with the belief that macroeconomic relationships should be built on microeconomic fundamentals: in order to develop proper macroeconomic theory, one has to take utility-maximising individuals and profit-maximising firms and create a truly rational economic system.

Having defined what New Classical Macroeconomics is (in a rather crude manner) and how it views individuals as Econs and firms as profit-maximisers, let’s take a look at the implications for such an economy. Akerlof points out that there are five separate neutrality results following from the New Classical school of thought. It should be noted, though, that these neutralities are also embraced by many New Keynesians, who add a range of frictions (credit constraints, market imperfections, information failures, tax distortions, staggered contracts, uncertainty, menu costs or bounded rationality). The five neutrality results are:

  1. Independence of consumption from current income
  2. Irrelevance of current profits to investment spending
  3. Long-run independence of inflation and unemployment
  4. Inability of monetary policy to stabilise output
  5. Irrelevance of taxes and budget deficits to consumption (Akerlof, 2007)

The first neutrality result is the Life-Cycle Permanent Income Hypothesis, i.e. the concept that consumption depends on wealth and not on current income. Wealth is an individual’s permanent income, i.e. current income plus the present value of future income (Akerlof, 2007). In the world of Econs there is no correlation between consumption and current income, because individuals allocate their expenditures based on the present value of all their lifetime earnings. This also implies that these individuals engage in consumption smoothing and save properly for retirement. Econs save enough of their current income for later, and they do not increase consumption after a temporary pay rise or cut consumption after a temporary pay cut.
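Consumption smoothing is easy to illustrate with a little arithmetic. The sketch below computes the constant consumption level an Econ would pick: the annuity value of the present value of lifetime earnings. The three-period income stream and the interest rate are illustrative assumptions of mine, not figures from Akerlof (2007).

```python
# A minimal sketch of consumption smoothing under the permanent income
# hypothesis. The income stream and interest rate are illustrative
# assumptions, not figures from Akerlof (2007).

def present_value(incomes, r):
    """Discount a stream of incomes (received in periods 0, 1, 2, ...)
    back to today."""
    return sum(y / (1 + r) ** t for t, y in enumerate(incomes))

def smoothed_consumption(incomes, r):
    """The constant consumption level whose present value equals the
    present value of lifetime earnings (an annuity value)."""
    n = len(incomes)
    annuity_factor = sum(1 / (1 + r) ** t for t in range(n))
    return present_value(incomes, r) / annuity_factor

# An Econ earning 30, 50 and 10 consumes the same amount every period,
# regardless of that period's income.
incomes = [30, 50, 10]
print(round(smoothed_consumption(incomes, r=0.05), 2))
```

Note that the Econ consumes slightly more than average income in the early high-earning periods is not the case here; consumption simply tracks lifetime wealth, not the current pay cheque.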

The second neutrality result is similar to the first one but in the context of profit-maximising firms. The Modigliani-Miller Theorem states that a firm’s investment strategy does not depend on its current financial position (Akerlof, 2007). This is because a profit-maximising firm will only make profitable investments and therefore Modigliani and Miller (1958) argue that a firm’s liquidity position will not have any effect on current investment.

The third neutrality result is the Natural Rate Theory, a theory which is embraced by the majority of economists today. It is based on the notion that there is an unobserved non-accelerating inflation rate of unemployment (NAIRU) in the economy. The Natural Rate Theory evolved as a response to the breakdown of the perceived trade-off between unemployment and the inflation rate, also known as the Phillips curve. In particular, the New Classical school of thought showed that this trade-off is at most a short-run phenomenon. In the long run, there is a natural level of unemployment which occurs when the economy is at its long-run equilibrium, i.e. at its potential output level. In the long run, unemployment trends back to its natural rate no matter what the inflation rate is.

The fourth neutrality result follows from rational expectations theory, which renders monetary policy ineffective for taming the business cycle. This is because wage and price setters respond systematically to any change in the money supply (Akerlof, 2007). Robert Lucas contributed a great deal to this neutrality result through what is commonly known as the Lucas critique. Lucas argued that:

“Given that the structure of an econometric model consists of optimal decision rules of economic agents, and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.” (Lucas, 1976, p. 41)

In sum, wage and price setters will adjust their expectations in anticipation of monetary policy and will adjust wages and prices accordingly, offsetting the effects of an increase or reduction in the money supply.

The fifth neutrality result is the concept of Ricardian Equivalence, and Akerlof (2007) points out that this is chronologically the last neutrality result embraced by modern economists. According to this concept, lump-sum inter-generational transfers do not impact current consumption. Akerlof (2007) explains the concept with an example. Imagine that there are only two people, a parent and a child, and only two periods, period one and period two. Furthermore, the parent derives utility not only from her own consumption in period one but also from her child’s consumption in period two. Then it can be shown that any inter-generational transfer leaves current consumption unchanged. In essence, this is because the present value of the parent’s and child’s consumption is limited by the present value of the whole family’s earnings plus the family’s initial wealth. Lump-sum inter-generational transfers, such as social security payments, do not change the family’s budget constraint. The transfer merely redistributes earnings from one generation to another, leaving the aggregate pie unchanged, and the parent will take into account that, in order to finance her social security payments, her child will be taxed by the government later.
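Akerlof’s two-period example can be written down as a small budget-constraint calculation. The incomes, interest rate and transfer below are hypothetical numbers of my own choosing; the point is only that the transfer drops out of the family’s lifetime wealth.

```python
# A toy version of the two-period Ricardian-equivalence example. A
# lump-sum transfer to the parent in period one, financed by a tax on
# the child in period two, leaves the family's lifetime budget
# constraint (and hence consumption) unchanged. All numbers are
# hypothetical.

def family_wealth(parent_income, child_income, transfer, r):
    """Present value of family resources when the parent receives a
    lump-sum `transfer` in period one and the child repays it, with
    interest, through a tax in period two."""
    child_tax = transfer * (1 + r)  # the government balances its books
    return (parent_income + transfer) + (child_income - child_tax) / (1 + r)

r = 0.05
base = family_wealth(100, 80, transfer=0, r=r)
with_transfer = family_wealth(100, 80, transfer=20, r=r)
print(round(base, 2), round(with_transfer, 2))  # identical: the transfer nets out
```

The design choice here mirrors the logic in the text: the child’s tax is exactly the transfer grossed up by interest, so its present value cancels the transfer to the parent.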

Having looked at all five neutrality results in the world of Econs in detail, let’s take a short look at how plausible they are. Do individuals not alter their consumption when their current income changes? Do firms not change their investment strategy when their cash flows, and therefore their liquidity position, change? First, there is evidence for a positive relationship between current income and current consumption in today’s economy. Second, there is clear evidence that managers maximise their own interests instead of the interests of their shareholders and that they often engage in so-called empire building, because they only care about their own compensation or the prestige that comes with it (Akerlof, 2007). There is also an ongoing debate about the third neutrality result, and some economists argue that the Phillips curve, i.e. the trade-off between unemployment and inflation, might still exist. Especially when central banks target very low levels of inflation of 0 to 2 percent, this (almost perfect) price stability might come at the cost of higher long-run rates of unemployment. Assuming that economic actors form rational expectations (the fourth result) is crucial for the world of Econs; however, it does not resemble reality, and in recent years fields like behavioural economics have evolved in response. There is compelling evidence that Humans are not Econs, and good examples questioning the assumption of rational expectations are herd behaviour and risk aversion. Lastly, there are many reasons why Ricardian Equivalence does not hold in today’s economy as opposed to the world of Econs. Akerlof (2007) points out that there are, for example, childless families, uncertainty about one’s date of death, tax distortions, and a mere lack of foresight about the effect of inter-generational transfer payments on future taxes. One can easily argue that pensioners are unlikely to take into account that their social security payments will increase the debt burden for generations to come.

In sum, the New Classical school of thought has created a hypothetical economic system in which utility-maximising individuals and profit-maximising firms are well-behaved and always make optimal decisions, leading to the five neutrality results described above. In practice, this is far from reality and one of the reasons for the revival of Keynesian thinking in the form of the New Keynesian school of thought. In addition, the failure of New Classical Macroeconomics has opened the door for new ideas, such as behavioural economics and other unconventional schools of thought. The failure of many economists and macroeconomic models to predict the global financial crisis also underscored the need for such fresh ideas. What is more, it showed that we do not live in a world of Econs but in a world of Humans, as Richard Thaler and Cass Sunstein aptly put it.

Thanks for reading!



Akerlof, G.A. (2007). The Missing Motivation in Macroeconomics. American Economic Review, 97(1), 5-36. DOI: 10.1257/aer.97.1.5

Krugman, P., and Wells, R. (2009). Macroeconomics. New York, NY: Worth Publishers.

Lucas, R. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, Elsevier, 1(1), 19-46.

Modigliani, F., and Miller, M.H. (1958). The Cost of Capital, Corporation Finance and the Theory of Investment. American Economic Review, 48(3), 261–297.

Thaler, R.H., and Sunstein, C.R. (2009). Nudge: Improving Decisions About Health, Wealth, and Happiness. New York, NY: Penguin Books.


New Zealand Government Debt and Budget Balance 2003–2012

In today’s post I am going to look at the New Zealand government’s budget balance and debt over the period from 2003 to 2012. In particular, I am going to focus on changes in government debt in response to the global downturn of 2008/09 and some of the reasons for the country’s negative budget balance. In the last part of today’s post I will comment on the claim that the government did nothing in response to the recession.

Tax revenues and expenses, as well as central government debt for the period from 2003 to 2012 are graphed in figure 1. This is complemented with an overview on subsidies and other transfers (welfare payments) both in absolute terms as well as a share of government expenses in figure 2. Nominal data series are deflated to constant 2010 levels with the use of the CPI to account for inflation.

Figure 1: New Zealand government tax revenues, expenses and debt, 2003–2012
(Source: World Bank, 2016c)

New Zealand’s tax revenues and expenses were roughly balanced until 2008. Government expenses started at around 30.4 percent of GDP in 2003 and increased slowly to 32.9 percent of GDP in 2008. Likewise, tax revenues increased slowly from 29.2 percent of GDP (2003) to 31.8 percent in 2008, with the highest tax take recorded in 2006 (32 percent). Over the same period government debt decreased by almost 11.8 percentage points to 36.4 percent of GDP in 2008. After 2008, however, the government’s tax revenues and expenses diverged significantly. While taxes fell below 30 percent, government expenses increased by almost 13 percentage points within only one year. After stagnating in 2009/10, expenses increased further by 6.5 percentage points to an all-time high of 52.8 percent of GDP in 2011. This trend caused government debt to rise from 36.4 percent (2008) to 67.9 percent of GDP by 2012, an 86.5 percent increase in debt in only four years. In 2012, government expenses came down to 45.6 percent of GDP but remained considerably higher than in the early 2000s before the global financial crisis.

Figure 2 looks at the New Zealand government’s expenses more closely. In general, government expenses include (1) compensation of employees, (2) goods and services expenses, (3) interest payments, (4) subsidies and other transfers, and (5) other expenses. It can be shown that the spike in government expenses in 2008/09 was mainly caused by the fourth category. Subsidies and other transfers are defined as “all unrequited, nonrepayable transfers on current account to private and public enterprises; grants to foreign governments, international organizations, and other government units; and social security, social assistance benefits, and employer social benefits in cash and in kind” (World Bank, 2016b). However, the lion’s share of this category is welfare payments made by the New Zealand government.

Figure 2: New Zealand government subsidies and other transfers, 2003–2012
(Source: World Bank, 2016c)

While subsidies and other transfers remained relatively constant at slightly above $20 billion from 2003 to 2008, they increased by more than $32 billion from 2008 to 2009, a 143 percent increase within only one year. The rise was mainly driven by increased government spending on family assistance to low-income households. The lowest income decile saw its transfers rise by more than 6 percent of disposable income, while the second decile saw an increase of almost 10 percent from 2006/07 to 2009/10. Overall, nine out of ten deciles benefited from an increase in transfers during this period. This includes Working for Families, NZS and the Veteran’s Pension, income replacement and the housing supplement (Ball & Ryan, 2013).

The second spike, in 2010/11, was driven by the fifth category due to the two Canterbury earthquakes. The New Zealand government provided short-term income support, financed public infrastructure reconstruction and repairs, and was liable for Earthquake Commission payments to households (Treasury, 2011). Earthquake expenses were expected to total a sum equivalent to 10 percent of GDP, and the net cost to the Crown was estimated at $13.5 billion in 2011 (Doherty, 2011).

Figure 3: New Zealand’s simplified budget balance, 2003–2012
(Source: World Bank, 2016c; own calculations)

Putting expenditures and tax revenues together, one can calculate New Zealand’s budget deficit for the period by subtracting government expenses from tax revenues, as shown in figure 3. It should be noted, though, that this is a simplified calculation of the budget deficit and differs from the cash surplus/deficit quoted in the World Bank database, which includes other revenue such as grants and deducts the net acquisition of nonfinancial assets in addition to expenses (World Bank, 2016a).
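For concreteness, the simplified deficit measure can be sketched in code using the percent-of-GDP figures quoted above. The 2012 tax-revenue value is my own rounded assumption, since the text only states that taxes fell below 30 percent.

```python
# The simplified deficit measure behind figure 3: tax revenue minus
# expenses, both in percent of GDP. The 2008 figures are quoted in the
# text; the 2012 tax-revenue number is an assumption for illustration.

tax_revenue = {2008: 31.8, 2012: 29.0}   # % of GDP; 2012 value assumed
expenses    = {2008: 32.9, 2012: 45.6}   # % of GDP

def simple_balance(year):
    """Tax revenue minus expenses; a negative number is a deficit."""
    return tax_revenue[year] - expenses[year]

for year in (2008, 2012):
    print(year, round(simple_balance(year), 1))
```

Even with a generous guess for 2012 revenues, the gap widens from about one percentage point of GDP in 2008 to well over fifteen in 2012.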

Figure 3 supports the findings from my analysis above, namely that New Zealand has been running large budget deficits since 2009 due to (1) the increase in welfare payments during the 2008/09 recession and (2) the costs related to the Canterbury earthquakes in 2010/11. Hence the claim that the government did nothing in response to the crisis does not hold if one looks at the numbers. The New Zealand government responded with an almost 2.5-fold increase in welfare payments, especially to poorer households through family assistance. The government also had to bear higher costs in terms of unemployment benefits, as the eligible population increased from 18,000 working-age people in June 2008 to 62,000 in June 2010. It introduced a Youth Opportunities package, including initiatives like the Job Ops and Community Max programmes as well as Youth Transition Services, to tackle long-term unemployment through training and jobs funded or subsidised by the government (Ministry of Social Development, 2010).

I hope today’s post provided insights into how New Zealand reacted to the global recession of 2008/09 and shed some light on the magnitude of the government’s spending increases and tax shortfalls over that period.

Thanks for reading!



Ball, C. & Ryan, M. (2013). New Zealand Households and the 2008/09 Recession (New Zealand Treasury Working Paper 13/05). Wellington: The Treasury. Retrieved from:

Doherty, E. (2011). Economic effects of the Canterbury earthquakes (Research Paper December 2011). Wellington: Parliamentary Library. Retrieved from:

Ministry of Social Development (2010). Ministry of Social Development Annual Report 2009/2010. Wellington: New Zealand Government. Retrieved from:

Treasury (2011). Economic and Fiscal Impacts of the Canterbury Earthquakes. Budget Economic and Fiscal Update 2011, 95-101. Retrieved from:

World Bank (2016a). Cash surplus/deficit (% of GDP). Retrieved from:

World Bank (2016b). Subsidies and other transfers (% of expense). Retrieved from:

World Bank (2016c). World Development Indicators: New Zealand [Data file]. Retrieved from:

Voting Behaviour in the United Kingdom – Evidence from the European Social Survey 2012

My paper Applied Econometrics, which I am taking at Auckland University of Technology whilst on student exchange, included a major study of voting behaviour across European countries. The assignment brief was as follows:

Using data from the 2012 European Social Survey write a research report on the factors associated with an individual’s likelihood to vote.

Each student could pick one European country. I decided to focus on the UK and was pleased to conduct empirical work in STATA as part of university. In particular, the goal of the assignment was to become proficient in the use of econometric techniques for dealing with a categorical dependent variable, in this case voter turnout, where people decided to vote (Y=1) or to abstain (Y=0). We were given the choice of using either a logit or a probit model. Before defining our own model, we were advised to carry out a literature review on the determinants of voting behaviour in order to include all significant variables that are commonly used. Thereafter, the study should include an overview of the chosen model and its methodology as well as a discussion of the empirical results, with a focus on testing the results for their trustworthiness and any bias. While I did not correct for heteroskedasticity with the use of robust standard errors (the main criticism in my feedback), I tested for goodness of fit, model misspecification errors, multicollinearity and influential observations. The study concludes with a brief summary of the main findings.
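To give a flavour of the model class without reproducing my STATA estimates, here is a minimal sketch of how a logit specification turns a linear index of voter characteristics into a turnout probability. The variables and coefficients are entirely made up for illustration; the real ones come from fitting the model to the ESS data.

```python
import math

# A sketch of a logit specification for voter turnout. The regressors
# and coefficients below are hypothetical, chosen only to illustrate
# the mechanics of the model, not estimates from the study.

def logit_probability(xb):
    """P(vote = 1) for a linear index xb under the logistic link."""
    return 1 / (1 + math.exp(-xb))

# Hypothetical coefficients: intercept, political interest (0-3 scale)
# and age measured in decades.
beta = {"const": -1.0, "political_interest": 0.8, "age_decades": 0.15}

def turnout_prob(political_interest, age):
    xb = (beta["const"]
          + beta["political_interest"] * political_interest
          + beta["age_decades"] * age / 10)
    return logit_probability(xb)

# A highly interested 50-year-old is predicted to vote with high probability.
print(round(turnout_prob(political_interest=3, age=50), 3))
```

A probit model has exactly the same structure; only the link function (the standard normal CDF instead of the logistic function) differs.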

Overall, I am really proud of my very first ‘study’. I put an immense amount of effort into making the work as polished as possible, which is also why I decided to publish it on my blog. In addition, I hope to be able to use it as a writing sample when applying for Economics graduate school (besides my bachelor’s thesis).

In retrospect, I have learned a lot over the course of my Applied Econometrics paper and I am very thankful that my home university let me choose my fourth paper freely and that, in turn, Auckland University of Technology approved my choice of Applied Econometrics as an elective. I knew that it would be a challenging paper, but it has been a very rewarding experience throughout.

I hope that you enjoy reading my work! The abstract is included below and the complete study is available from here.


Voting Behaviour in the United Kingdom: Evidence from the European Social Survey 2012


Voting is often taken as an indicator for the state of a country’s democratic political system. The study therefore examines voting behaviour in the United Kingdom using data from the European Social Survey 6.0 conducted in 2012. It develops a model based on rational voter theory as well as sociological theories discussed in the literature while controlling for demographic factors. Dimensions included in the sociological approach of the model are deprivation, social capital and civic voluntarism.

The study concludes that British women are significantly more likely to vote than men after controlling for other factors. Other significant demographic factors are family status and age. Age does not exhibit a curvilinear pattern due to life-cycle effects. The deprivation dimension (ethnicity and immigration status) does not have a significant influence in the study while the social capital dimension does turn out to be significant. Trade union membership, religious denomination (Roman Catholic and Anglican) and a composite trust variable measuring one’s trust in others have a significant positive effect on voter turnout in the UK. Civic voluntarism is the most influential dimension for determining participation in the British general elections. Medium and low income households are significantly less likely to vote, ceteris paribus. In the model only tertiary education is a significant positive predictor compared to respondents with primary education only. There is no significant difference between primary and secondary education. Further vocational education does become significant once controlling for influential observations. Political interest and partisanship remain two of the most significant predictors of voting behaviour at the margin. The study concludes that there is a significant relationship between voter mobilisation and a person’s wealth and non-material endowment. This is of concern to ensure representative policies and civic engagement in the future and might also explain recent turnout declines.  A limitation of the study is the low explanatory power of the overall model even if variables are significant at the margin. This is taken as evidence for rational voter theory while being more problematic for sociological approaches.

Droege, J. (2016). Voting Behaviour in the UK: Evidence from the European Social Survey 2012. Auckland University of Technology, Auckland. Retrieved from:

What they do not teach in Undergraduate Macroeconomics

I want to devote today’s post to one of the hot debates in the Economics arena: The Macroeconomics Undergraduate Curriculum.

Currently the New Keynesian (and Neoclassical) school of thought dominates most universities. While postgraduates might have heard about other approaches, Economics undergraduates are mainly trained in New Keynesian (and Neoclassical) economics. Almost all mainstream Macroeconomics textbooks are written by scholars from this school of thought. Some of the most prominent textbook examples that I have encountered are Paul Krugman’s and Gregory Mankiw’s Macroeconomics or Blanchard, Amighini and Giavazzi’s Macroeconomics – A European Perspective.

Given the dominance of New Keynesianism in the Economics profession today, there is little pressure for pluralism. Other economic schools of thought rarely sneak into the lecture hall. This creates a more or less self-nurturing circle: teaching mainstream economics nurtures ‘orthodox’ economic thinking and leaves little room for its counterpart, often referred to as ‘heterodox’ economics. However, given that the aim of the undergraduate Economics degree is to communicate the fundamentals of Economics, shouldn’t it teach all current approaches? I think that we are talking about the concept of ‘equality of opportunity’ here. Adopting the Stanford Encyclopedia of Philosophy’s definition (2015), equality of opportunity means that “the assignment of individuals to places in the social hierarchy is determined by some form of competitive process, and all members of society are eligible to compete on equal terms”. I would argue that this should also hold for economic schools of thought. Using the same terminology, the assignment of economic schools of thought to places in the overall hierarchy of economic thinking should be based on how well they perform in explaining our economy. Most importantly, all schools of thought should be eligible to compete on equal terms. This call for pluralism does not imply a defeat of New Keynesian economics. It merely recognises that diversity in economic thinking adds value to the field: it enables critical thinking, challenges conventional wisdom and scrutinises some of the simplifying assumptions in mainstream economic models.

It is vital to expose Economics undergraduates to a more diversified Macroeconomics curriculum. How shall undergraduates develop their own brand, i.e. their very own ‘economic thinking’, if one half of Macroeconomics is withheld from them? I am far from advocating turning all Economics undergraduates into Post-Keynesians. However, I think that the Post-Keynesian critique of neoclassical economics, as well as other schools of thought, has to sneak into the lecture hall as soon as possible. Teaching other approaches will enable Economics undergraduates to look over the rim of the Macroeconomics textbook teacup. We have to acknowledge that the standard macroeconomic assumptions are not carved in stone and do not have to be taken for granted. At the moment, assumptions like rising marginal costs and diminishing marginal productivity, or the view that the financial sector does not matter because it merely redistributes money from patient economic actors (lenders) to impatient actors (borrowers), are often perceived to be facts in their own right. They are rarely challenged, neither in the classroom nor in the core textbooks.

Yet I was lucky, because my Macroeconomics professor touched upon Minsky’s Financial Instability Hypothesis and Fisher’s Debt Deflation Theory in the last lecture on Friday, which dealt with business cycles and the struggle of neoclassical economics to explain the booms and busts modern economies encounter. The take-away homework of Friday’s lecture was to read Minsky (1992), and we will continue with Minsky and Fisher in the next two weeks in the context of the Global Financial Crisis! So I took the homework as an opportunity to dive into the Post-Keynesian field. I was lucky to find comprehensive introductions to Post-Keynesian economics by Engelbert Stockhammer (2014) and by Steve Keen (author of Debunking Economics; 2014a; 2014b), and I have put the links in the references of today’s post. I would suggest starting with the lecture by Engelbert Stockhammer. In this introductory lecture, Stockhammer provides an overview of the three main themes in Post-Keynesian economics: (1) fundamental uncertainty, (2) social conflict and (3) effective demand.

First, Post-Keynesians acknowledge that ‘we don’t know’: people do not necessarily act rationally but instead rely on conventions. People tend to believe that the future mirrors the past, and in this world there is also the possibility of herd behaviour. In addition, due to fundamental uncertainty in the economy, money fulfils a different function than in neoclassical economics. In Post-Keynesianism, money becomes a means to deal with uncertainty (liquidity preference), and economic actors can maintain flexibility by holding liquid assets. Post-Keynesians also recognise that in a world plagued by fundamental uncertainty there can be liquidity crises and panics (Stockhammer, 2014).

Second, in Post-Keynesian models there is social conflict due to asymmetries in the distribution of resources and in investment decision making. Stockhammer (2014) points out that these models often have three classes: workers, capitalists and rentiers. In the Post-Keynesian economy, capitalists hire and fire workers, so workers face inherent job insecurity and potentially involuntary unemployment. Capitalists are the main investors, while rentiers advance capital and collect interest and dividends. Distributional effects derive from differing marginal propensities to consume, i.e. workers have a higher MPC than capitalists. Overall, institutions are in place to address these inherent social conflicts. However, inflation persists in the Post-Keynesian economic system as the “outcome of unresolved distributional effects” (Stockhammer, 2014).

Third, there is the concept of effective demand. In particular, the investment-savings identity is read as I(Y) = S(Y), and Stockhammer (2014) points out that it is income which adjusts to bring investment and saving into equilibrium, rather than the interest rate. Whereas in neoclassical economics there is something like a natural interest rate which equilibrates investment and saving, in Post-Keynesian economics this function is taken over by income. It should be noted that saving places no constraint on investment; rather, it is investment that drives the equation. This holds because investment is determined largely by the availability of finance (from banks) rather than by the amount of savings available in the economy. This assumption seems plausible for modern economies with fractional reserve banking, as banks are not only intermediaries but also primary lenders. Overall, Post-Keynesians argue that investment expenditures – not being constrained by saving – are the main cause of business cycles. Large fluctuations in investment growth drive GDP growth, while consumption growth is a lot less volatile and therefore not the primary cause of booms and busts.
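The income-adjustment story can be made concrete with the simplest possible saving function. In the sketch below, saving is S(Y) = sY with an illustrative propensity to save, investment is pinned down by finance, and income settles at the familiar multiplier result Y = I/s. The numbers are mine, not Stockhammer’s.

```python
# Income doing the equilibrating: with a simple saving function
# S(Y) = s*Y and investment fixed by the availability of finance,
# income settles where S(Y) = I, i.e. Y = I/s (the multiplier result).
# The propensity to save and the investment level are illustrative.

def equilibrium_income(investment, s):
    """Solve s*Y = investment for Y."""
    return investment / s

s = 0.2    # marginal propensity to save (illustrative)
I = 100    # investment, set by finance rather than by prior saving
Y = equilibrium_income(I, s)
print(Y)        # income at which saving matches investment
print(1 / s)    # the multiplier dY/dI
```

A swing in investment is thus amplified one-for-s into income, which is exactly why Post-Keynesians see investment fluctuations as the engine of the cycle.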

In addition to these three themes – fundamental uncertainty, social conflict and effective demand – Post-Keynesianism also takes into account involuntary unemployment and a bigger role for money and finance. While neoclassical economics argues for a natural rate of unemployment and a self-adjusting labour market in the long run, there can be involuntary unemployment in Post-Keynesian models. They argue that real wage cuts have a contractionary effect and that the labour and goods markets are therefore linked. In terms of money and finance, money is endogenous rather than exogenous in this school of thought. While central banks can set the official interest rate, it is the banks that lend to the public at a rate which reflects their own liquidity preference: banks mark up a risk premium on the central bank rate as they see fit given the state of the economy. Furthermore, in Post-Keynesian economics financial markets suffer from instability, aggravating booms and busts and causing debt cycles (see also Minsky’s Financial Instability Hypothesis). This part of Post-Keynesian economics sounds a lot like one of the causes of the Global Financial Crisis and makes a compelling argument given the recent events in the global economy. Financial institutions are eager to lend during a prolonged boom, but credit freezes up as the economic outlook deteriorates because the banks’ liquidity preference fundamentally changes. In this way, financial institutions have the capacity to fuel investment booms (sub-prime lending).

In his introduction to Post-Keynesian Economics, Engelbert Stockhammer also provides an overview of the key differences between the economic schools of thought, with which I want to end my post today. The table reproduced below neatly summarises the differences regarding key concepts, behaviour, markets, money, unemployment and policy recommendations:

              Neoclassical theory                          Keynesian theory
Key concepts  Rational behaviour, equilibrium              Effective demand, ‘animal spirits’
Behaviour     Rational behaviour by selfish individuals    Animal spirits (non-rational) and conventional behaviour
Markets       Market clearing via price adjustment         Some markets do not clear
Money         Classical dichotomy (money is neutral)       ‘Money matters’ (has real effects)
Unemployment  Voluntary or due to rigidities               Involuntary, due to lack of demand in goods markets
Policy        Laissez-faire: markets are self-regulating   Market economies are unstable and result in
              and government should not intervene          unemployment; government should intervene

(Source: Stockhammer, 2014, YouTube Video ~ 43:51-44:12 min)

There is clearly more to Post-Keynesianism than what I covered in my post today. For a more comprehensive introduction, Steve Keen recommends the book The Elgar Companion to Post Keynesian Economics, edited by J.E. King (2012). Personally, I found this discourse very rewarding and enlightening, and I think that this side of macroeconomics is too important to be left out of the undergraduate Economics curriculum. Pluralism in economic thinking at university should receive more attention, and I will certainly follow up more on Post-Keynesianism and on other economic schools of thought in the future.

Thanks for reading,




Keen, S. [ProfSteveKeen]. (2014a, December 2). Free University Berlin: Demand, Competition and Money [Video file].

Keen, S. [ProfSteveKeen]. (2014b, December 7). Hamburg 2014: Post Keynesian economics, falling marginal cost, and money [Video file].

King, J.E. (2012). The Elgar Companion to Post Keynesian Economics (2nd ed.). Cheltenham, UK: Edward Elgar Publishing.

Minsky, H.P. (1992). The Financial Instability Hypothesis (Working Paper No. 74). Annandale-on-Hudson, NY: Levy Economics Institute of Bard College. Retrieved from:

Stanford Encyclopedia of Philosophy (2015, March 25). Equality of Opportunity. Retrieved from:

Stockhammer, E. [Rethinking Economics]. (2014, October 21). Rethinking Economics: Stockhammer’s Intro to Post-Keynesian Economics, London 2014 [Video file].  Retrieved from:


Germany’s Energiewende – The New Electric Mobility Strategy

I am pleased to see that Germany continues to drive its energy transition. The so-called ‘Energiewende’ (German for energy transition) is fundamentally overhauling the country’s energy concept. The three pillars of the new energy concept are reliability, environmental sustainability and economic viability. The government’s vision is to transform the country into a role model for energy efficiency and a green economy, coupled with competitive energy prices and a high level of prosperity (BMWi, 2010). The four main political objectives of the energy transition are to combat climate change, to avoid the risks of nuclear power, to improve Germany’s energy security and to increase competition and growth in the sector (Pescia and Graichen, 2015). But there are more potential benefits, including the reduction of energy imports and hence of oil dependency and exposure to external energy supply shocks, as well as the strengthening of local economies and the provision of social justice (Morris and Pehnt, 2015).

In order to achieve the ambitious vision, the government’s agenda includes:

  1. Cost-efficient expansion of renewables, e.g. expansion of offshore and onshore wind farming and increasing sustainability and efficiency in the use of bioenergy
  2. Enhancing energy efficiency of private households, the industry and the public sector, e.g. the modernisation campaign for buildings with the vision of energy-efficient buildings by 2050
  3. Shifting the energy mix away from nuclear power and fossil-fuel power plants toward renewable energy sources
  4. Improvements in the country’s grid infrastructure and storage technologies with demand-responsive electricity generation
  5. Electric mobility strategy with one million electric vehicles on Germany’s streets by 2020 and six million by 2030
  6. Energy research programme with focus on innovation and new technologies regarding renewable energies, energy efficiency and storage methods (BMWi, 2010).

Although some of the policy measures the government has adopted are debatable, the overall plan is clearly well thought out. A month ago I dedicated a blog post to the idea of ecological fiscal reforms (green tax shift) and eco-social market economies. In that post I used Germany as a textbook example of the wide-ranging benefits of such green reforms.

In my opinion, the ‘Energiewende’ provides the necessary nudge to the industry, consumers as well as the public sector to enhance their energy efficiency and sustainability. It reshapes the incentives of economic actors in favour of green research, innovation and consumption. In addition, it is also a poster child for demonstrating that “coherent government policy can transform an industry” and that it is possible to “to blend low-risk feed-in tariffs with market price signals” (Fares, 2014).

The motivation for today’s post stems from the fact that Germany is now starting to implement its electric mobility strategy (item 5 on the agenda above). It is about to introduce a new nudge targeting electric cars. In particular, the German Federal Cabinet has just approved a new legislative package for the preferential treatment of electric cars. It will include a subsidy of 4,000 Euro for the purchase of a new electric car and 3,000 Euro for the purchase of a hybrid car. In addition, electric cars will be exempt from the motor vehicle tax for a period of 10 years (Tagesschau, 2016). This initiative for electric mobility will be funded jointly by the government and the automobile industry, each contributing 0.6 billion Euro. According to the government, Daimler, VW and BMW have already agreed to the 50:50 split in costs (ZEIT, 2016). The initiative will be coupled with the roll-out of charging points, which, in turn, will be funded by the Federal Government with another 300 million Euro of public funds.
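A quick back-of-the-envelope calculation (my own, using the figures reported above) shows how far the joint fund could stretch at most:

```python
fund = 1.2e9           # joint fund in Euro: 0.6 bn government + 0.6 bn industry
ev_subsidy = 4000      # Euro per new electric car
hybrid_subsidy = 3000  # Euro per new hybrid car

# Upper bound if the entire fund were spent on full electric-car subsidies:
max_evs = fund / ev_subsidy
print(int(max_evs))    # 300,000 subsidised electric cars at most
```

Even in this generous scenario the fund covers fewer vehicles than the short-run target of 500,000 electric and hybrid cars, so the subsidy is best read as a nudge rather than the sole driver of adoption.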

The main goal of the latest initiative for electric mobility is to achieve a more than ten-fold increase in the number of electric and hybrid cars, from currently fewer than 50,000 to more than 500,000 in the short run and to more than one million in the medium run (Tagesschau, 2016). As noted earlier, electric mobility is at the heart of the country’s energy transition. Transport is currently one of the main drivers of Germany’s oil dependence. It continues to rely heavily on fossil fuels rather than renewable energy sources, despite efforts like the development of the National Hydrogen and Fuel Cell Technology Innovation Programme (BMWi, 2010). This is why the government is now taking action. It is starting to pave the way for the preferential treatment of electric cars in order to increase the incentives for both fleet operators and first-time private buyers to purchase an electric car, and to drive its energy transition also in the area of transport.

Overall, the legislative package still has to be discussed and approved by the German Federal Parliament and Federal Council. However, the package is likely to go through shortly, with the subsidy for the purchase of electric and hybrid cars expected to become available as early as May. Subsidies will be claimable through an online application facility (Tagesschau, 2016). Interesting times lie ahead; in particular, it remains to be seen whether the subsidy will be sufficient to increase the adoption of electric mobility. Electric cars continue to carry an excessive price tag for their zero-emissions image. Even under the assumption that both fleet operators and first-time private buyers care about the image associated with a zero-emission vehicle (BMWi, 2010), it is not clear whether this, together with the government’s subsidy and tax exemption, is an incentive large enough to justify the higher initial investment costs. One should not forget that it is ultimately the price which determines demand (and supply). The initiative has the potential to break ground, but it is unlikely to turn the larger share of society into electric car users; at least not yet. Still, I would argue that we are heading in the right direction thanks to the right policy mix. Firstly, Germany focuses on competition and market orientation. Secondly, it introduces incentives in favour of greener transportation without restricting society’s choices, as well as important incentives for green innovation. Both are key to rethinking transportation and mobility in a century where renewable energy sources are clearly on the rise.

 Thanks for reading,


BMWi (2010). Energy Concept for an Environmentally Sound, Reliable and Affordable Energy Supply. Berlin: Federal Ministry of Economics and Technology. Retrieved from:

Fares (2014, 7 October). Energiewende. Two Energy Lessons for the United States from Germany. Retrieved from:

Morris, C., and Pehnt, M. (2015). Energy Transition: The German Energiewende. Berlin: Heinrich Böll Stiftung. Retrieved from:

Pescia, D., and Graichen, P. (2015). Understanding the Energiewende: FAQ on the ongoing transition of the German power system. Berlin: Agora Energiewende. Retrieved from:

Tagesschau (2016, 18 May). Kaufprämien und Steuerboni für Elektroautos: Kabinett beschließt Förderung [Purchase premiums and tax bonuses for electric cars: Cabinet approves support]. Tagesschau Online. Retrieved from:

ZEIT (2016, 27 April). 4.000 Euro Prämie für Kauf eines Elektroautos. ZEIT ONLINE. Retrieved from:

The Global Financial Crisis in the AS/AD Model – The Case of the United States

In today’s post I will take a look at the Global Financial Crisis, but from a rather different perspective than the one usually found in the media. In particular, I will use the basic model of aggregate demand (AD) and aggregate supply (AS) to explain the changes in real GDP, CPI inflation and the unemployment rate for the United States. The data for the analysis comes from the World Development Indicators database of the World Bank.

Let’s start with a brief overview of the economic indicators of the United States in the 21st century. For this I have prepared two diagrams. Figure 1 contains the United States’ real GDP (constant 2005 US$) together with the country’s annual percentage growth rate of real GDP over the period from 2000 to 2014. The unemployment rate (ILO estimates) and inflation, as measured by the Consumer Price Index (CPI), are shown in figure 2.

(Source: World Bank, 2016)

First, it can be seen that the USA’s real GDP has risen steadily over the period except in 2008 and 2009, which corresponds to the GFC and the last global recession. In 2000, real GDP stood at around $11.6 trillion and it had risen to $14.8 trillion by 2014. This is a 28 percent increase in real GDP over the complete period. However, despite an overall increase in GDP, GDP growth has been considerably volatile over the period. The US economy experienced the highest GDP growth in 2000 with a positive growth rate of 4.1 percent. The lowest GDP growth occurred in 2009 with a negative growth rate of 2.8 percent. This corresponds to a variation of 6.9 percentage points. The US surpassed its pre-downturn output level in 2011, meaning that it took the economy almost two years to recover from the 2008/09 recession. After the GFC, the economy experienced more even growth from 2010 to 2014 compared to the clear rise and fall in the growth rate over the period from 2001 to 2007, which was driven by the US housing boom amongst other factors.
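As a quick sanity check on the growth figure, the cumulative increase can be recomputed from the two endpoint values reported above:

```python
gdp_2000 = 11.6e12  # real GDP in 2000, constant 2005 US$
gdp_2014 = 14.8e12  # real GDP in 2014, constant 2005 US$

total_growth = (gdp_2014 - gdp_2000) / gdp_2000
print(round(total_growth * 100, 1))  # about 27.6, i.e. roughly 28 percent
```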

(Source: World Bank, 2016)

Unemployment ranged from 4.1 percent (2000) to 9.7 percent (2010) over the period from 2000 to 2014. At the beginning of the century it rose until 2003 and then fell until 2006. After stagnating in 2007, unemployment more than doubled by the end of 2010. Since then, unemployment has fallen significantly to 6.2 percent. However, this is still 1.5 percentage points higher than the pre-downturn unemployment rate.

Inflation ranged from 3.8 percent in 2008 to deflation of 0.4 percent in 2009. The highest volatility in the inflation rate therefore occurred during the GFC. At the beginning of the century inflation came down to 1.6 percent (2002) and thereafter accelerated to 3.4 percent in 2005. During the period preceding the GFC it remained at a relatively high level compared to the years before. After a drop of roughly 4 percentage points from 2008 to 2009, inflation rebounded to 3.2 percent in 2011 but has recently fallen below the 2 percent level.

Figure 3: Supply and Demand Shocks during the GFC

(Source: Own work)

The fluctuations in real GDP, unemployment and inflation during the GFC can be summarised in the AS/AD model as follows (figure 3). Firstly, from 2007 to 2008 the US economy faced a negative supply shock (1) due to the collapse of a domestic housing bubble, a doubling of the oil price, as well as large price increases in other commodities (Krugman, 2009). This shifted the short run aggregate supply curve (SRAS) to the left. This negative supply shock explains the beginning of the GFC, i.e. the stagflation of 2007/08, because a negative supply shock triggers both rising inflation and lower output, as well as higher unemployment. However, this is not the end of the story, because the negative supply shock was followed by a negative demand shock in 2008/09. The aggregate demand (AD) curve also shifted to the left (2), causing disinflation in 2008 and deflation in 2009, negative output growth of almost 3 percent in 2009, as well as a further increase in the unemployment rate to almost 10 percent. It took the AD curve until 2011 to shift back to its initial position (3) as the government intervened and consumer and business confidence recovered. This caused inflation and GDP growth to pick up again as shown in figure 2. Subsequently, unemployment fell as demand recovered. Since 2011 it can be observed that unemployment and inflation are falling in tandem. This can be explained by the shifting of the SRAS curve back to its initial position (4) after the negative supply shock of 2007/08.
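The stagflation step (1) can be reproduced with a stylised linear AD/AS system. The intercepts and slopes below are invented purely to show the direction of the movements:

```python
def equilibrium(ad_intercept, ad_slope, as_intercept, as_slope):
    """Intersection of a linear AD curve P = ad_intercept - ad_slope*Y
    and a linear SRAS curve P = as_intercept + as_slope*Y."""
    y = (ad_intercept - as_intercept) / (ad_slope + as_slope)
    return y, as_intercept + as_slope * y

# A negative supply shock raises the SRAS intercept (curve shifts left/up):
y0, p0 = equilibrium(10, 1, 2, 1)  # before the shock
y1, p1 = equilibrium(10, 1, 4, 1)  # after the shock
print(y0, p0, y1, p1)  # output falls while the price level rises (stagflation)
```

Shifting the AD intercept down instead reproduces step (2): both output and the price level fall, matching the disinflation and output loss of 2008/09.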

In sum, there are four shifts – two shifts in the AD curve and two shifts in the AS curve – based on the economic indicators of the US. However, it should be noted that this is a rather simplified version of the GFC story in the AS/AD model. There are certainly other factors important for explaining the GFC which are not captured in the model. Still, the model does a good job of explaining the changes in unemployment, inflation and GDP (growth). It was actually part of an assignment for one of my classes, with the goal of inferring from the data which of the curves in the model moved in which direction.

Many thanks for reading,


Krugman, P., & Wells, R. (2009). Macroeconomics (2nd ed.). New York, N.Y.: Worth Publishers.

World Bank (2016). World Development Indicators: United States [Data file]. Retrieved from:



Smoking Behaviour – Evidence from the European Social Survey 2014/15

In today’s post I want to take a closer look at how to interpret the multinomial logistic regression output of STATA, using the example of smoking behaviour. In Econometrics the multinomial logit model (MLM) is used if the dependent variable on the left-hand side of the equation has several discrete alternatives as opposed to a binary variable (0=no, 1=yes). Furthermore, the independent variables on the right-hand side can be based on chooser-specific data (e.g. gender, education or income) but also on choice-specific data. However, I am going to focus only on chooser-specific data today. My model is therefore going to explain how the respondents’ characteristics affect their choice of an alternative among a set of alternatives.

In particular the model is going to establish which characteristics of respondents are determinants of smoking behaviour. For this I have obtained the European Social Survey 7.0 which was conducted in 2014. I am using edition 1.0 which was released on 28 October 2015. Among the aims of the ESS are monitoring changes in public attitudes as well as developing a series of European social and attitudinal indicators. The seventh round of the survey covered 22 countries and 28,221 individuals. The survey consists of an hour-long face-to-face interview with core sections as well as rotating modules. The core sections cover the socio-demographic profile of the respondents as well as things like social trust, political interest, socio-political orientations and human values. In the seventh round the two rotating modules covered (1) social inequalities in health and their determinants and (2) respondents’ attitudes towards immigration. One of the questions in the first rotating module assesses respondents’ smoking behaviour. Respondents were asked which of the following descriptions best described their smoking behaviour:

  1. I smoke daily
  2. I smoke but not every day
  3. I don’t smoke but I used to
  4. I have only smoked a few times
  5. I have never smoked

This allows me to construct a detailed multinomial logit model in which the first and second answer define current smokers, the third answer equals former daily smokers and the fourth former party smokers, while the fifth answer to the question defines respondents that have never smoked.
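In code, this recoding is a simple lookup from the five ESS answer codes to the four analysis categories (the category numbering follows the coding overview below):

```python
# Map the five ESS answer codes to the four analysis categories of the model
# (1 = never smoked, ..., 4 = current smoker, as in the coding overview).
recode = {1: 4,  # I smoke daily                  -> current smoker
          2: 4,  # I smoke but not every day      -> current smoker
          3: 3,  # I don't smoke but I used to    -> former daily smoker
          4: 2,  # I have only smoked a few times -> former party smoker
          5: 1}  # I have never smoked            -> never smoked

labels = {1: "never smoked", 2: "former party smoker",
          3: "former daily smoker", 4: "current smoker"}

print(labels[recode[3]])  # former daily smoker
```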

In terms of the independent variables, I include a range of demographic control factors, namely age, gender, ethnicity, immigration and family status, education, income and employment status. In addition, I include two dummy variables for mild and significant depression as well as four dummy variables for various levels of alcohol consumption. For more information regarding the coding of the variables, please refer to the coding overview below.

Coding overview

  • Smoking: never smoked=1, former party smoker (I have only smoked a few times)=2, former daily smoker (I used to smoke)=3, current smoker (I smoke daily or I smoke but not every day)=4
  • Low_educ (ref.): 1 if lower secondary education or less, 0 otherwise
  • Medium_educ: 1 if upper secondary education, 0 otherwise
  • High_education: 1 if post-school education (vocational or tertiary), 0 otherwise
  • Low_income: 1 if income in 1st – 3rd decile, 0 otherwise
  • Medium_income: 1 if income in 4th – 6th decile, 0 otherwise
  • High_income: 1 if income 7th – 10th decile, 0 otherwise
  • Age: age in years
  • Female: 1 if female, 0 otherwise
  • Employed: 1 if respondent was employed in the past 7 days, 0 otherwise
  • Children: 1 if children currently living at home, 0 otherwise
  • Minority: 1 if respondent belongs to a minority ethnic group in country, 0 otherwise
  • Immigrant: 1 if respondent was not born in the country
  • No depression (ref.): 1 if respondent felt depressed none or almost none of the time in the past week, 0 otherwise
  • Mild depression: 1 if respondent felt depressed some of the time in the past week, 0 otherwise
  • High depression: 1 if respondent felt depressed most of the time or all/ almost all of the time in the past week, 0 otherwise
  • Daily drinker: 1 if respondent consumes alcohol every day, 0 otherwise
  • Frequent drinker: 1 if respondent consumes alcohol several times a week, 0 otherwise
  • Weekly drinker: 1 if respondent consumes alcohol once a week, 0 otherwise
  • Monthly drinker: 1 if respondent consumes alcohol 2-3 times a month or once a month, 0 otherwise
  • No/Infrequent drinker (ref.): 1 if respondent consumes alcohol less than once a month or never, 0 otherwise

Summary statistics

Before proceeding to the estimation of the MLM, let’s take a quick look at the summary statistics to ensure that there are no coding errors. With the help of the – sum – command STATA produces an overview of the mean, standard deviation, minimum and maximum of each variable. The table shows no anomalies except for age. When examining the outlier of 114, one can immediately see that this age is likely a coding error, because the respondent is in paid work and not retired. Therefore, age is recoded to missing for this observation. There are still 5 observations with an age of 100 or older. However, a closer look at their responses suggests that they are valid as all of them are retired.
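The age check described above can be sketched as a small cleaning rule. The cut-off of 110 is my own choice for illustration; the logic mirrors the cross-check against retirement status:

```python
def clean_age(age, retired):
    """Recode implausible ages to missing (None).

    An age above 110 for a respondent who is still in paid work (not retired)
    is treated as a coding error, while ages of 100+ are kept when the rest
    of the record (retirement status) is consistent. The 110 threshold is
    illustrative, not taken from the ESS documentation.
    """
    if age is not None and age > 110 and not retired:
        return None
    return age

print(clean_age(114, retired=False))  # None: the coding error in the sample
print(clean_age(102, retired=True))   # 102: plausible, kept
```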

In my sample 42 percent of the respondents said that they have never smoked, while around 10.8 percent are former party smokers, 23.4 percent are former daily smokers and the remaining 23.8 percent are current smokers. In terms of education, 24.42 percent received only little education, 37.84 percent have upper secondary education (medium) and 37.74 percent have post-school education (high). Almost 30 percent of the respondents fall into the lower three income deciles (low income) and around 32 percent fall into the 4th to 6th income decile (middle income), while the remainder of the respondents (38 percent) fall into the high income category. It should be noted that the decile cut-offs vary between countries, so these income categories differ in the exact amount of money they represent across countries. However, the interpretation does not change, because respondents are compared to national standards and have low, medium or high incomes relative to the population of their respective country.

The age of respondents ranges from 14 to 104 years. The median age is 49 and therefore very close to the mean. Around 68 percent of the respondents in the sample are between 30 and 68 years old. There are slightly more females (52 percent) in the sample than males. 53 percent of the respondents said that they were employed during the last 7 days. Almost 33 percent of the respondents have children living at home, 5.65 percent belong to a minority, and 10.75 percent were born in another country. 67.74 percent of the respondents said that they felt depressed none or almost none of the time in the past week. On the other hand, 26.55 percent felt depressed some of the time (mild depression) and 5.7 percent felt depressed most of the time or all/almost all of the time (high depression). In the sample, 6.34 percent consume alcohol every day (daily drinker). 16.61 percent drink alcohol several times a week (frequent drinker), while 19.56 percent of the respondents drink alcohol once a week (weekly drinker). Monthly drinkers (2-3 times a month or once a month) are 24.26 percent of the respondents while the remainder are infrequent drinkers that either drink less than once a month or never (33.22 percent).

Regression results

I will present the regression results both in the form of coefficients and as relative risk ratios (RRR). Let’s begin with the coefficients and a general analysis of my model.


First, it can be seen that the model includes only 22,018 observations as STATA deletes incomplete cases list-wise. Second, the Likelihood Ratio Chi-Square Statistic is 3182.55. The corresponding LR Chi-Square Test tests the assumption that the coefficients of all independent variables are jointly equal to zero. The probability of obtaining an LR Test statistic of 3182.55 or more if all coefficients were jointly equal to zero (the null hypothesis) is practically zero. It can be concluded that the model as a whole is significant.

Thereafter one can interpret the significance of the coefficients. In the first panel, ‘former party smoker’, minority and mild depression are significant at the 10 percent level. Immigrant is significant at the 5 percent level, and medium and high education, high income, age, female, children and all drinker dummies are significant at the 1 percent level. In the second panel, ‘former daily smoker’, the variables medium and high income are significant at the 5 percent level. The dummy variables on education and drinking behaviour as well as age, female, children, minority and high depression are significant at the 1 percent level. In the third panel, ‘current smoker’, the dummy variables on depression, drinking behaviour and income are all significant at the 1 percent level. High education, age, female, employed and children are also significant at the 1 percent level.

The sign of the coefficients can be interpreted as follows: a positive coefficient indicates increased odds of outcome 2 over 1, outcome 3 over 1, or outcome 4 over 1, while a negative coefficient indicates decreased odds. The regression result always has to be interpreted relative to the base outcome, which is that the respondent has never smoked. For example, higher incomes increase the odds of being a former party smoker or former daily smoker but decrease the odds of being a current smoker compared to the odds of having never smoked, ceteris paribus. Similarly, having obtained more education is associated with an increase in the odds of being a former party smoker or former daily smoker (only medium education) but with a decrease in the odds of being a current smoker compared to the odds of having never smoked, everything else held constant. Being female reduces the odds of all outcomes compared to the odds of the base outcome, ceteris paribus. The other coefficients can be interpreted in a similar fashion. However, the magnitudes of the coefficients cannot be interpreted easily, which is why it is common to turn to relative risk ratios instead.

Relative Risk Ratios

Relative risk ratios (RRR) can be interpreted in a similar manner to odds ratios in the ordinary logit model. They are merely the exponentiated MLM coefficients from the regression output above. STATA can compute them automatically if one adds the rrr option to the – mlogit – command:

mlogit smoking  medium_educ high_educ  medium_income high_income age female employed children minority immigrant  mild_depression high_depression  daily_drinker frequent_drinker  weekly_drinker  monthly_drinker, baseoutcome(1) rrr
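The relationship between coefficients and RRRs is just exponentiation, which is easy to verify by hand (the coefficient of 0.182 below is an invented illustrative value, not taken from the output):

```python
import math

def rrr(coef):
    """Relative risk ratio: the exponentiated multinomial logit coefficient."""
    return math.exp(coef)

print(rrr(0.0))              # 1.0: a zero coefficient leaves the relative risk unchanged
print(round(rrr(0.182), 2))  # about 1.20, i.e. roughly 20 percent higher relative risk
```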


Let’s start with the RRRs in the first panel. The relative risk of being a former party smoker vs. never smoker for respondents with medium education compared to respondents with low education is 1.20, i.e. 20 percent higher. Likewise, the relative risk of being a former party smoker vs. never smoker for respondents with high education compared to respondents with low education is 1.37, i.e. 37 percent higher, ceteris paribus.

The risk of being a former party smoker vs. never smoker for respondents with a high income relative to respondents with a low income is about 18.85 percent higher, holding everything else constant. The risk of being a former party smoker vs. never smoker falls by about 1.7 percent for each additional year of age, all else being equal. The risk of being a former party smoker vs. never smoker is 21.71 percent lower for females relative to males, when everything else is held constant. The risk of being a former party smoker vs. never smoker is 14.63 percent lower for respondents living with children compared to respondents that do not. The risk of being a former party smoker vs. never smoker is 20.59 percent lower for members of a minority ethnic group compared to non-minorities, ceteris paribus. Similarly, the risk is 17.01 percent lower for immigrants compared to non-immigrants.

Having a mild depression increases the risk of being a former party smoker vs. never smoker by about 10.25 percent compared to no depression. Lastly, the risk of being a former party smoker vs. never smoker is 106.34 percent higher for daily drinkers, 105.48 percent higher for frequent drinkers, 88.62 percent higher for weekly drinkers and 75 percent higher for monthly drinkers compared to less frequent drinkers, ceteris paribus. Hence, drinking alcohol more frequently increases the odds of being a former party smoker over being a never smoker significantly and shows that alcohol and cigarette consumption tend to go together. It does not have to be a causal relationship but one could argue that social drinking induces individuals to at least try smoking once in their lives.

The RRRs in the second panel can be interpreted as follows. Having obtained a medium level of education compared to a low education level increases the odds of being a former daily smoker vs. never smoker by 14.22 percent, ceteris paribus. Likewise, earning a medium income or a high income increases the odds in favour of former daily smoking over never having smoked by 11.64 percent and 12.96 percent, respectively. An additional year of age increases the odds of being a former daily smoker instead of having never smoked by 1.85 percent, holding everything else constant. The odds of being a former daily smoker vs. never smoker are 38.95 percent lower for females compared to males. The risk of being a former daily smoker rather than a never smoker increases by 15.43 percent if the respondent has children living at home, compared to respondents without children at home. The risk of being a former daily smoker vs. never smoker is 36.24 percent lower for minorities compared to non-minorities. High levels of depression increase the odds in favour of being a former daily smoker relative to never having smoked by 25.55 percent. The risk of being a former daily smoker vs. never smoker is 213.84 percent higher for daily drinkers, 158.73 percent higher for frequent drinkers, 80.64 percent higher for weekly drinkers and 59.67 percent higher for monthly drinkers compared to less frequent drinkers, ceteris paribus. Again, drinking alcohol more frequently significantly increases the odds of being a former daily smoker over being a never smoker and confirms the view that alcohol and cigarette consumption tend to go together. Respondents that drink more than once a week are considerably more likely to be former daily smokers than never smokers.

In the third panel the RRRs can be interpreted as follows. Having obtained a high level of education compared to a low education level decreases the odds of being a current smoker vs. never smoker by 45.84 percent, ceteris paribus. Likewise, earning a medium income or a high income decreases the odds in favour of currently smoking over never having smoked by 23.78 percent and 41.65 percent, respectively. An additional year of age decreases the odds of currently smoking over having never smoked by 1.82 percent, holding everything else constant. The odds of being a current vs. never smoker are 32.96 percent lower for females compared to males. The risk of being a current smoker vs. never smoker is 38.27 percent higher for respondents currently employed compared to respondents not currently employed.

Medium levels of depression compared to no depression increase the odds in favour of being a current smoker relative to never having smoked by 31.49 percent. High levels of depression increase the odds in favour of being a current smoker relative to never having smoked by 122.13 percent. The risk of being a current smoker vs. never smoker is 289.93 percent higher for daily drinkers, 166.62 percent higher for frequent drinkers, 82.29 percent higher for weekly drinkers and 41.33 percent higher for monthly drinkers compared to less frequent drinkers, ceteris paribus. Again this confirms that drinking alcohol more frequently significantly increases the odds of being a current smoker over being a never smoker and that alcohol and cigarette consumption tend to go together. Respondents that drink more than once a week are considerably more likely to be current smokers than never smokers.


The risk of being a current smoker rather than a never smoker increases by 11.90 percent if the respondent has children living at home, compared to respondents without children at home. This does not imply a causal relationship in the sense that children cause people to smoke. However, the finding is troublesome in the sense that the RRR was expected to be below one, i.e. that having children at home decreases the odds in favour of being a current smoker over never smoker. The command – adjrr children – can be used to shed light on the relationship between having children living at home and the smoking outcomes. Respondents with children at home are 4.17 percent less likely to be never smokers than respondents without children at home. This group is also 18.08 percent less likely to be former party smokers compared to respondents not living with children. However, this group is 9.57 percent more likely to be former daily smokers and 6.55 percent more likely to be current smokers compared to respondents not living with children at home. In terms of absolute differences, respondents with children at home are 1.52 percentage points more likely to be current smokers than respondents without children at home, on average. They are also 2.33 percentage points more likely to be former daily smokers, on average.

Measures of fit

Having described the findings, I use the – fitstat – command to shed light on the overall goodness of fit.

[STATA output: fitstat]

For example, the adjusted count R-squared measures the proportion of correct predictions beyond the baseline model of always predicting the most frequent outcome (IDRE, 2011). It shows that the share of correct predictions beyond this baseline is 8.6 percent. Hence, while my variables turn out to be significant at the margin, the overall decision to smoke or to have tried smoking still remains largely unexplained by the model. There might be other factors that would do a better job and should be included in the model.
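For intuition, the adjusted count R-squared can be written out by hand: it compares the model's number of correct predictions against the naive strategy of always predicting the most frequent outcome. A small sketch with hypothetical counts (not the actual figures from my sample):

```python
def adjusted_count_r2(n_correct, counts_per_outcome):
    """Adjusted count R-squared: proportion of correct predictions
    beyond always guessing the most frequent outcome."""
    n = sum(counts_per_outcome)
    n_max = max(counts_per_outcome)
    return (n_correct - n_max) / (n - n_max)

# Hypothetical example: 1000 observations, the largest outcome
# category has 500 cases, and the model classifies 543 correctly.
print(round(adjusted_count_r2(543, [500, 300, 120, 80]), 3))  # → 0.086
```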

Marginal effects

STATA allows for the computation of marginal effects with the help of the – margins – command. Marginal effects differ for discrete and continuous variables: for the former they are discrete changes, i.e. from 0 to 1, while for the latter they are instantaneous rates of change. Marginal effects are commonly calculated at the means of the independent variables, so STATA first presents all means before printing the results.

[STATA output: margins]

First of all, note that age is the only continuous variable. All other variables are binary and take only the values 0 or 1. For those variables the marginal effects show how P(Y=1) changes as the binary variable changes from 0 to 1 while all other variables are held constant at their means (Williams, 2016). For example, taking two hypothetical respondents evaluated at the means, the predicted probability of being a current smoker rather than never having smoked is 0.158 greater for daily drinkers and 0.102 greater for frequent drinkers. Another example is education: for two hypothetical respondents evaluated at the sample means, high education reduces the probability of being a current smoker, i.e. the predicted probability is 0.118 smaller for individuals in the high education group than for individuals in the low education group. In contrast, the negative effect of secondary education is considerably smaller and less significant.
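The discrete-change interpretation for a binary regressor can be illustrated without STATA: compute the multinomial-logit probabilities at the means twice, once with the dummy set to 0 and once to 1, and take the difference. The coefficients and the age mean below are hypothetical, not the estimates from my model:

```python
import numpy as np

def mnl_probs(x, B):
    """Multinomial-logit probabilities; B has one coefficient column
    per non-base outcome, the base outcome's column is fixed at zero."""
    eta = np.concatenate([[0.0], x @ B])  # linear indices, base first
    e = np.exp(eta - eta.max())           # subtract max for stability
    return e / e.sum()

# Hypothetical coefficients for [const, age, daily-drinker dummy] and
# three non-base outcomes (base outcome: never smoker)
B = np.array([[-0.5, -1.0, -0.8],
              [0.01, 0.02, 0.01],
              [0.3,  0.5,  0.9]])

age_mean = 47.0  # hypothetical sample mean of age
# Discrete change: flip the binary regressor 0 -> 1, holding age at its mean
p0 = mnl_probs(np.array([1.0, age_mean, 0.0]), B)
p1 = mnl_probs(np.array([1.0, age_mean, 1.0]), B)
print(np.round(p1 - p0, 3))  # marginal effect on each outcome probability
```

The four effects sum to zero by construction, since the outcome probabilities must always add up to one.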

Regression diagnostics: I. Multicollinearity

The model can be tested for multicollinearity, which would inflate the standard errors, with the – collin – command.

[STATA output: collin]

There are different rules of thumb for detecting multicollinearity. The most rigorous is probably a Variance Inflation Factor (VIF) greater than 2, corresponding to a tolerance (1/VIF) below 0.5. My model does not suffer from inflated standard errors; a mean VIF of 1.37 is pretty good.
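For reference, the VIF can be computed by hand: regress each explanatory variable on all the others and take 1/(1 − R²). A quick sketch on simulated data (the variables are hypothetical), where x2 is deliberately made to correlate with x1 while x3 is independent:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: for each regressor, regress it on
    the remaining regressors (plus a constant) and report 1/(1 - R^2)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.6 * x1 + rng.normal(size=200)  # correlated with x1
x3 = rng.normal(size=200)             # independent of the others
print([round(v, 2) for v in vif(np.column_stack([x1, x2, x3]))])
```

The correlated pair shows VIFs clearly above 1, while the independent variable's VIF sits close to 1.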

II. Tests of independent variables

The – mlogtest – command allows for testing the independent variables. There is the option of a likelihood-ratio test (lr) as well as a Wald test (wald). Both test the null hypothesis that all coefficients associated with a given variable are zero (Williams, 2015).

Both tests reject the null hypothesis for all variables at the 1 percent level, except for the immigrant variable, for which the null hypothesis can be rejected at the 5 percent level. Hence each variable's effects are significant in the model.
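Under the hood, the LR variant simply compares the maximized log-likelihoods of the model with and without the variable in question. A sketch with hypothetical log-likelihood values; with 3 non-base outcomes, dropping one variable removes 3 coefficients, and the 1-percent critical value of a chi-squared distribution with 3 degrees of freedom is about 11.34:

```python
def lr_statistic(llf_full, llf_restricted):
    """Likelihood-ratio statistic: 2 * (logL_full - logL_restricted),
    asymptotically chi-squared with df = number of dropped coefficients."""
    return 2.0 * (llf_full - llf_restricted)

# Hypothetical log-likelihoods from fitting the model with and
# without one variable (3 coefficients dropped -> df = 3)
lr = lr_statistic(-1520.4, -1529.9)
print(round(lr, 1))  # → 19.0, above the chi2(3) 1% critical value of 11.34
```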

III. Tests for combining dependent categories

The – mlogtest – command also allows for testing whether categories of the dependent variable should in fact be combined. Again, there is the option of a likelihood-ratio test (lrcomb) as well as a Wald test (combine). The null hypothesis is that all coefficients (except the intercepts) associated with a given pair of alternatives are zero, meaning that the two alternatives can be collapsed for a more efficient estimation (Williams, 2015).

Overall, both the LR and the Wald test reject combining any pair of categories at the 1 percent level. It can be concluded that the outcomes are distinguishable with respect to the variables included in the model.

IV. Tests for independence of irrelevant alternatives

Lastly, the – mlogtest – command can test the independence of irrelevant alternatives (IIA) assumption, which is crucial for the multinomial logit model. If it is violated, one can resort to an alternative-specific multinomial probit or a nested logit model; both relax the IIA assumption (IDRE, 2010). The test for IIA is based on either a Hausman test, a suest-based Hausman test or a Small-Hsiao test. All three tests work in a similar manner: for each alternative in the model, they drop the individuals that chose that particular alternative and then re-estimate the model with the remaining alternatives (Allison, 2012). Because I have 3 alternatives in my model (beyond the base outcome of never having smoked), the tests proceed in three steps: they first drop being a former party smoker (fparty), then being a former daily smoker (fdaily) and lastly being a current smoker (current). If the IIA assumption held, the results of the restricted models (2 alternatives) should not differ from those of the unrestricted model (3 alternatives).
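The Hausman variant of this idea compares the coefficient vector from a restricted model (one alternative dropped) with the corresponding coefficients from the full model; the statistic is a quadratic form in their difference. A toy sketch with hypothetical numbers (note that the difference of the two covariance matrices need not be positive definite, which is exactly how a negative Chi2 can arise in practice):

```python
import numpy as np

def hausman(b_restricted, b_full, V_restricted, V_full):
    """Hausman statistic for the IIA test: compares coefficients from
    the restricted model (one alternative dropped) with the same
    coefficients from the full model. Asymptotically chi-squared
    with len(b) degrees of freedom if the IIA assumption holds."""
    d = b_restricted - b_full
    Vd = V_restricted - V_full
    return d @ np.linalg.inv(Vd) @ d

# Hypothetical 2-coefficient example
b_r = np.array([0.52, -0.31])
b_f = np.array([0.50, -0.30])
V_r = np.array([[0.010, 0.001], [0.001, 0.012]])
V_f = np.array([[0.008, 0.001], [0.001, 0.010]])
print(round(hausman(b_r, b_f, V_r, V_f), 3))  # → 0.25
```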

It should be noted that these tests have been criticized because they are often inconclusive or even contradictory. If you want more information on this, Paul Allison (2012) has devoted a complete blog post to the drawbacks of the three tests. One of the major criticisms is that the Small-Hsiao test produces a different outcome every time because it randomly splits the sample into two halves, and that the Hausman test produces different outcomes if one changes the base category (Sarkisian, n.d.). This is why it is often recommended to focus instead on the Hausman test that uses seemingly unrelated estimation (suest) as its methodology (Long and Freese, 2005).

[STATA output: mlogtest, iia]

In STATA one can obtain the three tests with the command – mlogtest, iia. Firstly, the Hausman test does not provide me with anything, because the test statistic is negative (Chi2 < 0) and my model therefore does not meet the asymptotic assumptions of the test. Second, the suest-based Hausman test provides strong evidence against independence of irrelevant alternatives in the sample: it rejects the null hypothesis at the 1 percent level. However, the third test, the Small-Hsiao test, cannot reject the null hypothesis that the odds are independent of the other alternatives. It thus contradicts the suest-based Hausman test, which is in line with the major criticisms of IIA testing noted earlier. To ensure that a violation of the IIA assumption does not interfere with my results, I should consider running an alternative-specific multinomial probit or a nested logit model. However, I'll leave this for another blog post in the future!

Thanks for reading,


Data and Documentation

ESS Round 7: European Social Survey (2015): ESS-7 2014 Documentation Report. Edition 1.0. Bergen, European Social Survey Data Archive, Norwegian Social Science Data Services for ESS ERIC.

ESS Round 7: European Social Survey Round 7 Data (2014). Data file edition 1.0. Norwegian Social Science Data Services, Norway – Data Archive and distributor of ESS data for ESS ERIC.

Inspiration for the Model

Brown, D.C. (n.d.). Models for Ordered and Unordered Categorical Variables [pdf]. Population Research Center. Retrieved from:


Allison, P. (2012, 8 October). How Relevant is the Independence of Irrelevant Alternatives? Statistical Horizons. Retrieved from:

IDRE (2010, 23 April). Stata Data Analysis Examples: Multinomial Logistic Regression. Institute for Digital Research and Education. Retrieved from:

IDRE (2011, 20 October). FAQ: What are pseudo R-squareds? Institute for Digital Research and Education. Retrieved from:

Long, J., and Freese, J. (2005). Regression Models For Categorical Dependent Variables Using Stata (2nd ed.). College Station, TX: Stata Press.

Sarkisian, N. (n.d.). Sociology 704: Topics in Multivariate Statistics – Multinomial Logit [pdf]. Retrieved from:

Williams, R. (2015, 21 February). Post-Estimation Commands for MLogit [pdf]. Retrieved from:

Williams, R. (2016, 23 January). Marginal Effects for Continuous Variables [pdf]. Retrieved from: