New Zealand’s Output Gap and the OCR

Today's post is ECON101 stuff. It is about output, unemployment, and Okun's Law – one of the basic concepts in economics – in the context of New Zealand for the period from 2000 to 2014. In the second part of the post I will also discuss the effects of the decrease in New Zealand's OCR in March 2016. This is part of one of my assignments for Macroeconomics, so I am taking the opportunity to use it as content for today's post!

So let’s look at the theory first. Okun’s Law states that there is a negative relationship between a country’s output gap and deviations from the natural rate of unemployment (NAIRU), i.e. unemployment higher than normal corresponds to a recessionary (negative) output gap and unemployment lower than normal corresponds to an inflationary (positive) output gap (Krugman & Wells, 2009).

In order to visualize this relationship for New Zealand for the time period from 2000 to 2014, the following inputs are needed:

  • The actual rate of unemployment and actual output from 2000 to 2014
  • The estimated natural rate of unemployment

New Zealand’s unemployment rate (ILO estimates) and its real GDP (constant 2005 US$) can be obtained from the World Bank database. In addition, its natural rate of unemployment comes from the Macroeconomics textbook and is estimated at 5.34 percent, which is the average unemployment rate from 1996 to 2006 (Krugman & Wells, 2009).

After having obtained these two data series and the natural unemployment estimate, potential GDP can be calculated as follows, where the unemployment rates are stated in decimals:
[Formula: Okun's law – calculating potential GDP from actual GDP and the unemployment gap]

[Figure: New Zealand's actual and potential GDP, 2000–2014]
(Source: World Bank, 2016)

We are then ready to plot the results. The diagram above shows New Zealand's actual GDP (constant 2005 US$) and potential output over time. One can see that from 2002 to 2008 potential GDP was below actual GDP. Since 2009 potential GDP has been above actual GDP. Overall, potential GDP has risen steadily over the period except from 2007 to 2008. Actual GDP has risen steadily over the period except from 2007 to 2009, i.e. one year longer than potential GDP. The difference between actual and potential output was highest in 2007 ($2.07 billion) and lowest in 2012 (-$2.12 billion). Furthermore, the output gap can be expressed as a percentage of potential GDP, and cyclical unemployment as the difference between the actual and the natural rate of unemployment:

[Formula: output gap as a percentage of potential GDP, and cyclical unemployment]
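These calculations can be sketched in a few lines of Python. Note this assumes Okun's law in the form output gap = −c × (u − uₙ) with an illustrative coefficient c = 2; the coefficient and the example numbers are my own, not taken from the assignment:

```python
# Sketch of the potential GDP and output gap calculation via Okun's law.
# Assumption: output gap (% of potential) = -c * (u - u_n), with c = 2.
# Unemployment rates are stated in decimals, as in the post.
U_NATURAL = 0.0534  # estimated natural rate of unemployment

def potential_gdp(actual_gdp, u, c=2.0, u_n=U_NATURAL):
    gap_share = -c * (u - u_n)           # output gap as a share of potential GDP
    return actual_gdp / (1 + gap_share)  # Y = Y_p * (1 + gap)  =>  Y_p = Y / (1 + gap)

def output_gap_pct(actual_gdp, u, c=2.0, u_n=U_NATURAL):
    y_p = potential_gdp(actual_gdp, u, c, u_n)
    return (actual_gdp - y_p) / y_p * 100

# Unemployment above the natural rate yields a recessionary (negative) gap:
# with u = 6.5%, output_gap_pct(100.0, 0.065) comes out at about -2.32.
```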

Taken together, the diagram below highlights the negative relationship between New Zealand’s output gap and cyclical unemployment as stated in the beginning. If unemployment is below its long-term trend, New Zealand’s economy is overheating, meaning that it is running above capacity and experiences a positive output gap. Conversely, if unemployment is above its long-term trend, New Zealand’s economy is running below capacity and experiences a negative output gap.

[Figure: New Zealand's output gap and cyclical unemployment, 2000–2014]
(Source: World Bank, 2016)

The diagram shows that New Zealand is now almost back at full capacity. In 2014, the recessionary output gap decreased to 0.27 percent of potential GDP. This trend is likely to continue in 2015 and 2016 and therefore the output gap is expected to become positive at some point in the near future. However, on 10 March 2016 the Reserve Bank of New Zealand decided to lower the Official Cash Rate (OCR) by 25 basis points to 2.25 percent (RBNZ, 2016). We are now ready to analyse what is likely to happen as a result of this intervention. Let me give you my answer first and I will explain it in depth: The lowering of the OCR will accelerate an overheating of the economy rather than accelerating a recovery from the 2008/09 recession.

In theory there are two possible scenarios: either New Zealand's economy currently suffers from excess capacity, or the economy is at its long-run equilibrium close to its natural rate of output and unemployment. The ultimate effect of a decrease in the OCR will differ depending on these circumstances, as shown in panels (i) and (ii) below. In both scenarios a decrease in the OCR will initially increase autonomous aggregate spending (AE0) at any level of GDP. Intuitively, a lower interest rate induces people to increase spending. More formally, there are two channels: the opportunity cost of spending falls, because savings no longer earn as much interest as before, so households spend more of their money; and households and firms can now borrow at a lower interest rate, which renders some previously unprofitable investments profitable. Overall, the lowering of the OCR induces both consumers and businesses to increase their autonomous spending. This shifts the planned aggregate spending line to AE'planned and the aggregate demand line to D' in both scenarios. Thereafter, however, the outcomes differ.
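The initial AE0 shift can be illustrated with the simplest income-expenditure arithmetic; the MPC and spending numbers below are purely hypothetical, chosen only to show the multiplier at work:

```python
# Hypothetical income-expenditure sketch: a lower OCR raises autonomous
# spending AE0, and equilibrium GDP rises by the multiplier 1 / (1 - MPC).
def equilibrium_gdp(ae0, mpc):
    return ae0 / (1 - mpc)  # solves Y = AE0 + MPC * Y

MPC = 0.5                              # illustrative marginal propensity to consume
before = equilibrium_gdp(100.0, MPC)   # 200.0
after = equilibrium_gdp(105.0, MPC)    # 210.0: a 5-unit rise in AE0 lifts GDP by 10
```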

Another point I want to make in passing is that this is probably a re-run of the early 2000s, when low interest rates set by the Federal Reserve (Fed) triggered a housing boom in the US. A similar thing is happening in New Zealand; Auckland is at the forefront with skyrocketing house prices. But let's get back to the theory for now.

[Figure: panel (i) – decrease in the OCR with excess capacity (income-expenditure model)]

In the first scenario New Zealand’s economy starts below its long-run equilibrium with excess capacity. The short-run aggregate supply curve (SRAS) is perfectly elastic, because firms can readily meet increased demand by scaling up production. This does not put inflationary pressure on the country’s price level P in this scenario, because there is an excess supply of labour (positive unemployment gap) and an excess amount of inventories, for example due to the earlier shortfall of aggregate demand in the global financial crisis (GFC). In this case, the lowering of the OCR pushes the economy back to its long-run equilibrium at point B where actual GDP is equal to potential GDP (Yn). The intervention of the Reserve Bank of New Zealand to lower the OCR by 25 basis points therefore has a positive long-run effect.

[Figure: panel (ii) – decrease in the OCR at long-run equilibrium (income-expenditure model)]

In the second scenario New Zealand's economy is already at its long-run equilibrium (point A). This is the case when there is no output gap and unemployment is at its natural rate. A decrease in the OCR in this scenario triggers a positive output gap coupled with an increase in the price level to P1 in the short run, because the short-run supply curve is not perfectly elastic in this case. As actual output increases above potential output, unemployment falls below its natural rate and labour (at least skilled labour) becomes relatively scarce in the economy. This allows workers to bargain for higher wages. In the long run firms will need to adjust by cutting back on their supply, as the higher price level feeds through into input prices (labour, capital). This moves the economy back to its natural rate of output (Yn) at an even higher price level P3 (point C). Hence the decrease in the OCR causes an overheating of New Zealand's economy in the short run, followed by further inflation and a contraction of GDP in the long run. In this case, the intervention has no long-run effect on GDP; the short-run boost in actual output is not sustainable. What is more, the intervention is likely to harm the economy due to the costs of inflation (distortions, e.g. for home-owners in the housing market; menu costs).

In summary, it can be argued that New Zealand is in the second scenario, because (1) it is close to its natural output level and (2) the unemployment rate has come down to 5.6 percent over the last two years. The lowering of the OCR will therefore do more harm than good. If this intervention had happened in, say, 2011 or 2012, the economy would have benefited from a faster recovery from the excess capacity and excess labour supply left by the GFC. Now, however, the intervention is expected to trigger an inflationary output gap through channels like a housing boom. In the long run this boom will be unsustainable: it is likely to bring inflation, a contraction of output back to its natural level, and a rise in unemployment back to its normal rate.

So that’s me for today. I hope you enjoyed the analysis,


Krugman, P., & Wells, R. (2009). Macroeconomics (2nd ed.). New York, N.Y.: Worth Publishers.

RBNZ (2016, 10 March). Official Cash Rate reduced to 2.25 percent [online]. Wellington: Reserve Bank of New Zealand. Retrieved from:

World Bank (2016). World Development Indicators: New Zealand [Data file]. Retrieved from:


What Makes A Musician? Econometrics

Today my post is about econometrics; in particular, getting a handle on the Logit model. I took it as an opportunity to investigate why some people have come to play a musical instrument and others haven't. To do so I needed a dataset that includes questions about an individual's free-time activities, i.e. a survey that asks respondents whether they play an instrument for leisure. This is why I obtained the National Survey of Culture, Leisure and Sport 2014-2015 from the UK Data Service. The survey is carried out by the Department for Culture, Media and Sport and its partner organisations Sport England, English Heritage and Arts Council England. Since the 2012/13 survey, the study has included longitudinal elements, and for the 2014/15 survey the target was a sample size of 10,000 respondents, split equally between longitudinal and new respondents. The study asks people a large range of questions concerning their cultural activities, the events they participate in, as well as hobbies and sports. It has also recently introduced a section about what people were doing when they were growing up, which will come in handy for constructing a variable on whether a person played an instrument during childhood.

Summary statistics

Let’s start with an overview on the data. The dataset includes 9,817 observations of which 5,480 are female (55.82 percent). Regarding the demographics, the mean age in the sample is 53 years while the median lies at 54 years. The minimum age is 16 (91 observations) and the maximum age is 100 (1 observation). The standard deviation is around 18.53, meaning that around 68 percent of the observations are between 35 and 72 years old. More than half of the respondents are married, 20 percent are single, while around 12 percent are either widowed or divorced. Around 5.5 percent of the respondents are lone parents with live-in children and around 18.4 percent of the respondents have live-in children and currently live with a partner. Almost 32 percent of the respondents have obtained higher education and professional/vocational equivalents as their highest qualification level, meaning that the other 5,409 respondents have obtained qualifications lower than this. In addition, 48.57 percent of the respondents are currently in paid work. The remainder are for example unemployed, students, retired, sick people, people looking after family, or people in training schemes.

In their free time, 914 out of the 9,817 respondents play a musical instrument, equal to 9.31 percent of the sample. In addition, 171 respondents (1.74 percent) have written music in the 12 months preceding the questionnaire, while the remaining 9,646 have not. Around 12 percent of the observations have participated in painting, drawing, printmaking or sculpture in the 12 months preceding the interview. Overwhelmingly, almost 66 percent of the respondents said that they had read for pleasure in the last 12 months, which excludes newspapers, magazines or comics. In terms of music making, 427 of the respondents have sung in a performance or in rehearsal/practice in the last 12 months, equal to 4.35 percent of the sample. 2,206 respondents played a musical instrument, acted, danced or sang when they were growing up, defined as the period from around age 11 to age 15. On the other hand, 2,621 of the respondents did not participate in such activities. Another 4,990 observations were not asked this question in their longitudinal questionnaire, which will later reduce the sample to an effective size of at most 4,827 observations.

The Logit Model

This leaves us with 11 regressors (independent variables) and the dependent dummy variable of playing an instrument (1 = yes) for the Logit model. Nine of the regressors are binary themselves. In addition, age is measured in 4 categories with the 16-24 age category as the reference group. This will show whether people are less likely to play an instrument as they get older, compared to the youngest group. Parental status has 3 categories with no live-in children as the reference category. It assesses whether people with children are less likely to play an instrument due to time constraints, especially when they are lone parents.

Coding Overview

Instrument: 1 if playing an instrument
Gender: 1 if female
Age
  • 16-24 (ref.)
  • 25-44
  • 45-64
  • >65
Marital status: 1 if (de facto) married
Parental status
  • No live-in children (ref.)
  • Lone parent with live-in children
  • Partnered with live-in children
Education: 1 if higher education
Work: 1 if currently in paid work
Written music: 1 if written music in the last 12 months
Read books: 1 if read for pleasure in the last 12 months
Painting: 1 if participated in painting, drawing, printmaking or sculpture in the last 12 months
Singing: 1 if sang in front of an audience or practiced singing in the last 12 months
Childhood instrument: 1 if played a musical instrument, acted, danced or sang when growing up (age 11-15)

Regression Results

The table below summarizes the initial regression results obtained by estimating a Logit model in Stata, where one, two and three asterisks indicate significance at the 10, 5 and 1 percent levels, respectively.


                              Logit Model 1      Logit Model 2
Playing an instrument         OR     SE          OR     SE
female 0.34 0.04 *** 0.34 0.04 ***
age 25-44 0.67 0.14 * 0.66 0.14 *
age 45-64 0.78 0.16 0.77 0.16
age above 65 0.47 0.11 *** 0.48 0.11 ***
married 0.87 0.12 0.89 0.13
lone parent with children 0.70 0.19 0.71 0.19
partnered with children 0.79 0.15 0.80 0.15
higher education 1.52 0.19 *** 1.58 0.20 ***
paid work 0.83 0.11 0.82 0.11
read books 1.37 0.20 ** 1.39 0.20 **
painting 1.81 0.26 *** 1.76 0.26 ***
singing 4.36 0.85 *** 4.22 0.83 ***
write music 29.63 12.13 *** 29.47 12.09 ***
singing*writing music 0.16 0.09 *** 0.16 0.10 ***
instrument as child 5.25 0.75 *** 5.35 0.77 ***
_cons 0.06 0.01 *** 0.06 0.01 ***
Number of obs 3912 3827
LR chi2(15) 608.32 608.39
Prob > chi2 0.00 0.00
Pseudo R2 0.22 0.23
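The survey data itself cannot be redistributed, but the mechanics behind such a table are easy to sketch: fit a logit by maximum likelihood and exponentiate the coefficients to obtain odds ratios. The simulated variables and coefficients below are invented for illustration, not estimates from the survey:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logit fit via Newton-Raphson; returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                       # score vector
        info = (X * (p * (1 - p))[:, None]).T @ X  # Fisher information
        beta += np.linalg.solve(info, grad)
    return beta

# Simulate a binary outcome from two made-up dummies ("female", "child_instr").
rng = np.random.default_rng(0)
n = 5000
female = rng.integers(0, 2, n).astype(float)
child_instr = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), female, child_instr])
true_beta = np.array([-2.0, -1.0, 1.7])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

odds_ratios = np.exp(fit_logit(X, y))
# odds_ratios[1] should land near exp(-1) ~ 0.37, odds_ratios[2] near exp(1.7) ~ 5.5
```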

In my first model 10 regressors turn out to be significant at least at the 10 percent level, of which 8 are highly significant at the 1 percent level. The significant variables and their odds ratios can be interpreted as follows: in the sample, females' odds of playing an instrument are about 66 percent lower than males' (OR 0.34), ceteris paribus. People in the age group 25 to 44 have about 33 percent lower odds of playing an instrument than people aged 16 to 24 (OR 0.67), everything else being equal. People aged above 65 have about 53 percent lower odds relative to people aged 16 to 24 (OR 0.47), while the age group 45 to 64 is not statistically significant. The penalty for the age group 25 to 44 might well be explained by busy schedules due to work and family commitments, while the age group 45 to 64 regains more flexibility, for example once children have left the household. The penalty for people above 65 might derive from deteriorating health, for example eyesight to read music or hearing loss. While being in paid work does not turn out to be significant, higher education does have a positive impact: people with the highest level of education have about 52 percent higher odds of playing an instrument (OR 1.52) relative to their peers with lower education levels.

The next significant set of variables are other leisure activities as predictors for playing an instrument. People who read books for leisure have 37 percent higher odds of playing an instrument than people who do not read for pleasure. People who paint, draw, print-make or sculpt have 81 percent higher odds relative to people who do not. People who sing have more than four times the odds of playing an instrument (OR 4.36), highlighting that this is a strong predictor. This is a rather intuitive finding, as people with a talent or interest for singing are more likely to be interested or talented in playing an instrument as well (musicianship/musical ability). Even more influential are having played an instrument, acted, danced or sung as a child, as well as currently writing music as a hobby. They are very strong predictors for playing an instrument in the model.
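One caveat on wording: an odds ratio acts multiplicatively on the odds, so a ratio below one can at most correspond to a 100 percent reduction in the odds. A one-line helper makes the conversion explicit:

```python
# Convert an odds ratio into a percentage change in the odds of the outcome.
def pct_change_in_odds(odds_ratio):
    return (odds_ratio - 1) * 100

# pct_change_in_odds(0.34) is about -66: females' odds are roughly 66% lower.
# pct_change_in_odds(4.36) is about 336: singers' odds are roughly 336% higher.
```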

Overall this reveals an important trend: people tend to learn an instrument mainly during childhood. They are probably either enrolled by their parents or themselves wish to learn an instrument. They then either continue this hobby in later life or drop it at some point. This seems to be the main path to learn an instrument and in later life there is more something like a demographic ‘penalty’, especially age, which reduces the probability of playing an instrument rather than incentives for adults to acquire new skills and develop their musical ability. What strikes me though is the large gender gap in the dataset after controlling for other demographic influences like parental status.

Goodness of fit

Overall the likelihood-ratio chi-square statistic with 15 degrees of freedom is 608.32 and its p-value is practically zero, meaning that we can reject the intercept-only null model in favour of my model. It can be concluded that my model as a whole is statistically significant. Likewise, the Hosmer-Lemeshow goodness-of-fit test does not reject the hypothesis that the model fits the data, with a chi-square statistic of 6.97 on 8 degrees of freedom and a p-value of 0.5401. One can also take the Count R-squared and the adjusted Count R-squared into account. The former is 0.903 while the latter is 0.121. The adjusted Count R-squared gives the proportion of correct predictions beyond the baseline model of always predicting 0; the estimated model therefore makes 12.1 percent more correct predictions. As the dataset contains a large share of non-musicians of above 90 percent, interpretations of the pseudo R-squared statistics should be treated with caution, and I will therefore focus on what determines playing an instrument at the margin.
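The Count R-squared figures can be reproduced from predicted probabilities; a sketch, assuming a 0.5 classification threshold and the modal outcome as the baseline predictor:

```python
import numpy as np

def count_r2(y, p_hat, threshold=0.5):
    """Count R2 (share correct) and adjusted Count R2 (share beyond baseline)."""
    y = np.asarray(y)
    pred = (np.asarray(p_hat) >= threshold).astype(int)
    correct = (pred == y).mean()
    baseline = max(y.mean(), 1 - y.mean())  # always predict the modal outcome
    adjusted = (correct - baseline) / (1 - baseline)
    return correct, adjusted

# With >90% non-musicians, predicting "no" for everyone already scores ~0.9,
# which is why the adjusted version is the more honest number here.
```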

Measures of Fit Model 1

Log-Lik Intercept Only: -1356.989 Log-Lik Full Model: -1052.83
D(3896): 2105.66 LR(15): 608.318
Prob > LR: 0
McFadden’s R2: 0.224 McFadden’s Adj R2: 0.212
ML (Cox-Snell) R2: 0.144 Cragg-Uhler (Nagelkerke) R2: 0.288
McKelvey & Zavoina’s R2: 0.328 Efron’s R2: 0.212
Variance of y*: 4.893 Variance of error: 3.29
Count R2: 0.903 Adj Count R2: 0.121
AIC: 0.546 AIC*n: 2137.66
BIC: -30121.288 BIC’: -484.241
BIC used by Stata: 2238.009 AIC used by Stata: 2137.66

Misspecification errors

The next step is to test my model for specification errors. It could be that the relationship is not linear, or that I missed a relevant regressor or a linear combination of my regressors. Note that singing and writing music are positively correlated (a person who writes music is more likely to sing as well). Therefore I already included an interaction term for people who both sing and write music in my model to avoid misspecification in this regard. The linktest shows that while the _hat value is statistically significant at the 1 percent level, _hatsq is not significant, with a p-value of 0.325. It can be concluded that my model includes the relevant regressors and is correctly specified.


My model could suffer from multicollinearity, as at least writing music, reading books, painting and singing are similar in nature, i.e. artistic or cultural leisure activities. This could be a source of severe multicollinearity and inflate my standard errors, misleading one to conclude that regressors are insignificant when they in fact need to be included. However, the tolerance of all regressors is greater than 0.1, the rule-of-thumb threshold below which one should be concerned about multicollinearity. Likewise, the variance inflation factors (VIF) are all less than 10, with a mean VIF of 1.77, which is pretty good. The age category dummies have the highest VIFs at 3.54, 3.27 and 2.92, respectively. Therefore my model's standard errors are not significantly inflated.
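The tolerance and VIF checks are easy to replicate by hand: the VIF for regressor j is 1/(1 − R²ⱼ), where R²ⱼ comes from regressing it on the other regressors, and tolerance is simply 1/VIF. A sketch on simulated columns (made up to include one near-collinear pair):

```python
import numpy as np

# VIF for each column of X: regress it on the remaining columns (plus a
# constant) and compute 1 / (1 - R^2). Tolerance is 1 / VIF.
def vif(X):
    n, k = X.shape
    out = []
    for j in range(k):
        target = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, target, rcond=None)
        resid = target - Z @ coef
        r2 = 1 - resid.var() / target.var()
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)              # independent of x1: VIF near 1
x3 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1: large VIF
vifs = vif(np.column_stack([x1, x2, x3]))
```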

Influential observations

To determine whether there are influential observations due to coding errors or other issues as well as plainly legitimate outliers which might be of interest for further study, one can use a plot of the standardized Pearson and Deviance residuals as well as leverage.

The highest outlier, as shown best in the Pearson residual index plot, is observation number 9329. Looking at the data, one can see that this female is in the age range 25 to 44 and a lone parent with live-in children. She is currently in paid work and does painting in her leisure time. However, she did not play an instrument, act, dance or sing during childhood. The model predicts that she does not play an instrument (p=0.04) when in fact she now does. This respondent probably started playing an instrument at a later age (after 15) despite time constraints regarding family and work, and is therefore a notable positive outlier. The lowest outlier is observation number 9321. This person is between age 16 and 24, currently in paid work, and also played an instrument, acted, danced or sang during childhood. This male currently writes music in his leisure time and has also done painting in the last 12 months. The model strongly predicts that this respondent plays an instrument, with a probability of more than 0.93. However, this person in fact does not play an instrument. Another interesting outlier is observation number 7641. This female currently plays an instrument despite being a lone parent with live-in children. She is in the age range of 16 to 24, in paid work, and played an instrument, acted, danced or sang during childhood. The model predicts that this respondent does not play an instrument due to her time constraints (p=0.06) when in fact she does. The last observation I want to discuss is number 6314. This male is in the age range of 45 to 64 and in paid work. He is married but has no live-in children. The respondent sings and paints in his leisure time, reads books, and also played an instrument, acted, danced or sang during childhood. The model predicts that this respondent plays an instrument with a probability of more than 0.90 when in fact he does not.

When using the rule of thumb that a leverage of three times the average leverage (0.044) is a threshold for influential observations, i.e. a value of greater than 0.132, the model includes 85 influential observations and 3,827 non-influential observations. However, when excluding the former in a second model, the significance of the regressors does not change. The odds ratios of some regressors do change to a small extent but the main results are the same.
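The leverage rule of thumb is simple to state in code; the leverage values below are illustrative, not taken from the model:

```python
import numpy as np

# Flag observations whose leverage (diagonal of the hat matrix) exceeds
# three times the average leverage, the rule of thumb applied above
# (in the actual model: threshold 0.132 = 3 * 0.044).
def flag_influential(leverage):
    leverage = np.asarray(leverage, dtype=float)
    return leverage > 3 * leverage.mean()

lev = np.array([0.01, 0.02, 0.20, 0.03])  # mean 0.065 -> threshold 0.195
# flag_influential(lev) flags only the third observation
```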


Today’s exercise was all about using the Logit model in practice while shedding light on what might be a determinant of playing an instrument. First and foremost, it is driven by acquiring the skills during childhood (age 11 to 15) as well as currently writing music. This is followed by singing. There is an age penalty; as people become older they are less likely to play. Females are significantly less likely to play an instrument in the dataset. This might well derive from family and household commitments which females tend to pursue more often than males leaving them less free time to be allocated to their own hobbies, but that is my own interpretation.

Thanks for reading! I hope you enjoyed the exercise,


Department for Culture, Media and Sport. (2016). Taking Part: the National Survey of Culture, Leisure and Sport, 2014-2015; Adult and Child Data. [data collection]. UK Data Service. SN: 7872,

The Price of Postcards around the World

My blog post today is rather short and more of a little fun exercise to gain familiarity with the concept of Purchasing Power Parity and the PPP exchange rate. I researched the price of a postcard as well as of a 100 gram (3.53oz) letter charged by the post offices of ten countries around the world, and then used the PPP exchange rate* to convert the values into international dollars. My initial goal was to see whether there are substantial differences that can be explained by competition, or the lack of it, in the market. However, as the table below shows, each country has very different weight and size thresholds, which makes cross-country comparisons for at least the 100 gram letter somewhat difficult. It skews the picture in particular for New Zealand, Germany and South Africa, which do not have a category for the 100 gram letter itself.

Country max weight max length max width max thickness
Bangladesh 100g n/a n/a n/a
Canada 100g n/a n/a n/a
France 100g n/a n/a 3cm
Germany 500g 35.3cm 25cm 2cm
India 100g n/a n/a n/a
Japan 100g n/a n/a n/a
Norway 100g 35.3cm 25cm 2cm
New Zealand 500g 23.5cm 13cm 6mm
South Africa 1kg 25cm 17.6cm 1cm
UK 100g 24cm 16.5cm 0.5cm
USA 3.5oz 11-1/2inch 6-1/8inch 1/4inch

It is also important to note that Canada's, France's, New Zealand's, and the UK's rates for postcards are identical to their rates for small letters, while countries like Germany or the USA charge different prices for the two. In general, a postcard is mostly taken as any card below 10 grams, but the exact maximum length and width can vary. From personal experience, I know that Germany can be quite strict in this regard, levying surcharges for non-standard sizes if you're unlucky.

Country PPP Exchange rate* Postcard LCU Letter LCU Postcard int.$ Letter int.$
Bangladesh 27.05 2 20 0.07 0.74
Canada 1.23 1.00 1.80 0.81 1.46
France 0.82 0.70 1.40 0.85 1.71
Germany 0.78 0.45 1.45 0.58 1.87
India 16.98 10 25 0.59 1.47
Japan 104.72 52 140 0.50 1.34
New Zealand 1.42 0.80 0.80 0.56 0.56
Norway 9.34 11 21 1.18 2.25
South Africa 5.39 3.60 7.15 0.67 1.33
UK 0.70 0.55 0.55 0.79 0.79
USA 1.00 0.34 1.10 0.34 1.10

After obtaining the prices for the ten countries I went to the World Development Indicators published by the World Bank (2016) and downloaded the PPP conversion factor for each; the latest year available is 2014. One then divides the local-currency price by the PPP conversion factor to obtain the value in international dollars, which can be compared across countries while accounting for differences in purchasing power.
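The conversion itself is one division per country; replicating a few rows of the table above:

```python
# International-dollar price = local-currency price / PPP conversion factor
# (LCU per international $). Values taken from the table above.
ppp_factor = {"Norway": 9.34, "USA": 1.00, "New Zealand": 1.42}
letter_lcu = {"Norway": 21.0, "USA": 1.10, "New Zealand": 0.80}

letter_intl = {c: round(letter_lcu[c] / ppp_factor[c], 2) for c in ppp_factor}
# letter_intl == {"Norway": 2.25, "USA": 1.1, "New Zealand": 0.56}
```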

In both categories Norway has the highest prices. A domestic 100 gram letter, for example, costs twice as much in Norway as in the USA. The UK and New Zealand benefit in this category in particular, because their post offices (Royal Mail and NZ Post) have a single band for letters up to 100 grams and 500 grams, respectively. They do not differentiate prices within that band, while countries like France or Germany have their first threshold, i.e. price increase, after 20 grams already. This is why New Zealand actually ends up cheapest in my ranking here. So, the fun fact: if you could really fit 500 grams into a standard New Zealand letter, you would get the greatest value for your money relative to all the other countries.

[Chart: price of a domestic 100 gram letter in international dollars]

In terms of postcards, the picture is less skewed as there is less variation in size and weight. The postcard is a less heterogeneous product, as the postcard mail service tends to be more standardized. It is therefore probably better for comparison if one really wanted to study postal services and their effectiveness across countries. Postcards are cheapest in Bangladesh at only 7 cents. France, Canada and the UK rank in the upper part, mainly because they charge the same prices for small letters and postcards, putting their services at the more expensive end. This could arguably be a deterrent to writing postcards, inducing people to maximize the utility gained from the postal service by writing heavier letters rather than lightweight postcards. Germany's postcards are actually quite cheap and comparable to India's prices when using the PPP exchange rate method. This does not take the quality of service into account, though. And while the USA's postal service (USPS) is still a monopoly, it actually performs pretty well in my postcard comparison, coming second after Bangladesh if one just looks at the prices.

[Chart: price of a domestic postcard in international dollars]

As highlighted before, these statistics are more a little exercise and are of limited value because the postal service is not a homogeneous good and countries do vary greatly in their service offerings. What is more, there will be cross-country variation in the quality of service. Thanks for reading!



*PPP conversion factor, GDP (LCU per international $)

World Bank (2016). PPP conversion factor, GDP (LCU per international $) [Data file]. Retrieved from:












Why are we missing out on Environmental Institutions?

I worked on an assignment for my class Growth and Development Economics today which discusses the link between institutional development and economic growth in the context of South Africa. One of the core readings is the NBER working paper Institutions as the Fundamental Cause of Long-Run Growth by Acemoglu, Johnson and Robinson (2004). I scribbled down some little diagrams to grasp their argument about the link between the distribution of resources, political institutions and economic prosperity when I realised that the same chain of arguments could potentially be adapted to reason why we need well-defined property rights over natural resources and the establishment of an “environmental market” (I can’t think of a good word just now).

Acemoglu, Johnson and Robinson argue in their paper that there are two main state variables governing the system, i.e. determining long-run growth. The first is 'political institutions', which determine the distribution of de jure (institutional) political power. The second is the 'distribution of resources', which determines de facto political power. There is also a variable allowing for the possibility of collective action, if groups in society can coordinate to act as a collective; however, it is considerably weakened by the free-rider problem, which makes it hard for groups to mobilize the public in their interest. In sum, these two state variables are sufficient for determining all other variables in the system, including economic performance as the bottom line. The authors see it as a "natural hierarchy of institutions" (p.5), with political institutions at the top, which influence equilibrium economic institutions in the middle, which in turn influence economic outcomes at the bottom – through both direct and indirect channels (if I understand it correctly). A fundamental part of their theory is endogeneity, which makes the model a relatively complex social system, and the view that society is the backbone: society both consciously chooses its economic institutions and the distribution of political power.

The theory shows that political and economic institutions shape people's incentives and enforce rules and regulations in society. It implies that property rights and the presence of efficient markets are a fundamental prerequisite for long-run economic growth. Therefore the question for my thought experiment today is: can this insight help us improve the current state of environmental protection, i.e. internalize the costs imposed by negative externalities like pollution, environmental degradation or traffic jams? Can it help us develop a framework for green, sustainable growth in the long run?


I think about it in a similar manner to the argument about political institutions; so start by replacing political institutions with environmental institutions. Then there is a new system, let’s call it ‘Market Environmentalism’, at the top of the natural hierarchy. In the top system society determines the choices through de facto and de jure (institutional) environmental power. If society can coordinate to act as a collective, e.g. through demonstrations, environmentalist groups or petitions, it can create demand for strong environmental institutions. On the other hand, there is the aspect of how resources (wealth) are distributed in that society. If they are held by a small share of the population that can exert significant power, or if companies have close ties with politics, then these groups can lobby politicians in favour of their individual interests or business. While collective action tends to establish demand for environmental institutions, lobbying probably tends to work against it. I called this first system in the diagram ‘Market Environmentalism’ because it reminded me of the second theme, ‘environment as a property’ and ‘free market environmentalism’, in my blog post 2016 vs 1996 on 23 April.

In the top system the environmental institutions set the framework for the ‘green’ part of the economy. They establish the rules for the natural resource market/environment and create property rights over environmental resources, leading to an efficient ‘environmental market’. This is then transmitted into the equilibrium market economy, where it may influence society’s choice of economic institutions, shifting the dynamic equilibrium. Ultimately the bottom line is economic performance and environmental sustainability achieved together. Once set off, this circle may be virtuous in nature, stabilizing the upper systems in the natural hierarchy.

One might think of a conventional economy as lacking the upper part of the diagram due to its lack of environmental institutions. A solution could be to

  • enable collective action to establish sufficient demand for environmental institutions
  • achieve a more equal distribution of resources to keep individual interests from blocking the establishment of environmental institutions
  • establish environmental institutions with the help of external actors in the international environment.

I know that the diagram is somewhat simplistic but it is a good start for critical thinking! Thanks for reading!


Acemoglu, D., Johnson, S. and Robinson, J. (2004). Institutions as the Fundamental Cause of Long-Run Growth. NBER Working Paper No. 10481. Retrieved from:

Is Caring for the Environment a Luxury Good?

I finally got hold of a library copy of Naked Economics: Undressing the Dismal Science by Charles Wheelan. In the first chapter The Power of Markets Wheelan notes that

“concern for the environment is a luxury good” (2012, p.7).

The argument, if I get it right, is the following: people with higher incomes can basically afford to care, whereas poorer people have a smaller fraction of their incomes available to spend on environmentally conscious goods, as more of their income goes towards necessities. Intuitively I agree with Wheelan. Environmentalism carries a hefty price tag in today’s economy, which is counterproductive to promoting sustainability. If one wants to cater for the mass market, it is all about prices; prices create the incentive to buy, not a product’s ‘environmental friendliness’ rating. If sustainable products become cheaper than the conventional alternatives, this induces people to change their consumption patterns. They will substitute the more sustainable alternative for conventional products and will thereby also make society better off in the long run (positive externality).
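In economic terms, a luxury good is one whose income elasticity of demand exceeds one: spending on it grows more than proportionally with income. A minimal sketch with made-up numbers (nothing here comes from Wheelan’s book):

```python
import math

def income_elasticity(q1, q2, y1, y2):
    """Log (arc) income elasticity: % change in quantity / % change in income."""
    return math.log(q2 / q1) / math.log(y2 / y1)

# Hypothetical household: income doubles, spending on green goods triples.
e = income_elasticity(q1=100, q2=300, y1=30000, y2=60000)
print(round(e, 2))  # about 1.58 -> greater than 1, so a luxury good
```

If the elasticity came out below one, green goods would behave like a necessity instead, and Wheelan’s quip would lose its bite.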

But here is today’s question: if a sense of environmentalism does increase with income, does this also hold for countries? Do countries care more about the environment as they get richer? It is not an easy question, but I’ll try to come up with some data for a short post today. I went to the OECD’s Green Growth Indicators and downloaded the following indicators into STATA for the 34 OECD countries for the period 2000 to 2012:

  • Real GDP per capita
  • Development of environment-related technologies, inventions per capita
  • Municipal waste generated, kg per capita
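
The actual analysis below was run in STATA; as a sketch, the same kind of Pearson correlation could be computed in Python like this. The numbers are illustrative stand-ins, not the OECD data:

```python
import numpy as np

# Illustrative mini-dataset standing in for the OECD indicators:
# real GDP per capita (US$) vs environment-related inventions per capita.
gdp_pc        = np.array([28000, 35000, 42000, 45000, 50000, 60000], dtype=float)
inventions_pc = np.array([0.8,   1.2,   2.0,   3.1,   2.5,   2.3])

# Pearson correlation coefficient; the post reports 0.48 for the real data.
r = np.corrcoef(gdp_pc, inventions_pc)[0, 1]
print(round(r, 2))
```

The same call on the GDP and waste series would give the second correlation discussed further down.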

Firstly, I want to test whether there is a positive correlation between the countries’ income levels and their focus on research and development in cleaner technologies. The correlation turns out to be 0.48 for my dataset, and here is how the story looks for the latest year available:

Real GDP per capita vs Inventions per capita 2012

There is a positive relationship between inventions per capita and real GDP per capita in the data. However, there are notable outliers worth looking at. Firstly, there is Luxembourg, which is isolated from the rest of the OECD countries. It could be a negative outlier given its high income level, i.e. the predicted inventions per capita might be higher than observed. Then there is a cluster of six countries that do not fit the picture: Austria, Denmark, Finland, Japan, Korea and Germany. Given their income levels they produce more environment-related inventions per capita than predicted (positive outliers). For the remaining countries the variability in outcomes increases with higher income levels (heteroskedasticity). However, the initial hypothesis seems to hold at a crude level: as income levels rise, countries tend to develop more environment-related technologies, which I take as a proxy for ‘caring for the environment’ here.
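The ‘outlier’ reasoning above is just the residual from a fitted line: a country whose observed inventions fall well below (or above) the level its income predicts. A hedged sketch with invented numbers, not the OECD data:

```python
import numpy as np

# Invented data for five countries: inventions rise with income overall,
# but the richest country invents less than its income predicts
# (the Luxembourg-style case discussed above).
gdp_pc     = np.array([30000.0, 40000.0, 50000.0, 60000.0, 100000.0])
inventions = np.array([1.0, 1.5, 2.1, 2.4, 2.0])

# Least-squares line through the scatter, then the residuals around it.
slope, intercept = np.polyfit(gdp_pc, inventions, 1)
residuals = inventions - (slope * gdp_pc + intercept)

# Negative residual = fewer inventions than predicted (negative outlier);
# positive residual = more than predicted (positive outlier).
print(np.round(residuals, 2))
```

Here the richest country gets a clearly negative residual even though the fitted slope is positive, which is exactly the pattern described for Luxembourg.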

Real GDP per capita vs Waste per capita 2012

Before finishing off, I want to picture a second story though. People not only care more for the environment as they grow richer, they also consume more, i.e. they become more consumption-oriented. So while richer people can afford to care for the environment, they do not necessarily choose to do so. At country level, this might well be depicted in the diagram above. Higher incomes are significantly positively correlated with more waste per capita (0.64) in the dataset. The innovative outliers Japan and Korea also produce less waste given their income levels, compared to countries like New Zealand or Israel in their income group; the innovative outlier Denmark, by contrast, is actually at the high end of the spectrum in terms of waste, together with Switzerland and the USA.

In sum, I would add to the quote from the introduction that while caring for the environment might well be a luxury good, there is a second aspect to the discussion at country level: whether a country that can afford to spend more on these sustainable goods actively chooses to do so, and whether it creates the right incentives for its population. If not, then the relationship between income and spending on environmentally friendly goods, technologies etc. breaks down in practice. Furthermore, there are noteworthy outliers whose innovation in environmental technologies has decoupled from their income levels.

Thanks for reading!


OECD (2016). Green Growth Indicators Database – Green Growth Indicators (Last updated: April 2016). [Data file] Retrieved from OECD.Stat database:

Wheelan, C. (2012). Naked economics: undressing the dismal science. New York, N.Y.: W.W. Norton & Company.

2016 vs 1996

The last chapter of Paul Krugman’s book The Accidental Theorist (1998) is an interesting one. The essay is called Looking Backward and was originally published in the New York Times Magazine on 29 September 1996. The main theme of the essay is a proposition of five economic trends of 1996 that we should have expected but failed to anticipate. According to Krugman (pp.198-202) these are:

  1. Soaring resource prices
  2. The environment as a property 
  3. The rebirth of the big city
  4. The devaluation of higher education
  5. The celebrity society

The second and third ones probably need some explanation. The environment as a property describes the trend to establish free and, first and foremost, efficient markets around natural resources, with clearly established property rights, to tackle environmental issues like air pollution. This trend is driven by the realisation that our environment has limits and natural resources are scarce. Putting a price tag on these helps to reduce inefficient overuse (free-rider problem, tragedy of the commons) and makes people internalize social costs. The rebirth of the big city describes the revival of urban living. It is based on the observation that low-skilled jobs are mainly located in rural areas while high-skilled jobs tend to cluster in cities due to the need for face-to-face interaction etc. As low-skilled jobs vanished, people flocked back into the cities they had originally left for those low-skilled jobs.

What I want to blog about today is to ask the following: Do these observations hold for the year 2016 as well as they did twenty years ago?

Let’s start with resource prices. At the time Krugman wrote his essay, nominal crude oil prices had risen to $22.26/bbl (September 1996) after a drop to $13.77/bbl in December 1993. Looking more closely at the monthly commodity price indices compiled by the World Bank, one can see that over the period from 1993 to 1996 commodity prices for food, energy and raw materials rose. 1997 and 1998 marked two years of decline before commodity prices took off to unprecedented levels. Recently, however, commodity prices have fallen remarkably. Most notably, the World Bank’s energy index fell from levels in the 130s in 2014 to around 40.5 in January 2016 (2010=100). This is due to the large drop in crude oil prices, driven by both supply and demand factors, especially the increase in oil production by OPEC countries triggering an oil glut and their failure to coordinate lower production in response to supply exceeding demand (IMF, 2016). The raw materials index also fell after a spike in early 2011, and the food index likewise fell after mid-2012.

commodity prices.png
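The size of the energy-index decline quoted above is easy to check, taking 130 for 2014 as an approximation:

```python
# Percent change in the World Bank energy index (2010 = 100),
# using the approximate values quoted in the text.
start, end = 130.0, 40.5
pct_change = (end - start) / start * 100
print(round(pct_change, 1))  # -68.8, i.e. the index lost roughly two thirds of its value
```
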

What are the expectations for 2016 though? If one takes the Dow Jones Commodity Index as an indicator, commodity prices are likely to recover, as the index has risen more than 13 percent this year (DiChristopher, 2016). The benchmark copper price on the London Metal Exchange as well as iron ore prices have also picked up again, and there are clear signals that we have passed the trough and markets are turning around. ANZ Research, for example, predicts a 17 percent price rise in nickel, 13 percent in copper and 7 percent in zinc over the next 12 months, and bets that the short-term winners will be sugar, nickel, corn, palladium and thermal coal (Fensom, 2016). Scotiabank likewise predicts the beginning of a price rally in 2016, driven by a weaker U.S. dollar and fewer concerns over China (Crawford, 2016). But another important point remains, which Krugman already made in his essay in 1996: natural resources are ultimately scarce and we are far past the era of low natural resource prices. This is more or less the basic law of supply and demand from ECON101. With ultimately limited supply but increasing world demand, prices will spiral upwards sooner or later, and the people willing to pay the most will win the bidding.

The second theme concerns the promotion of property rights in natural resource markets. In fact, notable research on this dates back to 1991, when Anderson and Leal published their book Free Market Environmentalism (FME). Anderson (2007) notes that there have been FME success stories concerning land and water markets. However, there has been more of a trend towards government regulation and active government intervention with the goal of fighting climate change, rather than the establishment of free markets to let people internalize the costs of environmental degradation (Downey, 2016). The US Environmental Protection Agency is an example of active government intervention to cure market failure and environmental problems, and is often criticized for its detailed regulation without improving America’s environment (Smith, 2011). On the other hand, there is evidence for the free market approach in the Kyoto Protocol, signed by 192 parties. It does include natural resource market mechanisms, in particular carbon emissions trading to lower greenhouse gas emissions. It has had mixed results in terms of commitment to specific targets (USA, China, India), but the EU Emissions Trading System (ETS) does seem to work with its ‘cap and trade’ scheme, which is in principle the least-cost method of reducing emissions (Dawson, 2011). In sum, the environment as a property trend does not seem to describe our state of environmentalism today accurately, as we have moved more towards detailed regulation rather than a free market approach, although there are some well-working counter-examples like the ETS.

The third trend concerns urbanisation. This holds true for today as well as it did for 1996 and – although Krugman probably focused on America – this is a global trend. According to the UNFPA (2016) the world is now experiencing the highest urban growth in history, and urban areas are home to half of the world’s population. Among the most urbanized regions are North America, Latin America and the Caribbean. What is more, there is a trend towards megacities, i.e. cities with more than 10 million inhabitants, of which there are currently 28 (United Nations, 2014). There also continues to be a correlation between a country’s income level and its urbanisation level, i.e. increasing urbanisation is generally associated with increasing economic prosperity. Hence the hypothesis still seems to hold that high-skilled (high-paying) jobs are clustered in population-dense areas.

The fourth observation surprised me a little: the devaluation of higher education. Krugman notes in his essay that the pay-off of higher education has shrunk. Other post-secondary training has overtaken university degrees, requiring less time for job training and preparation and therefore carrying lower opportunity costs. He also notes that today’s elite universities resemble their nineteenth-century predecessors: more a social institution than a scholarly one. Supporting evidence, for North America at least, comes from the gross tertiary enrolment ratio (World Bank, 2016b). Over the period from 1996 to 2000 enrolment did drop by 11.5 percent. However, contrary to the prediction of its declining value, enrolment thereafter picked up again. Especially during the Global Financial Crisis and its aftermath enrolment increased considerably. This likely stems from the fact that during recessions the opportunity costs of education are lower, as fewer jobs tend to be available. Similar to the experience in 1996, one can now see a renewed fall in enrolment in North America. The statistics for the US in particular show a peak of around 96 percent enrolment in 2011, which was back down to 89 percent by 2014. This drop might well be explained by improving labour market prospects. The diagram also reveals trends among different income groups of countries, all of which were on the rise until recently, indicating the increasing importance of tertiary education in building human capital even in high-income regions like the EU. The world trend in tertiary enrolment has now surpassed 30 percent. However, since the world economy’s recovery, enrolment seems to have stagnated (lower middle income) or fallen (low income, high income), with the exception being upper middle income countries.

tertiary enrolment.png

This analysis does not answer the question completely though, i.e. whether higher education is losing value today. There is evidence that higher education has transformed over recent years due to fierce competition with other post-secondary schooling. Buller (2014), for example, notes that there is a re-orientation towards job training and career preparation and a shift away from pure research to applied research. Furthermore, he argues that American higher education, which used to be a preparation both for career and for life, has now lost sight of the preparation-for-life goal in favour of job training. This is a bit different from what I understand Krugman observed in 1996. Universities do not seem to be turning back to the old days of being merely social institutions, but rather lean more and more towards job training and career preparation. Evidence for this could be the rise of MBAs and other professional degrees, which are very much based on job experience and learning in an applied setting. What is probably more worrisome than a devaluation of higher education in 2016 is credential inflation: rather than facing fewer graduates on the job market, employers see more and more high-skilled graduate applications and can be selective (again the law of supply and demand), so that at some point the master’s degree becomes the new bachelor’s (Pappano, 2011). In sum, I would argue that higher education has revalued over the last decade even if enrolment ratios are currently decreasing slightly. Higher education has increasingly become a source of job training, and the higher education landscape of 2016 has adapted to lessen the opportunity cost of gaining a degree (short courses, distance and online learning etc.) as demand shifted towards job training. It remains questionable whether these developments in higher education are overall a good or a bad thing, but they do seem different from 1996.

Lastly, the celebrity society theme is more or less self-explanatory. It is at least as important as in 1996, with social media, television and other means as its drivers. Today fame itself pays rather than the content behind it, because a celebrity creates an intangible brand that can be marketed without the possibility of “copyright infringement”. Fame has become a true asset in today’s economy with increasing value; even more so than in 1996.

I hope you enjoyed today’s post as a sort of comparison of what changed and what actually didn’t. Thanks for reading!


Anderson, T. (2007). Free Market Environmentalism. PERC Report, 25(1). Retrieved from:

Buller, J.L. (2014). The Two Cultures of Higher Education in the Twenty-First Century and Their Impact on Academic Freedom [pdf]. Journal of Academic Freedom, Volume 5. Retrieved from:

Crawford, E. (2016). Commodity prices set for significant rebound in 2016: Scotiabank. Retrieved from:

Dawson, G. (2011). Free Markets, Property Rights and Climate Change: How to Privatize Climate Policy. Libertarian Papers, 3(10). Retrieved from:

DiChristopher, T. (2016, 21 April). Rising commodity prices could spell trouble for Fed: Boockvar [online]. CNBC. Retrieved from:

Downey, H. (2016). TBT: A Free Market Earth Day. Retrieved from:

Fensom, A. (2016, 28 March). Commodity Prices: The Cycle Turns? The Diplomat. Retrieved from:

IMF (2016). Commodity Special Feature from World Economic Outlook April 2016 [pdf]. Retrieved from:

Krugman, P. (1998). The Accidental Theorist: And Other Dispatches from the Dismal Science. New York, N.Y.: W.W. Norton & Company.

Pappano, L. (2011, 22 July). The Master’s as the New Bachelor’s [online]. The New York Times. Retrieved from:

Smith, F.L. (2011). A Free Market Environmental Program. Retrieved from:

UNFPA (2016). Urbanization. Retrieved from:

United Nations, Department of Economic and Social Affairs, Population Division (2014). World Urbanization Prospects: The 2014 Revision, Highlights (ST/ESA/SER.A/352).

World Bank (2016a). Commodity Markets Monthly Data [data file]. Retrieved from:

World Bank (2016b). World Development Indicators [data file]. Retrieved from World Development Indicators (WDI) database: