Okun’s law

Professor House mentioned the “statistical pattern” known as Okun’s law in class. From the “Lecture_2_handouts”: \[ \% \Delta \text{GDP} \approx 3 \% - 2 \% \times \Delta \text{unemployment rate}. \] Plotting this relationship involves two steps. First, we grab the data from FRED. Second, we regress the percent change in GDP on the change in the unemployment rate to estimate the two parameters that define the linear relationship above. The R code replicates figure 10-4 in Mankiw, 9e, p. 286.
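As a quick back-of-the-envelope check of the handout’s formula (the one-point rise in unemployment here is hypothetical), a one-percentage-point increase in the unemployment rate implies real GDP growth of about 1 percent:

# Hypothetical example: the unemployment rate rises by 1 percentage point
3 - 2*1
## [1] 1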

Getting the data from FRED

The first step is getting the data from FRED and creating annual averages of the data.

library(tidyverse)
library(lubridate)
library(tidyquant)
dat <- tq_get(c("UNRATE", "GDPC1"), 
              get = "economic.data",
              from = "1940-01-01")
# "Spread" the data
dat <- dat %>% 
  spread(key = symbol, value = price) %>% 
  mutate(year = year(date)) %>% 
  select(-date)
# Get annual averages
by_year <- group_by(dat, year)
dat <- summarise_all(by_year, mean, na.rm = TRUE)
head(dat)
## # A tibble: 6 x 3
##    year GDPC1 UNRATE
##   <dbl> <dbl>  <dbl>
## 1 1947. 1939. NaN   
## 2 1948. 2020.   3.75
## 3 1949. 2009.   6.05
## 4 1950. 2184.   5.21
## 5 1951. 2360.   3.28
## 6 1952. 2456.   3.02
tail(dat)
## # A tibble: 6 x 3
##    year  GDPC1 UNRATE
##   <dbl>  <dbl>  <dbl>
## 1 2013. 15612.   7.36
## 2 2014. 16013.   6.18
## 3 2015. 16472.   5.27
## 4 2016. 16716.   4.87
## 5 2017. 17096.   4.35
## 6 2018.   NaN    4.10

We’re interested in the percent change in real GDP and the difference in the unemployment rates:

# Calculate percent change in rGDP and the difference in the unemployment rate
dat <- dat %>% 
  mutate(gdp = 100*(GDPC1 / lag(GDPC1) - 1),
         ur = UNRATE - lag(UNRATE))

Estimating the pattern

At this point we’re ready to estimate the relationship between the percentage change in real GDP and the change in the unemployment rate posited by Okun:

# Run the regression
okun <- lm(gdp ~ ur, data = dat)
print(okun)
## 
## Call:
## lm(formula = gdp ~ ur, data = dat)
## 
## Coefficients:
## (Intercept)           ur  
##       3.186       -1.751

With rounding, we’re at the coefficients mentioned in class, which is a minor victory.
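If you want that check in code, one option (a small aside using base R’s coef()) is to round the estimated coefficients to whole numbers, which recovers the handout’s 3 and -2:

# Round the estimated coefficients from the fitted model
round(coef(okun))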

The last step is plotting the data. To do so, we need to add the regression line:

xur <- seq(-2, 3)
reg_line <- okun$coefficients[[1]] + okun$coefficients[[2]]*xur
fit <- tibble(xur, reg_line)

ggplot(data = dat) +
  geom_point(mapping = aes(x = ur, y = gdp)) +
  geom_line(mapping = aes(x = xur, y = reg_line), 
            data = fit,
            color = "blue", 
            size = 1.2) +
  xlab("Change in the unemployment rate") + 
  ylab("Percentage change in real GDP") +
  ggtitle("Okun's law")

Just FYI: you can add a regression line in R without running the regression yourself by using built-in commands. I did these calculations by hand just for practice.
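A minimal sketch of that approach, reusing dat from above, is to let ggplot2 fit and draw the line with geom_smooth():

ggplot(data = dat, mapping = aes(x = ur, y = gdp)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, color = "blue") +  # fits gdp ~ ur behind the scenes
  xlab("Change in the unemployment rate") + 
  ylab("Percentage change in real GDP") +
  ggtitle("Okun's law")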

Real interest rates from homework 1

Read in the data that is provided on Canvas:

library(tidyverse)
library(lubridate)
dat <- readxl::read_excel("../PS1-data_2018.xlsx")
head(dat)
## # A tibble: 6 x 4
##    Year Month  CPIU FEDFUNDS
##   <dbl> <dbl> <dbl>    <dbl>
## 1 1960.    1.  29.4     3.99
## 2 1960.    2.  29.4     3.97
## 3 1960.    3.  29.4     3.84
## 4 1960.    4.  29.5     3.92
## 5 1960.    5.  29.6     3.85
## 6 1960.    6.  29.6     3.32
tail(dat)
## # A tibble: 6 x 4
##    Year Month  CPIU FEDFUNDS
##   <dbl> <dbl> <dbl>    <dbl>
## 1 2017.    6.  244.     1.04
## 2 2017.    7.  244.     1.15
## 3 2017.    8.  245.     1.16
## 4 2017.    9.  246.     1.15
## 5 2017.   10.  247.     1.15
## 6 2017.   11.  248.     1.16

Next we want to compute the inflation rate as a percent change at an annual rate. The formula for computing a compounded annual rate of change is \[ \left[ \left( \frac{x_t}{x_{t-1}} \right)^{\text{number obs per year}} - 1 \right] \times 100, \] where \(x_t\) is the value of the series at time \(t\). For monthly data the number of observations per year is \(12\). A useful overview of similar transformations is available at the FRED website. To accomplish this in R we can use the lag command:

dat <- dat %>% 
  mutate(LCPIU = lag(CPIU))
head(dat)
## # A tibble: 6 x 5
##    Year Month  CPIU FEDFUNDS LCPIU
##   <dbl> <dbl> <dbl>    <dbl> <dbl>
## 1 1960.    1.  29.4     3.99  NA  
## 2 1960.    2.  29.4     3.97  29.4
## 3 1960.    3.  29.4     3.84  29.4
## 4 1960.    4.  29.5     3.92  29.4
## 5 1960.    5.  29.6     3.85  29.5
## 6 1960.    6.  29.6     3.32  29.6
tail(dat)
## # A tibble: 6 x 5
##    Year Month  CPIU FEDFUNDS LCPIU
##   <dbl> <dbl> <dbl>    <dbl> <dbl>
## 1 2017.    6.  244.     1.04  244.
## 2 2017.    7.  244.     1.15  244.
## 3 2017.    8.  245.     1.16  244.
## 4 2017.    9.  246.     1.15  245.
## 5 2017.   10.  247.     1.15  246.
## 6 2017.   11.  248.     1.16  247.
dat <- dat %>% 
  mutate(infl = 100*((CPIU / LCPIU)^12 - 1),
         datem = ymd(paste0(Year, "-", Month, "-01")))
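As a quick sanity check of the annualization formula (the CPI values here are made up for illustration), a one-month increase from 244 to 245 annualizes to roughly 5 percent:

# Hypothetical one-month CPI change, compounded to an annual rate
100*((245 / 244)^12 - 1)  # about 5.03 percent per year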

dat <- dat %>%
  filter(datem <= ymd("2015-11-01"),
         datem >= ymd("1960-01-01"))
ggplot(data = dat) +
  geom_line(mapping = aes(x = datem, y = infl)) +
  xlab("") + ylab("Percent per year") + ggtitle("Inflation")

Now we can compute the ex post real interest rate (see the post here). The Fisher equation is better expressed as \(r_t = i_t - \mathbb{E}_t \pi_{t+1}\). At time \(t\) we don’t know expected inflation, and there are a couple of ways you could imagine proceeding. The ex post real interest rate is constructed by using realized inflation in place of expected inflation: \[ r_t = \text{Fed Funds Rate}_t - \text{inflation}_{t+1}. \]

dat <- dat %>% 
  mutate(r = FEDFUNDS - lead(infl)) %>% 
  filter(!is.na(r))

And plot the resulting series:

ggplot(data = dat) +
  geom_line(mapping = aes(x = datem, y = r)) +
  xlab("") + ylab("Percent per year") + ggtitle("Real interest rates")

avg <- mean(dat$r, na.rm = TRUE)

The real interest rate in 2015-10-01 is -1.7170486. The average real interest rate over the sample is 1.3743401.

Ideally you’d like to list those numbers using R code. Here’s the code I used: max(dat$datem) and dat$r[[which(dat$datem == max(dat$datem))]]. Putting an r in front of those commands inside inline code lists the output within the R Markdown document, resulting in the output you see above.
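For example, a sentence written like this in the R Markdown source (reusing avg, which is computed above) renders with the number filled in:

The average real interest rate over the sample is `r round(avg, 2)` percent.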

We can add recession shading and the sample average:

recessions_df <- read.table(textConnection(
"Peak, Trough
1857-06-01, 1858-12-01
1860-10-01, 1861-06-01
1865-04-01, 1867-12-01
1869-06-01, 1870-12-01
1873-10-01, 1879-03-01
1882-03-01, 1885-05-01
1887-03-01, 1888-04-01
1890-07-01, 1891-05-01
1893-01-01, 1894-06-01
1895-12-01, 1897-06-01
1899-06-01, 1900-12-01
1902-09-01, 1904-08-01
1907-05-01, 1908-06-01
1910-01-01, 1912-01-01
1913-01-01, 1914-12-01
1918-08-01, 1919-03-01
1920-01-01, 1921-07-01
1923-05-01, 1924-07-01
1926-10-01, 1927-11-01
1929-08-01, 1933-03-01
1937-05-01, 1938-06-01
1945-02-01, 1945-10-01
1948-11-01, 1949-10-01
1953-07-01, 1954-05-01
1957-08-01, 1958-04-01
1960-04-01, 1961-02-01
1969-12-01, 1970-11-01
1973-11-01, 1975-03-01
1980-01-01, 1980-07-01
1981-07-01, 1982-11-01
1990-07-01, 1991-03-01
2001-03-01, 2001-11-01
2007-12-01, 2009-06-01"), sep=',', 
colClasses=c('Date', 'Date'), 
header=TRUE)

recessions_trim <- subset(recessions_df, Peak >= min(dat$datem))
ggplot(data = dat) +
  geom_line(mapping = aes(x = datem, y = r)) +
  geom_hline(yintercept = avg, color = "red") +
  geom_rect(data = recessions_trim, 
            aes(xmin = Peak, xmax = Trough,
                ymin = -Inf, ymax = +Inf),
            fill = "pink", alpha = 0.3) +
  xlab("") + ylab("Percent per year") + 
  ggtitle("Real interest rates", subtitle = "with recession shading")

The red line shows the sample average.

Consumption around the Great Recession

You explored the motive for smoothing consumption in class. So you can imagine the shock economists experienced when they saw this picture as the Great Recession unfolded:

# Real Personal Consumption Expenditures (PCECC96)
# Observation: Q3 2017: 11,916.576
# Updated: Dec 21, 2017
# Units: Billions of Chained 2009 Dollars, Seasonally Adjusted Annual Rate
# Frequency: Quarterly

dat <- tq_get("PCECC96", get = "economic.data", 
              from = "2005-01-01",
              to = "2012-01-01")

recessions_trim <- subset(recessions_df, Peak >= min(dat$date))
ggplot(data = dat) +
  geom_line(mapping = aes(x = date, y = log(price))) +
  geom_rect(data = recessions_trim, 
            aes(xmin = Peak, xmax = Trough,
                ymin = -Inf, ymax = +Inf),
            fill = "pink", alpha = 0.4) +
  xlab("") + ylab("Log(Billions of real 2009 dollars)") + 
  ggtitle("Real personal consumption expenditures")

Solow convergence

Recall the Solow model described in the section 6 notes: \[ \begin{align*} K_{t+1} &= (1-\delta)K_t + I_t \\ &= (1-\delta)K_t + sY_t \\ &= (1-\delta)K_t + sA^{1-\alpha}K_t^{\alpha}{N_t}^{1-\alpha}. \end{align*} \] Consider an economy with no population growth: \(N_t = 1\) for all \(t\). The steady-state level of capital is defined by \(K_t = K_{t+1} = K_{\star}\): \[ \begin{align*} K_{\star} &= (1-\delta)K_{\star} + sA^{1-\alpha}K_{\star}^{\alpha} \\ \therefore 1 &= (1-\delta) + sA^{1-\alpha}K_{\star}^{\alpha - 1} \\ \therefore K_{\star}^{\alpha - 1} &= \frac{\delta}{s A^{1-\alpha}} \end{align*} \] Simplifying the latter yields

\[ \begin{align*} K_{\star} &= \left( \frac{\delta}{s A^{1-\alpha}} \right)^{\frac{1}{\alpha - 1}} \\ \therefore K_{\star} &= \left( \frac{s A^{1-\alpha}}{\delta} \right)^{\frac{1}{1-\alpha}}. \end{align*} \] Or \[ K_{\star} = A \left( \frac{s}{\delta} \right)^{\frac{1}{1-\alpha}}. \]

Define parameters:

s <- 0.15
del <- 0.1
alph <- 0.35
N <- 1.0
A <- 1.0

The steady-state level of capital is

Kstar <- A*(s / del)^(1 / (1-alph))
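As a quick check, plugging Kstar back into the law of motion should return Kstar itself (about 1.87 with the parameters above):

# At the steady state, next period's capital equals this period's capital
(1 - del)*Kstar + s*A^(1-alph)*Kstar^alph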

Paths of capital stock for different starting levels of capital

Now we’re interested in starting the economy from the steady-state level of capital and observing how the sequence of capital stocks unfolds. We initialize the capital stock and then iterate it forward with a for loop using \[ K_{t+1} = (1-\delta)K_t + sA^{1-\alpha}K_t^{\alpha}. \]

horizon <- 100 # like the 100 spreadsheet rows
K <- rep(NaN, horizon)
# Now make the first period
K[[1]] <- Kstar
for (ii in 2:horizon) {
  K[[ii]] <- (1 - del)*K[[ii-1]] + s*A^(1-alph)*K[[ii-1]]^alph
}

Now we can start the capital 25 percent above and below the steady-state level and check for convergence:

Khi <- rep(NaN, horizon)
Klo <- Khi
# Now make the first period
Khi[[1]] <- Kstar*(1 + 0.25)
Klo[[1]] <- Kstar*(1 - 0.25)
for (ii in 2:horizon) {
  Khi[[ii]] <- (1 - del)*Khi[[ii-1]] + s*A^(1-alph)*Khi[[ii-1]]^alph
  Klo[[ii]] <- (1 - del)*Klo[[ii-1]] + s*A^(1-alph)*Klo[[ii-1]]^alph
}

Now we can plot the sequences of capital stocks:

toplot <- cbind(1:horizon, K, Klo, Khi)
toplot <- as_tibble(toplot) %>% 
  rename(ind = V1)
toplot
## # A tibble: 100 x 4
##      ind     K   Klo   Khi
##    <dbl> <dbl> <dbl> <dbl>
##  1    1.  1.87  1.40  2.33
##  2    2.  1.87  1.43  2.30
##  3    3.  1.87  1.46  2.27
##  4    4.  1.87  1.48  2.24
##  5    5.  1.87  1.50  2.22
##  6    6.  1.87  1.53  2.20
##  7    7.  1.87  1.55  2.17
##  8    8.  1.87  1.57  2.15
##  9    9.  1.87  1.59  2.13
## 10   10.  1.87  1.60  2.12
## # ... with 90 more rows
# Reshape the data to plot using ggplot
toplot_K <- toplot %>% 
  gather(K, Klo, Khi, key = "series", value = "lvl")
toplot_K
## # A tibble: 300 x 3
##      ind series   lvl
##    <dbl> <chr>  <dbl>
##  1    1. K       1.87
##  2    2. K       1.87
##  3    3. K       1.87
##  4    4. K       1.87
##  5    5. K       1.87
##  6    6. K       1.87
##  7    7. K       1.87
##  8    8. K       1.87
##  9    9. K       1.87
## 10   10. K       1.87
## # ... with 290 more rows
# Now plot the sequences
ggplot(data = toplot_K) +
  geom_line(mapping = aes(x = ind, y= lvl, 
                          color = series, 
                          linetype = series), size = 2) +
  xlab("Index") + ylab("Capital stock") +
  scale_color_brewer(palette = "Set2")

Changing the saving rate

Now, starting in period 51, the saving rate changes to 0.25. We’re also interested in plotting the series for 125 periods. Let’s start the problem over and make the saving rate a vector.

horizon <- 125
K <- rep(NA, horizon)
s <- append(rep(0.15, 50), rep(0.25, horizon - 50))
# Check
s[[50]]
## [1] 0.15
s[[51]]
## [1] 0.25
length(s)
## [1] 125

Now we want to calculate the capital stock just like before, making sure that we index s appropriately:

K[[1]] <- Kstar
for (ii in 2:horizon) {
  K[[ii]] <- (1 - del)*K[[ii-1]] + s[[ii-1]]*A^(1-alph)*K[[ii-1]]^alph
}

Calculating \(C\), \(Y\), and \(I\) is easy. We want to create a data frame:

dat <- as_tibble(cbind(1:horizon, K, s)) %>% 
  rename(ind = V1)
# Calculate the macroeconomic series
dat <- dat %>% 
  mutate(Y = A*K^alph,
         I = s*Y,
         C = (1-s)*Y)
head(dat)
## # A tibble: 6 x 6
##     ind     K     s     Y     I     C
##   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1    1.  1.87 0.150  1.24 0.187  1.06
## 2    2.  1.87 0.150  1.24 0.187  1.06
## 3    3.  1.87 0.150  1.24 0.187  1.06
## 4    4.  1.87 0.150  1.24 0.187  1.06
## 5    5.  1.87 0.150  1.24 0.187  1.06
## 6    6.  1.87 0.150  1.24 0.187  1.06
tail(dat)
## # A tibble: 6 x 6
##     ind     K     s     Y     I     C
##   <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1  120.  4.07 0.250  1.63 0.409  1.23
## 2  121.  4.07 0.250  1.63 0.409  1.23
## 3  122.  4.07 0.250  1.63 0.409  1.23
## 4  123.  4.07 0.250  1.64 0.409  1.23
## 5  124.  4.08 0.250  1.64 0.409  1.23
## 6  125.  4.08 0.250  1.64 0.409  1.23

We can also add in the steady-state level of capital when s_end = .25.

s_end <- .25
Kstar_end <- A*(s_end / del)^(1 / (1-alph))
dat <- dat %>% 
  mutate(`New steady-state K` = Kstar_end)

Now we can plot everything:

Kplot <- dat %>% 
  gather(K, `New steady-state K`, key = "Series", value = "lvl")
ggplot(data = Kplot) +
  geom_line(mapping = aes(x = ind, y = lvl, color = Series)) +
  scale_color_brewer(palette = "Set1") +
  xlab("Index") + ylab("Capital stock")

macro <- dat %>% 
  gather(Y, I, C, key = "Series", value = "lvl")
ggplot(data = macro) +
  geom_line(mapping = aes(x = ind, y = lvl, color = Series)) +
  xlab("Index") + ylab("Real levels")

Changes in government spending and interest rates

This section uses Python code to study an extension to the basic Solow model.

Solow assumptions

Consider a Solow economy where a representative household saves a constant fraction of after-tax income and a government that levies lump-sum taxes to finance government consumption while running a balanced budget. Government consumption is a constant fraction, \(\phi\), of output.

The system of equations that describes the economy is given by \[ \begin{align*} Y_t &= A^{1-\alpha}K_{t}^{\alpha}N_t^{1-\alpha} \\ N_t &= 1 \\ Y_t &= C_{t} + I_{t} + G_{t} \\ I_t &= s (Y_{t} - T_{t}) \\ T_t &= G_t = \phi Y_t \\ K_{t+1} &= (1-\delta)K_t + I_t. \end{align*} \]

The steady state of this model is given by \(K_{t+1} = K_{t} = K_{\star}\). Developing the equation that governs the evolution of capital yields \[ \begin{align*} K_{t+1} &= (1-\delta) K_{t} + I_t \\ &= (1-\delta) K_{t} + s (Y_t - T_t) \\ &= (1-\delta) K_{t} + s (Y_t - G_t) \\ &= (1-\delta) K_{t} + s (Y_t - \phi Y_t) \\ &= (1-\delta) K_{t} + s (1-\phi)Y_t \\ &= (1-\delta) K_{t} + s (1-\phi)A^{1-\alpha}K_t^{\alpha}. \end{align*} \] In steady state, the latter evaluates to \[ \begin{align*} K_{\star} &= (1-\delta) K_{\star} + s (1-\phi)A^{1-\alpha}K_{\star}^{\alpha} \\ \therefore \delta K_{\star} &= s (1-\phi)A^{1-\alpha}K_{\star}^{\alpha} \\ \therefore \delta &= s (1-\phi)A^{1-\alpha}K_{\star}^{\alpha-1} \\ \therefore K_{\star}^{\alpha-1} &= \frac{\delta}{s (1-\phi)A^{1-\alpha}} \\ \therefore K_{\star} &= \left( \frac{\delta}{s (1-\phi)A^{1-\alpha}} \right)^{\frac{1}{\alpha - 1}} \end{align*} \] or \[ \begin{equation} K_{\star} = A \left( \frac{s(1-\phi)}{\delta} \right)^{\frac{1}{1-\alpha}}. \end{equation} \] The latter is an expression for steady-state capital using the Solow assumption. When \(\phi = 0\), then we are back to the expression for \(K_{\star}\) in the section on Solow convergence.

Steady-state output is \[ \begin{align*} Y_{\star} &= A^{1-\alpha}K_{\star}^{\alpha} \\ &= A^{1-\alpha} A^{\alpha} \left( \frac{s(1-\phi)}{\delta} \right)^{\frac{\alpha}{1-\alpha}} \\ &= A \left( \frac{s(1-\phi)}{\delta} \right)^{\frac{\alpha}{1-\alpha}}. \end{align*} \] Consumption is given by \[ \begin{align*} Y_t &= C_t + I_t + G_t \\ \therefore C_t &= Y_t - I_t - G_t \\ &= Y_t - s(Y_t - T_t) - G_t = Y_t - s(Y_t - G_t) - G_t \\ &= Y_t - s(Y_t - \phi Y_t) - \phi Y_t \\ &= Y_t - s(1-\phi)Y_t - \phi Y_t \\ &= (1-\phi)Y_t - s (1-\phi) Y_t \\ &= (1-s)(1-\phi)Y_t. \end{align*} \] Steady-state consumption is given by \[ \begin{equation} C_{\star} = (1-s) (1-\phi) A \left( \frac{s(1-\phi)}{\delta} \right)^{\frac{\alpha}{1-\alpha}}. \end{equation} \]

“Standard preferences” with a behavioral rule for expectations

A behavioral rule for expectations

With “standard preferences” the consumption Euler equation would be (with \(\beta = 1\) and log preferences) \[ \begin{align*} \frac{1}{C_t} = (1+r_t) \mathbb{E}_t \left[ \frac{1}{C_{t+1}} \right]. \end{align*} \] We are going to posit that consumers expect \(C_{t+1}\) to equal \(C_{\star}\), so that \(\mathbb{E}_t \left[ 1/C_{t+1} \right] = 1/C_{\star}\) and the Euler equation pins down the interest rate: \[ \begin{align*} 1+r_t = \frac{C_{\star}}{C_t}. \end{align*} \]

Computer simulation

The following simulation is done in Python, which can be used in R Markdown through the reticulate package.
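A minimal sketch of the setup (assuming reticulate is installed): load the package in an R chunk and, if needed, point it at a particular Python installation; later chunks written with the {python} chunk header run in that session, and their objects are available back in R as py$name.

library(reticulate)
# Optionally point reticulate at a specific Python (the path below is hypothetical)
# use_python("/usr/bin/python3")
py_config()  # shows which Python reticulate will use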

Suppose that \(\phi_{t}\) is uniformly distributed between .17 and .23. We’re going to simulate the economy, using \(\phi_{t}\) as the driving force.

import numpy as np
import matplotlib.pyplot as plt
# Parameter values
s = 0.2
delta = 0.1
alpha = 0.35
A = 10.0
phi_ss = 0.2
# Seed NumPy's generator, since the phi draws below use np.random.rand
np.random.seed(402)

Calculate and display steady-state levels:

Kstar = A*(s*(1-phi_ss)/delta)**(1/(1-alpha))
Ystar = (A**(1-alpha))*Kstar**(alpha)
Cstar = (1-s)*(1-phi_ss)*Ystar
Istar = s*(1-phi_ss)*Ystar
Gstar = phi_ss*Ystar
print("Steady-state capital is", Kstar)
## Steady-state capital is 20.607757849261993
print("Steady-state output is", Ystar)
## Steady-state output is 12.879848655788745
check = Ystar - Cstar - Istar - Gstar
print("Check!", check)
## Check! -2.220446049250313e-15

The experiment starts the economy out at steady state and then investigates what happens when \(\phi_t\) bounces around. (The loop below leaves the first 25 periods at their steady-state values before the shocks kick in.)

horizon = 125
Ksim = Kstar*np.ones((horizon, 1))
Ysim = Ystar*np.ones((horizon, 1))
Isim = Istar*np.ones((horizon, 1))
Csim = Cstar*np.ones((horizon, 1))
Gsim = Gstar*np.ones((horizon, 1))
rsim = 0.0*np.ones((horizon, 1))
phi_sim = 0.17 + 0.06*np.random.rand(horizon,1)
for ii in range(25, horizon):
    Ksim[ii] = (1-delta)*Ksim[ii-1] + Isim[ii-1]
    Ysim[ii] = A**(1-alpha)*(Ksim[ii])**alpha
    Isim[ii] = s*(1-phi_sim[ii])*Ysim[ii]
    Csim[ii] = (1-s)*(1-phi_sim[ii])*Ysim[ii]
    rsim[ii] = Cstar/Csim[ii] - 1
    Gsim[ii] = phi_sim[ii]*Ysim[ii]

Now calculate the deviation of \(G\) from its steady state, expressed as a fraction of the steady-state level:

Gtilde = (Gsim - Gstar)/Gstar

Go back to R to plot in R Markdown:

plot(py$Ksim, type = "o", col = "blue")
title("K simulation")

plot(py$rsim, type = "o", col = "blue")
title("r simulation")

How do fluctuations in \(G\) relate to \(r\)? Is this the relationship predicted by the IS–LM framework?

plot(py$Gtilde, py$rsim)

The figures show the economy bouncing around the steady state. This was a very quick overview of something you can do for the homework. The code starts from the steady state and then considers the effect of fluctuations in the government spending share of output, \(\phi_t\). Python is both powerful and fun.