library(alr4) # NEW! For data. Let me know if you have issues installing this one.
library(GGally) # New ... maybe? For pairwise plots
library(nullabor) # for picking out residual plots from a "line-up" of plots
library(tidyverse) # for plotting and summarizing
library(moderndive) # for nice model output
library(broom) # for nice model output
library(patchwork) # for nicely organizing plots
library(openintro) # for data
theme_set(theme_minimal()) #changes the theme of ggplots to theme_minimal, my personal favorite
GOAL:
By the end of these notes and activities, you should be able to perform the following tasks.
No explanatory variables. Every predicted value is the average response.
One quantitative explanatory variable.
Models with interaction effects. Interpretations change depending on what types of variables are in the model, but interaction effects allow the relationship between each explanatory variable involved in the interaction and the response variable to differ depending on the value of the other explanatory variable in the interaction.
(We haven’t actually discussed this, but we’re ready to!) Models with more than two variables. The most important piece is to always acknowledge that you’ve accounted for the other variables when interpreting specific coefficients.
Linear models rely on several assumptions about the residuals: they should have mean zero and constant variance across the fitted values, be approximately normally distributed, and be independent of one another.
Violations of these assumptions can cause problems when we want to make inferences, which we’ll want to do very soon. We can check many of these assumptions by looking at a plot of the residuals vs. fitted values and a histogram of the residuals.
Below, I simulated some data to ensure the assumptions are met. Simulated means I created the data in a specific way so that it adheres to certain properties. Do not worry about the code.
set.seed(10) # for reproducibility
# explanatory variable: 500 draws from a normal distribution
x <- rnorm(500, mean = 400, sd = 200)
# response: linear in x, with errors that have mean 0 and constant variance
y <- 100 + 4*x + rnorm(500, mean = 0, sd = 300)
simdat <- tibble(x, y)
ggplot(simdat) +
geom_point(aes(x = x, y = y)) +
theme_minimal()
Let’s fit a model and look at some helpful plots.
sim_mod <- lm(y ~ x, data=simdat)
tidy(sim_mod)
augment(sim_mod) %>%
# plot residuals on y-axis and fitted values on x-axis
ggplot(aes(x=.fitted, y=.resid)) +
geom_point() +
# add a smooth curve showing the average residual across the fitted values
geom_smooth(se = FALSE) +
# add a horizontal line at y = 0
geom_hline(yintercept = 0, color = "red")
augment(sim_mod) %>%
ggplot(aes(x=.resid)) +
geom_histogram(bins = 30)
YOUR TURN!
In the first plot, we can assess if the residuals have constant variance and mean zero. How? What might the plot look like if they didn’t?
The second plot allows us to check the normality of the residuals. How? What might it look like if it weren’t normal?
It is difficult to check for independence. Can you think of any scenarios where our data might not be independent?
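To help with the first question, here is a minimal sketch (my own illustration, not part of the original notes) that simulates data whose error spread grows with x, so the constant variance assumption fails. The residual plot fans out from left to right instead of showing even spread around zero.
# simulate data with NON-constant variance: the error sd grows with x
set.seed(47) # arbitrary seed, chosen for illustration
x_bad <- rnorm(500, mean = 400, sd = 200)
y_bad <- 100 + 4*x_bad + rnorm(500, mean = 0, sd = abs(x_bad))
bad_mod <- lm(y_bad ~ x_bad, data = tibble(x_bad, y_bad))

# the residuals "fan out" as the fitted values increase
augment(bad_mod) %>%
  ggplot(aes(x = .fitted, y = .resid)) +
  geom_point() +
  geom_hline(yintercept = 0, color = "red")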
It can be difficult to decide if the residual plot shows any “weird” patterns. Our eyeballs and brains are pretty good at seeing patterns where they don’t actually exist.
One way we can combat this is by looking at the plot from our model among a lineup of other plots where there is no pattern. If we can’t pick out the plot from our model, then likely there is no weird pattern.
I won’t ever ask you to write this code on your own, but I will explain what it is doing. First, we create 20 datasets that have .fitted and .resid columns. One of the datasets is our actual dataset, created using the augment() function. The other 19 are the same except the residuals are permuted (randomly mixed up).
The weird code in the output can be used to find out which .sample is the true data. Run it in the console after looking at the plots to find out if your guess was correct.
set.seed(155) # for reproducibility
# create a "lineup" of data from our model
# BUT, in each dataset, the residuals are permuted (mixed up)
mod_lineup <- lineup(null_permute(".resid"),
true = augment(sim_mod))
## decrypt("sD0f gCdC En JP2EdEPn 8j")
mod_lineup
Then, we look at all 20 .resid vs. .fitted plots. Can we pick out our residual plot from the lineup? If not, there’s likely nothing “weird” about it and our assumptions are probably satisfied (at least the ones we can check with this plot). If we CAN pick it out, then an assumption is likely violated.
ggplot(mod_lineup) +
geom_point(aes(x = .fitted, y = .resid)) +
facet_wrap(vars(.sample))
Let’s try this with the King County house data. First, we read in the data and do some slight modifications.
kc_house_data2 <-
house_prices %>%
filter(bedrooms<=5, bedrooms>0) %>%
mutate(grade_CAT = fct_relevel(ifelse(grade %in% 1:7, "Low",
ifelse(grade == "8", "Medium", "High")),
"Low", "Medium", "High"),
age=2015-yr_built)
Next, fit the model price ~ sqft_living15 + age.
kc_2var <- lm(price ~ sqft_living15 + age,
data=kc_house_data2)
tidy(kc_2var)
Now, let’s look at a plot of residuals vs. fitted values and a histogram of the residuals.
a1 <- augment(kc_2var) %>%
ggplot(aes(x=.fitted, y = .resid)) +
geom_point(size = .5, alpha = .3) +
geom_smooth(se=FALSE) +
geom_hline(yintercept = 0, color = "darkred") +
scale_y_continuous(labels = scales::comma) +
labs(x = "Fitted Values", y = "Residuals")
a2 <- augment(kc_2var) %>%
ggplot(aes(x=.resid)) +
geom_histogram(bins=50) +
scale_x_continuous(labels = scales::comma)
a1 + a2
I will also create the lineup of residuals vs. fitted values. Can you pick out one that looks different? After you’ve looked and think you might have found it, you can copy and paste the decrypt() code into the console to see if it’s the one you thought.
set.seed(155) # for reproducibility
kc_lineup <- lineup(null_permute(".resid"),
true = augment(kc_2var))
## decrypt("sD0f gCdC En JP2EdEPn 8T")
ggplot(kc_lineup) +
geom_point(aes(x = .fitted, y = .resid),
size = .5, alpha = .3) +
facet_wrap(vars(.sample))
YOUR TURN!
What do you observe in these plots? Do any assumptions appear violated?
Let’s look at another plot. This is a new dataset that has the age and length of walleye (a type of fish), from the alr4 package.
ggplot(walleye) +
geom_jitter(aes(x = age, y = length)) +
theme_minimal()
YOUR TURN!
Fit a model that uses age to explain length, and check the residual plots. Do any assumptions appear violated?
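One possible sketch of that fit (the object name walleye_mod is my choice, not from the notes):
walleye_mod <- lm(length ~ age, data = walleye)
tidy(walleye_mod)

# residuals vs. fitted values, as before
augment(walleye_mod) %>%
  ggplot(aes(x = .fitted, y = .resid)) +
  geom_point() +
  geom_smooth(se = FALSE) +
  geom_hline(yintercept = 0, color = "red")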
There are many sophisticated methods that can be used to fix problems with linear model assumptions. But a fairly simple solution that often works well is to transform variables. We will discuss two different transformations and apply them to the two examples from above.
When variables range across more than one order of magnitude (i.e., they have values in the 1,000s AND 10,000s, or the 100,000s AND the 1,000,000s), log transforming can often help fix non-constant variance. Let’s look at the following scatterplot matrix. I did a log base 2 transformation of both price and sqft_living15. What do you notice?
kc_house_data2 %>%
mutate(log2_price = log2(price),
log2_sqft = log2(sqft_living15)) %>%
select(age, sqft_living15, log2_sqft, log2_price) %>%
ggpairs()
Now, let’s fit a model that uses log2_price as the response and age and log2_sqft as explanatory variables. First, I need to create a new dataset with these variables.
kc_house_log <- kc_house_data2 %>%
mutate(log2_price = log2(price),
log2_sqft = log2(sqft_living15))
kc_log <- lm(log2_price ~ age + log2_sqft,
data = kc_house_log)
tidy(kc_log)
YOUR TURN!
Use the residual plots to check the model assumptions. How do they look now?
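As a starting point, the plots can be produced just as before (a sketch mirroring the earlier code):
augment(kc_log) %>%
  ggplot(aes(x = .fitted, y = .resid)) +
  geom_point(size = .5, alpha = .3) +
  geom_smooth(se = FALSE) +
  geom_hline(yintercept = 0, color = "darkred")

augment(kc_log) %>%
  ggplot(aes(x = .resid)) +
  geom_histogram(bins = 50)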
For the walleye data, the biggest problem seemed to be that the mean was not zero throughout the range of fitted values. This is because the relationship between age and length was not linear to begin with.
We can try to address this by adding a quadratic (or higher order polynomial) term to our model. I have done that below. Notice that you need to enclose the polynomial term in I().
walleye_quadratic <- lm(length ~ age + I(age^2),
data=walleye)
tidy(walleye_quadratic)
YOUR TURN!
Check the residual plots for this model. How do they look now? You might also plot the fitted curve on a scatterplot of age versus length.
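Here is a sketch of one way to overlay the fitted curve, using the .fitted values from augment() (my approach; others work too):
augment(walleye_quadratic) %>%
  ggplot(aes(x = age)) +
  geom_jitter(aes(y = length)) +
  # connect the fitted values to trace the fitted quadratic curve
  geom_line(aes(y = .fitted), color = "blue")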
Interpretation can get a bit tricky after transforming variables. We usually still prefer to interpret the model in the original units. When doing log transformations, it will be helpful to remember some rules of logs and exponents (see the help sheet on the Moodle page). Below you will work through an example using the following model.
tidy(kc_log)
YOUR TURN!
Write down the model equation in terms of price. That is, rather than having log2_price on the left-hand side of the equation, price is on the left-hand side. (A hint follows these questions.)
Using the equation from the previous step, how does an increase of 1 year in age of a home typically affect the price (with all other variables in the model held fixed)? It might be helpful to try an example.
How does doubling square footage typically affect the price (with all other variables in the model held fixed)? There is a reason I’m asking this question in this way …
Now build a more complex model with log2_price as the response. Add a categorical variable and/or interaction effect. How do you interpret the coefficients of a categorical variable? An interaction effect?
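As a hint for the first question, here is the general algebra, written with generic coefficient symbols rather than the fitted numbers. Starting from
\[ \log_2(\widehat{price}) = \hat{\beta}_0 + \hat{\beta}_1 \cdot age + \hat{\beta}_2 \cdot \log_2(sqft\_living15), \]
raising 2 to the power of each side gives
\[ \widehat{price} = 2^{\hat{\beta}_0} \cdot 2^{\hat{\beta}_1 \cdot age} \cdot sqft\_living15^{\hat{\beta}_2}, \]
which uses the rule \(2^{\log_2(a)} = a\). Notice that the model is multiplicative on the original price scale.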
In our class, anytime we use \(\log\) without a subscript, it should be taken as the natural log, \(\log_e\) or \(\ln\).
In the rules below, \(a, b\) are numbers.
\(\log(ab) = \log(a) + \log(b)\)
\(\log(a/b) = \log(a) - \log(b)\)
\(\log(a^b) = b\log(a)\)
\(e^{\log(a)} = a\), and likewise \(2^{\log_2(a)} = a\)
R tips
exp(x) = \(e^x\)
log(x) = \(\log_e(x)\)
log2() and log10() compute \(\log_2(x)\) and \(\log_{10}(x)\), respectively.
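A quick sketch verifying these in the console (expected output shown as comments):
exp(1)       # 2.718282, the number e
log(exp(3))  # 3, since log() is the natural log
log2(8)      # 3
log10(100)   # 2
log2(4 * 8)  # 5, which equals log2(4) + log2(8)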