# Appendix

## R packages

The following is a non-exhaustive list of R packages that contain GAM functionality. Each is linked to its CRAN page. Note also that several build upon the mgcv package used throughout this document. I haven't surveyed the landscape much lately, as between mgcv and brms there is little you can't do. I can vouch that gamlss and VGAM are decent as well, though I haven't used either in a long time.

brms: Allows for Bayesian GAMs via the Stan modeling language (very new implementation).

CausalGAM: This package implements various estimators for average treatment effects.

gam: Functions for fitting and working with generalized additive models.

gamboostLSS: Boosting models for fitting generalized additive models for location, scale, and shape (GAMLSS models).

GAMens: This package implements the GAMbag, GAMrsm and GAMens ensemble classifiers for binary classification.

gamlss: Generalized additive models for location, scale, and shape.

gamm4: Fit generalized additive mixed models via a version of mgcv’s gamm function.

gss: A comprehensive package for structural multivariate function estimation using smoothing splines.

mgcv: Routines for GAMs and other generalized ridge regression with multiple smoothing parameter selection by GCV, REML or UBRE/AIC. Also GAMMs.

VGAM: Vector generalized linear and additive models, and associated models.

## A comparison to mixed models

We noted previously that there were ties between generalized additive and mixed models. Aside from the identical matrix representation noted in the technical section, one of the key ideas is that the penalty parameter for the smooth coefficients reflects the ratio of the residual variance to the variance components for the random effects (see Fahrmeir et al., 2013, p. 483). Conversely, we can recover the variance components by dividing the scale estimate by the penalty parameter.

To demonstrate this, we can set things up by running what will amount to equivalent models in both mgcv and lme4, using the sleepstudy data set that comes with the latter^{55}. I'll run a model with random intercepts and slopes, and for this comparison the two random effects will not be correlated. We will use the standard smoothing approach in mgcv, just with the basis specification for random effects, `bs = 're'`. In addition, we'll use restricted maximum likelihood, as is the typical default in mixed models.

```
library(lme4)
library(mgcv)

mixed_model = lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject),
                   data = sleepstudy)

ga_model = gam(
  Reaction ~ Days + s(Subject, bs = 're') + s(Days, Subject, bs = 're'),
  data   = sleepstudy,
  method = 'REML'
)
```

In the following we can see that they agree on the fixed/parametric effects, but our output for the GAM is in the usual, albeit uninterpretable, form. So we'll have to translate the smooth terms from the GAM to variance components as in the mixed model.

`summary(mixed_model)`

```
Linear mixed model fit by REML ['lmerMod']
Formula: Reaction ~ Days + (1 | Subject) + (0 + Days | Subject)
   Data: sleepstudy

REML criterion at convergence: 1743.7

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-3.9626 -0.4625  0.0204  0.4653  5.1860 

Random effects:
 Groups    Name        Variance Std.Dev.
 Subject   (Intercept) 627.57   25.051  
 Subject.1 Days         35.86    5.988  
 Residual              653.58   25.565  
Number of obs: 180, groups:  Subject, 18

Fixed effects:
            Estimate Std. Error t value
(Intercept)  251.405      6.885  36.513
Days          10.467      1.560   6.712

Correlation of Fixed Effects:
     (Intr)
Days -0.184
```

`summary(ga_model)`

```
Family: gaussian 
Link function: identity 

Formula:
Reaction ~ Days + s(Subject, bs = "re") + s(Days, Subject, bs = "re")

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  251.405      6.885  36.513  < 2e-16 ***
Days          10.467      1.560   6.712 3.67e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Approximate significance of smooth terms:
                  edf Ref.df      F  p-value    
s(Subject)      12.94     17  89.29 1.09e-06 ***
s(Days,Subject) 14.41     17 104.56  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R-sq.(adj) =  0.794   Deviance explained = 82.7%
-REML = 871.83  Scale est. = 653.58    n = 180
```

Conceptually, we can demonstrate the relationship with the following code, which divides the scale by the penalty parameters, one for each of the smooth terms. However, there has been some rescaling behind the scenes for the Days effect, so we have to rescale it to get what we need.

```
rescaled_results = c(
  ga_model$reml.scale / ga_model$sp[1],
  ga_model$reml.scale / (ga_model$sp[2] / ga_model$smooth[[2]]$S.scale),
  NA
)

lmer_vcov = VarCorr(mixed_model) %>% data.frame()
gam_vcov  = data.frame(var = rescaled_results, gam.vcomp(ga_model))
```

My personal package mixedup does this for you, and otherwise makes comparing mixed models from different sources easier.

```
mixedup::extract_variance_components(mixed_model)
mixedup::extract_variance_components(ga_model)
```

| model | group     | effect    | variance | sd    | sd_2.5 | sd_97.5 | var_prop |
|-------|-----------|-----------|----------|-------|--------|---------|----------|
| mixed | Subject   | Intercept | 627.57   | 25.05 | 15.26  | 37.79   | 0.48     |
| mixed | Subject.1 | Days      | 35.86    | 5.99  | 3.96   | 8.77    | 0.03     |
| mixed | Residual  |           | 653.58   | 25.57 | 22.88  | 28.79   | 0.50     |
| gam   | Subject   | Intercept | 627.57   | 25.05 | 16.09  | 39.02   | 0.48     |
| gam   | Subject   | Days      | 35.86    | 5.99  | 4.03   | 8.91    | 0.03     |
| gam   | Residual  |           | 653.58   | 25.57 | 22.79  | 28.68   | 0.50     |

Think about it this way. Essentially what is happening behind the scenes is that effect interactions with the grouping variable are added to the model matrix (e.g. `~ ... + Days:Subject - 1`)^{56}. The coefficients pertaining to the interaction terms are then penalized in the typical GAM estimation process. A smaller estimated penalty parameter suggests more variability in the random effects, while a larger penalty means more shrinkage of the random intercepts and slopes toward the population-level (fixed) effects.
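
To make that concrete, here is a small sketch of the sort of columns involved, built with model.matrix on the sleepstudy data (the exact construction mgcv uses internally may differ in details):

```r
library(lme4)   # for the sleepstudy data

# one indicator column per subject plays the role of the random intercepts
Z_int = model.matrix(~ 0 + Subject, data = sleepstudy)

# Days-by-subject columns play the role of the random slopes
Z_slope = model.matrix(~ 0 + Days:Subject, data = sleepstudy)

dim(Z_int)   # 180 observations by 18 subjects
```

Penalizing the coefficients attached to these columns is what shrinks each subject's intercept and slope toward the overall effects.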

Going further, we can think of smooth terms as adding random effects to the linear component^{57}. With a large enough penalty, the result is simply the linear part of the model. In this example, that would be akin to relatively little random effect variance.
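
As a quick sketch of that idea, we can fix the smoothing parameter at a huge value and watch the smooth collapse to its unpenalized (linear) null space. The choice of `sp = 1e10` here is arbitrary, purely for demonstration:

```r
library(mgcv)
library(lme4)   # just for the sleepstudy data

# an enormous fixed penalty shrinks away the wiggly part of s(Days),
# leaving only the unpenalized linear null space
overly_smooth = gam(Reaction ~ s(Days), data = sleepstudy, sp = 1e10)

summary(overly_smooth)$edf   # effective df for s(Days) near 1, i.e. a straight line
```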

## Time and Space

One of the things to know about GAMs is just how flexible they are. Along with all that we have mentioned, they can also be applied to situations where one is interested in temporal trends or the spatial aspects of the data. The penalized regression approach used by GAMs extends easily to such situations, and the mgcv package in particular has a lot of options here.

### Time

A natural setting for GAMs is where there are observations over time, and perhaps we want to examine the trend. The SLiM would posit a linear trend, but we would often doubt that is the case. How would we do this with a GAM? We can incorporate a feature representing the time component and add it as a smooth term. There will be some additional issues though, as we will see.

Here I use the data and example from Gavin Simpson's nifty blog, though with my own edits, updated data, and a different model^{58}. The data regards global temperature anomalies.

```
## Global temperatures
# Original found at "https://crudata.uea.ac.uk/cru/data/temperature/"
load(url('https://github.com/m-clark/generalized-additive-models/raw/master/data/global_temperatures.RData'))
```

Fitting a straight line to this would be disastrous, so let’s do a GAM.

```
hot_gam = gam(Annual ~ s(Year), data = gtemp)

summary(hot_gam)
```

```
Family: gaussian 
Link function: identity 

Formula:
Annual ~ s(Year)

Parametric coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.076564   0.007551  -10.14   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Approximate significance of smooth terms:
          edf Ref.df     F p-value    
s(Year) 7.923  8.696 182.2  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R-sq.(adj) =  0.903   Deviance explained = 90.7%
GCV = 0.010342  Scale est. = 0.0098059  n = 172
```

We can see that the trend is generally increasing, and has been more or less since the beginning of the 20th century. We have a remaining issue though. In general, a time series is autocorrelated, i.e. correlated with itself over time. We can see this in the following plot.

`acf(gtemp$Annual)`

What the plot shows is the correlation of the values with themselves at different *lags*, or time spacings. Lag 0 is a value's correlation with itself, so the value is 1.0. The correlation with the previous time point, i.e. lag = 1, is 0.92; the correlation at two time points back is slightly less, 0.86; and the decreasing trend continues slowly. The dotted lines indicate a 95% confidence interval around zero, meaning that the autocorrelation is still significant 25 years apart.

With our model, the issue remains: there is still autocorrelation among the residuals, at least at lag 1.
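
A quick check along those lines, just a sketch using the residuals of the hot_gam model above:

```r
# how much lag-1 autocorrelation remains after accounting for the smooth trend?
res_acf = acf(residuals(hot_gam), plot = FALSE)

res_acf$acf[2]   # the lag-1 value (index 1 is lag 0); positive and non-negligible
```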

The practical implication of autocorrelated residuals is that this positive correlation would result in variance estimates that are too low. However, we can take this into account with a slight tweak to our model that incorporates the autocorrelation. For our purposes, we'll switch to the gamm function. It adds functionality for generalized additive *mixed* models, though we can use it just to incorporate autocorrelation of the residuals. In running this, two sets of output are provided: one in our familiar gam model object, and the other an lme object from the nlme package.

```
hot_gam_ar = gamm(
  Annual ~ s(Year),
  data        = gtemp,
  correlation = corAR1(form = ~ Year)
)

summary(hot_gam_ar$gam)
```

```
Family: gaussian 
Link function: identity 

Formula:
Annual ~ s(Year)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.07696    0.01130  -6.812 1.73e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Approximate significance of smooth terms:
          edf Ref.df     F p-value    
s(Year) 6.879  6.879 104.1  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R-sq.(adj) =  0.901   
Scale est. = 0.010297  n = 172
```

`summary(hot_gam_ar$lme)`

```
Linear mixed-effects model fit by maximum likelihood
  Data: strip.offset(mf) 
        AIC       BIC   logLik
  -289.0641 -273.3266 149.5321

Random effects:
 Formula: ~Xr - 1 | g
 Structure: pdIdnot
             Xr1      Xr2      Xr3      Xr4      Xr5      Xr6      Xr7      Xr8  Residual
StdDev: 1.252053 1.252053 1.252053 1.252053 1.252053 1.252053 1.252053 1.252053 0.1014737

Correlation Structure: AR(1)
 Formula: ~Year | g 
 Parameter estimate(s):
      Phi 
0.3614622 

Fixed effects:  y ~ X - 1 
                  Value Std.Error  DF   t-value p-value
X(Intercept) -0.0769567 0.0113296 170 -6.792539  0.0000
Xs(Year)Fx1   0.4282956 0.1692888 170  2.529970  0.0123

 Correlation: 
            X(Int)
Xs(Year)Fx1 0     

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max 
-2.21288171 -0.73869325  0.04665656  0.70416540  3.25638634 

Number of Observations: 172
Number of Groups: 1
```

In the gam output, we see some slight differences from the original model, but not much (and we wouldn't expect much). From the lme output we can see the estimated autocorrelation value, denoted as `Phi`^{59}. Let's see what it does for the uncertainty in our model estimates.

We can in fact see that we were a bit optimistic in the previous fit (darker band). Our new fit expresses more uncertainty at every point^{60}. So, in using a GAM for time-series data, we have similar issues that we'd have in standard regression settings, and we can deal with them in much the same way to get a better sense of the uncertainty in our estimates.
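
One way to see this directly is to compare the pointwise standard errors from the two fits. A sketch, using the hot_gam and hot_gam_ar objects from above:

```r
# standard errors from the naive fit vs. the AR(1) fit
se_naive = predict(hot_gam, se.fit = TRUE)$se.fit
se_ar1   = predict(hot_gam_ar$gam, se.fit = TRUE)$se.fit

# ratios above 1 mean more uncertainty once the autocorrelation is modeled
summary(as.numeric(se_ar1 / se_naive))
```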

### Space

Consider a data set with latitude and longitude coordinates to go along with other features used to model some target variable. A spatial regression analysis would account for the spatial covariance among the observation points. One common approach is a special case of the *Gaussian process*; as we noted previously, certain types of GAMs can themselves be seen as Gaussian processes. In addition, some types of spatial models can be seen as similar to random effects models, much like GAMs. Such connections mean that we can add spatial models to the sorts of models covered by GAMs too.

When dealing with space, we may have spatial locations of a continuous sort, such as with latitude and longitude, or in a discrete sense, such as regions. In what follows we’ll examine both cases.

#### Continuous Spatial Setting

Our example^{61} will use census data from New Zealand and focus on median income. It uses the nzcensus package^{62}, which includes median income, latitude, longitude, and several dozen other variables. The latitude and longitude are actually centroids of the area units, so this could technically also serve as a discrete example based on the unit.

Let’s take an initial peek. You can hover over the points to get the location and income information.

```
library(nzcensus)
library(tidyverse)   # filter, rename, drop_na

nz_census = AreaUnits2013 %>%
  filter(WGS84Longitude > 0 & !is.na(MedianIncome2013)) %>%
  rename(
    lon    = WGS84Longitude,
    lat    = WGS84Latitude,
    Income = MedianIncome2013
  ) %>%
  drop_na()
```

So we can go ahead and run a model predicting median income solely by geography. We'll use a Gaussian process basis and allow latitude and longitude to interact (bumping up the potential wiggliness beyond the default to allow for a little more nuance). What the GAM will allow us to do is smooth our predictions beyond the points we have in the data, to get a more complete picture of income distribution across the whole area^{63}^{64}.

```
nz_gam = gam(Income ~ s(lon, lat, bs = 'gp', k = 100, m = 2), data = nz_census)

summary(nz_gam)
```

```
Family: gaussian 
Link function: identity 

Formula:
Income ~ s(lon, lat, bs = "gp", k = 100, m = 2)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  29497.8      148.1   199.2   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Approximate significance of smooth terms:
             edf Ref.df     F p-value    
s(lon,lat) 76.38   90.1 7.445  <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R-sq.(adj) =   0.27   Deviance explained = 30.1%
GCV = 4.0878e+07  Scale est. = 3.9105e+07  n = 1784
```
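
To actually fill in the picture beyond the observed centroids, we can predict on a regular grid of coordinates. A sketch, using the nz_census data and nz_gam model from above (the grid resolution and the ggplot2 display are arbitrary choices):

```r
# predict median income over a regular lon/lat grid covering the data
nz_grid = expand.grid(
  lon = seq(min(nz_census$lon), max(nz_census$lon), length.out = 100),
  lat = seq(min(nz_census$lat), max(nz_census$lat), length.out = 100)
)

nz_grid$Income = predict(nz_gam, newdata = nz_grid)

# visualize the smoothed income surface
library(ggplot2)

ggplot(nz_grid, aes(x = lon, y = lat, fill = Income)) +
  geom_tile()
```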