The case for GAMs
Why not just use standard methods?
We have seen that GAMs generalize our typical approaches, but can they really help? The standard linear model is ubiquitous in statistical training and application, and for good reason: it is simple to run and easy to interpret. Let's fit one to some simulated data to get things started.
```r
mod = lm(y ~ x1 + x2, data = dat)
summary(mod)
```
| | Estimate | Std. Error | t value | Pr(>\|t\|) |
|---|---|---|---|---|
| (Intercept) | 7.3 | 0.3 | 21.8 | 0 |
| x1 | 6.4 | 0.4 | 14.6 | 0 |
| x2 | -5.1 | 0.5 | -11.3 | 0 |

| Observations | Residual Std. Error | \(R^2\) | Adjusted \(R^2\) |
|---|---|---|---|
| 400 | 2.532 | 0.4594 | 0.4567 |
Everything is nice and tidy. We have straightforward information: a positive effect for x1, a negative effect for x2, and familiar output. Depending on your context, the \(R^2\) may or may not be something exciting. Let's look at some diagnostics⁵.
Some issues might be present, as we appear to be getting a bit more variance at higher fitted values. We're also a little loose in the tails of the distribution of the residuals. Let's compare our predictions to the data. With a strong model, we might see a cigar-shaped cloud converging to a line with slope 1 as the fit gets better. We seem to be having some issues here, as the previous residual plot suggested.
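If you want to poke at these diagnostics yourself, a minimal sketch follows; it assumes the `mod` and `dat` objects from the fit above.

```r
# base R diagnostics for the linear model: residuals vs. fitted and a qq-plot
plot(mod, which = 1:2)

# predictions vs. observed values; a slope-1 line is the ideal
plot(dat$y, fitted(mod), xlab = 'observed y', ylab = 'fitted y')
abline(0, 1)
```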
Now let’s go back and visualize the data. The following plots both features against the target variable.
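A rough way to produce such plots, again assuming the `dat` object used above:

```r
# each feature plotted against the target
par(mfrow = c(1, 2))
plot(dat$x1, dat$y, xlab = 'x1', ylab = 'y')
plot(dat$x2, dat$y, xlab = 'x2', ylab = 'y')
par(mfrow = c(1, 1))
```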
Yikes. We certainly have a positive effect for x1, but it looks rather curvy. The other feature doesn’t appear to have a relationship that could be classified as easily. It is sharply positive for low values of x2, but negative thereafter.
Heteroscedasticity, non-normality etc.
In many cases like the above, people have some standby methods for dealing with the problem. For example, they might see the qq-plot for the residuals and deem some of those cases 'outliers', perhaps even dropping them from the analysis. Others might try a transformation of the target variable; for example, in cases of heteroscedasticity (not because of non-normality!), some might take the log.
```r
modlog = lm(log(y) ~ x1 + x2, dat)
summary(modlog)
```
| | Estimate | Std. Error | t value | Pr(>\|t\|) |
|---|---|---|---|---|
| (Intercept) | 1.83 | 0.05 | 33.84 | 0 |
| x1 | 0.94 | 0.07 | 13.29 | 0 |
| x2 | -0.69 | 0.07 | -9.37 | 0 |

| Observations | Residual Std. Error | \(R^2\) | Adjusted \(R^2\) |
|---|---|---|---|
| 400 | 0.4102 | 0.3968 | 0.3937 |
Well, our fit in terms of \(R^2\) has actually gone down. Let's check the diagnostics.
The transformation may have helped in some ways, but made other things worse.
We continue to see some poorly fit cases, and now our fit is flattening out even more than it was.
This is a fairly typical result. Transformations often exacerbate data issues or fail to help. What’s more, some of them lead to more difficult interpretation, or aren’t even applicable (e.g. categorical, ordinal targets).
Outliers, if there were actually a standard for deeming something as such, are just indications that your model doesn't capture the data-generating process in some fashion. Cutting data out of the modeling process for that reason hasn't been acceptable practice for a long time (if it ever was).
Data abounds where a standard linear model performs poorly or doesn’t do a good enough job capturing the nuances of the data. There may be nonlinear relationships as above, dependency in the observations, known non-Gaussian data etc. One should be prepared to use models better suited to the situation, rather than torturing the data to fit a simplified modeling scheme.
Polynomial Regression
A common approach for dealing with nonlinear relationships in regression is polynomial regression. For the feature in question, \(x\), we add terms, e.g. quadratic (\(x^2\)), cubic (\(x^3\)), etc., to get a better fit. Consider the following data situation.
Let’s fit a quadratic term and note the result.
```r
mod_poly = lm(y ~ poly(x, 2))
```
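To pull out the fit statistic discussed next and visualize the curve, something like the following would do. This is only a sketch; `x` and `y` here stand in for the (unnamed) simulated data behind the plot above.

```r
# R^2 for the quadratic fit
summary(mod_poly)$r.squared

# overlay the fitted quadratic curve on the data
ord = order(x)
plot(x, y)
lines(x[ord], fitted(mod_poly)[ord], col = 'red')
```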
The \(R^2\) for this is 0.78; that's great! But look closely and you might notice that we aren't capturing the lower tail of this target at all, and not doing so great for observations that are very high on both variables. Here's what the data and fit would look like if we extend the data based on the underlying true function, and things only get worse.
Part of the reason is that, outside of deterministic relationships due to known physical or other causes, you are unlikely to discover a true quadratic relationship between variables in found data. Even when it appears to fit, without a lot of data it is almost certainly overfit for this very reason. Fitting a polynomial is more akin to enforcing our vision of how the data should be, rather than letting the data speak for itself. Sure, it might be a good approximation some of the time, just as assuming a linear relationship is, but often it's just wishful thinking.
Compare the previous result to the following fit from a generalized additive model. GAMs are susceptible to extrapolation problems, as is every statistical model ever created. However, the original fit (in red) is much better. Notice how it was better able to follow the straightened-out data points at the high end, rather than continuing the bend that the quadratic approach enforced.
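A fit along those lines can be obtained with the mgcv package. The following is only a minimal sketch, assuming the same `x` and `y` used for the polynomial fit above.

```r
library(mgcv)

# a penalized smooth of x rather than a fixed polynomial form
mod_gam = gam(y ~ s(x))

summary(mod_gam)$r.sq   # adjusted R^2 as reported by mgcv
plot(mod_gam)           # the estimated smooth term
```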
A more complex relationship
Perhaps you would have been satisfied with the initial quadratic fit above, or perhaps a cubic fit⁶. But we may come across a situation where the target of interest \(y\) is a function of some covariate \(x\) whose effect is not straightforward at all. Consider the following functional form for \(x\):
\[f(x) = \sin(2(4x - 2)) + 2e^{-(16^2)(x - .5)^2} + \epsilon\]
\[\epsilon \sim N(0, .3^2)\]
Let’s generate some data and take a look at it visually.
```r
set.seed(123)

x  = runif(500)
mu = sin(2 * (4 * x - 2)) + 2 * exp(-(16 ^ 2) * ((x - .5) ^ 2))
y  = rnorm(500, mu, .3)
d  = data.frame(x, y)
```
Polynomial regression is problematic
A standard linear regression is definitely not going to capture this relationship. As above, we could try to use polynomial regression here, e.g. fitting a quadratic or cubic function within the standard regression framework. However, this is unrealistic at best, and at worst isn't useful for complex relationships. In the following, even with a polynomial of degree 15 the fit is fairly poor in many areas, and 'wiggles' in some places where there doesn't appear to be any need to. You can (double) click to isolate specific fits.
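To see this for yourself, a sketch along the following lines fits polynomials of increasing degree to the simulated data `d` and overlays the curves; the degrees chosen here are arbitrary.

```r
# polynomials of increasing degree fit to the simulated data d
degrees = c(3, 6, 15)
ord = order(d$x)

plot(y ~ x, data = d, col = 'gray50')

for (p in degrees) {
  fit = lm(y ~ poly(x, p), data = d)
  lines(d$x[ord], fitted(fit)[ord], lwd = 2, col = which(degrees == p) + 1)
}

legend('topright', legend = paste('degree', degrees),
       col = seq_along(degrees) + 1, lwd = 2)
```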
The same would hold true for other approaches that require the functional form to be specified (e.g. so-called logistic growth curve models). It is perhaps also obvious that a target transformation (e.g. a log) isn't going to help us in this case either. In general, we'll need tools better suited to more complex relationships, or simply ones that don't require us to overfit or oversimplify the relationship we see, or to guess at its form until we stumble on a decent fit.