7  Common Models

Before really getting into some machine learning models, let’s get one thing straight from the outset: any model may be used in machine learning, from a standard linear model to a deep neural network. The key focus in ML is on performance, and generally we’ll go with what works. This means that the modeler is often less concerned with the interpretation of the model, and more with the ability of the model to predict well on new data, but as we’ll see we can do both if desired. In this chapter, we will explore some of the more common machine learning models and techniques.

7.1 Key Ideas

The take home messages from this section include the following:

  • Any model can be used with machine learning
  • A good and simple baseline is essential for interpreting your performance results
  • One only needs a small set of tools (models) to go very far with machine learning

7.1.1 Why this matters

Having the right tools in data science saves time and improves results, and using well-known tools means you’ll have plenty of resources for help. It also allows you to focus more on the data and the problem, rather than the details of the model. A simple model might be all you need, but if you need something more complex, these models can still provide a performance benchmark.

7.1.2 Good to Know

Before diving in, it’d be helpful to be familiar with the following:

  • Linear models, esp. linear and logistic regression (Chapter 1, Chapter 4)
  • Basic machine learning concepts as outlined in the ML Concepts chapter (Chapter 6)
  • Model estimation as outlined in the Estimation chapter (Chapter 3)

7.2 General Approach

Let’s start with a general approach to machine learning to help us get some bearings. Here is an example outline of the process we could take. This incorporates some of the ideas we also cover in other chapters, and we’ll demonstrate most of this in the following sections.

  • Define the problem, including the target variable(s)
  • Select the model(s) to be used, including one baseline model
  • Define the performance objective and metric(s) used for model assessment
  • Define the search space (parameters, hyperparameters) for those models
  • Define the search method (optimization)
  • Implement some sort of validation technique and collect the corresponding performance metrics
  • Evaluate the results on unseen data with the chosen model
  • Interpret the results

Here is a more concrete example:

  • Define the problem: predict the probability of heart disease given a set of features
  • Select the model(s) to be used: ridge regression, standard regression with no penalty as baseline
  • Define the objective and performance metric(s): RMSE, R-squared
  • Define the search space (parameters, hyperparameters) for those models: penalty parameter
  • Define the search method (optimization): grid search
  • Implement some sort of cross-validation technique: 5-fold cross-validation
  • Evaluate the results on unseen data: RMSE on test data
  • Interpret the results: the ridge regression model performed better than the baseline model, and the coefficients tell us something about the nature of the relationship between the features and the target

As we go along in this chapter, we’ll see most of this in action. So let’s get to it!

7.3 Data Setup

For our demonstration here, we’ll use the heart disease dataset. This is a popular ML binary classification problem, where we want to predict whether a patient has heart disease, given information such as age, sex, resting heart rate, etc.

There are two forms of the data - one which is mostly in raw form, and one that is purely numeric, where the categorical features are dummy coded and where numeric variables have been standardized (Section 9.2). The purely numeric version will allow us to forgo any additional data processing for some model/package implementations. We have also dropped the handful of rows with missing values. This form of the data will allow us to use any model and make direct comparisons later.

In this data, roughly 46% suffered from heart disease, so that is an initial baseline if we’re interested in accuracy - we could get 54% correct by just guessing the majority class of no disease.

import pandas as pd
import numpy as np

df_heart = pd.read_csv('data/heart_disease_processed.csv')
df_heart_num = pd.read_csv('data/heart_disease_processed_numeric_sc.csv')

# convert appropriate features to categorical
for col in df_heart.select_dtypes(include='object').columns:
    df_heart[col] = df_heart[col].astype('category')

X = df_heart_num.drop(columns=['heart_disease']).to_numpy()
y = df_heart_num['heart_disease'].to_numpy()
prevalence = np.mean(y)
majority = np.max([prevalence, 1 - prevalence])
library(tidyverse)

df_heart = read_csv("data/heart_disease_processed.csv") |> 
    mutate(across(where(is.character), as.factor))

df_heart_num = read_csv("data/heart_disease_processed_numeric_sc.csv")

# for use with mlr3
X_num_df = df_heart_num |> 
    as_tibble() |> 
    mutate(heart_disease = factor(heart_disease)) |> 
    janitor::clean_names() # remove some symbols

7.4 Beat the Baseline

Before getting carried away with models, we should start with a good reference point for performance - a baseline model. The baseline model should serve as a way to gauge how much better your model performs over one that is simpler, probably more computationally efficient, more interpretable, and still viable. It could also be a model that is sufficiently complex to capture something about the data you are exploring, but not as complex as the models you’re also interested in. Take a classification model for example, where we often use a logistic regression as a baseline. It is a viable model to begin answering some questions, but is often too simple to be adequately performant for many situations. We should be able to get better performance with more complex models, or there is little justification for using them.

7.4.1 Why do we do this?

Having a baseline model can help you avoid wasting time and resources implementing more complex tools, and avoid mistakenly thinking performance is better than expected. It is probably rare, but sometimes the relationships between the chosen features and target are mostly or nearly linear and have little interaction. In this case, no amount of fancy modeling will make complex feature-target relationships exist if they don’t already. Furthermore, if our baseline is a more complex model that actually incorporates nonlinear relationships and interactions (e.g. a GAMM), you’ll often find that the more complex models don’t significantly improve on the baseline. As a last example, in time series settings, a moving average can often be a difficult baseline to beat, and so can be a good starting point.

So in general, you may find that the initial baseline model is good enough for present purposes, and you can then move on to other problems to solve, like acquiring data that is more predictive. This is especially true if you are working in a business setting where you have limited time and resources, but it should be kept in mind in many other settings as well.

7.4.2 How much better?

In many settings, it often isn’t enough to merely beat the baseline model. Your model should perform statistically better. For instance, if your advanced model accuracy is 75% and your baseline model’s accuracy is 73%, that’s great. But, it’s good to check if this 2% difference is statistically significant. Remember, accuracy and other metrics are estimates and come with uncertainty1. This means you can get a ranged estimate for them, as well as test whether they are different from one another (see Table 7.1). If the difference is not statistically significant, then it’s possible that there is no difference, and you should probably stick with the baseline model, or maybe try a different approach. Such a result means that the next time you run the model, the baseline may actually perform better, or at least you can’t be sure that it won’t.

Table 7.1: Interval Estimates for Accuracy

Sample Size   Lower Bound   Upper Bound   p-value
1000          −0.06          0.02         0.31
10000         −0.03         −0.01         0.00

Confidence intervals are for the difference in proportions at values of .73 and .75, and p-values are for the difference in proportions.
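
As a rough sketch of this kind of check (not code from the chapter), here is one way to get an interval and p-value for the difference between two accuracies, treating them as simple proportions with a normal approximation. With the .73 and .75 values it reproduces the pattern shown in Table 7.1.

import numpy as np
from scipy import stats

def accuracy_diff_test(acc1, acc2, n1, n2):
    # interval and test for acc2 - acc1, treating the accuracies as proportions
    diff = acc2 - acc1
    se = np.sqrt(acc1 * (1 - acc1) / n1 + acc2 * (1 - acc2) / n2)
    ci = diff + np.array([-1, 1]) * stats.norm.ppf(0.975) * se

    # z-test for the difference, using the pooled proportion
    pooled = (acc1 * n1 + acc2 * n2) / (n1 + n2)
    se_pooled = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = 2 * stats.norm.sf(abs(diff) / se_pooled)
    return ci, p_value

accuracy_diff_test(.75, .73, 1000, 1000)     # roughly (-0.06, 0.02), p ≈ 0.31
accuracy_diff_test(.75, .73, 10000, 10000)   # roughly (-0.03, -0.01), p ≈ 0.00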

That said, in some situations any performance increase is worth it, and even if we can’t be certain a result is statistically better, any sign of improvement is worth pursuing. For example, if you are trying to predict the next word in a sentence, and your baseline is 10% accurate while your complex model is 12% accurate, that’s a 20% relative increase over the baseline, which may be meaningful in terms of user experience. You should still try to show that this is a consistent increase and not a fluke.

7.5 Penalized Linear Models

So let’s get on with some models already! Let’s use the classic linear model as our starting point for ML, just because we can. We show explicitly how to estimate models like lasso and ridge regression in Section 3.8. Those work well as a baseline, and so should be in your ML modeling toolbox.

7.5.1 Elastic Net

Another common linear model approach is elastic net. It combines two techniques: lasso and ridge regression. We will not show how to estimate elastic net by hand here, but all you need to know is that it combines the two penalties - one for lasso and one for ridge - along with a standard objective function for a numeric or categorical target. The relative proportion of the two penalties is controlled by a mixing parameter, and the optimal value for it is determined by cross-validation. So for example, you might end up with a 75% lasso penalty and 25% ridge penalty. In the end though, we’re just going to do a slightly fancier logistic regression!
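
To make the mixing idea a bit more concrete, here is a sketch of the penalty in the glmnet-style parameterization (only the penalty term is written out; it gets added to the usual objective, e.g. the log loss for our binary target). The mixing parameter \(\alpha\) gives the 75%/25% type of split just described, and \(\lambda\) controls the overall amount of penalization:

\[
\text{penalty} = \lambda \left[ \alpha \sum_j |\beta_j| \;+\; \frac{1-\alpha}{2} \sum_j \beta_j^2 \right]
\]

With \(\alpha = 1\) we get the lasso, with \(\alpha = 0\) we get ridge, and the code below uses an even 50/50 mix (\(\alpha = 0.5\)).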

Let’s apply this to the heart disease data. We are only doing simple cross-validation here to get a better performance assessment, but you are more than welcome to tune both the penalty parameter and the mixing ratio as we have demonstrated before (Section 6.7). We’ll revisit hyperparameter tuning towards the end of this chapter.

from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import cross_validate, KFold, cross_val_score
from sklearn.metrics import accuracy_score


model_elastic = LogisticRegression(
    penalty='elasticnet',
    solver='saga',
    l1_ratio=0.5,
    random_state=42,
    max_iter=10000,
    verbose=False,
)

 # use cross-validation to estimate performance
cv_elastic = cross_validate(
    model_elastic,
    X,
    y,
    cv=5,
    scoring='accuracy',
)

# pd.DataFrame(cv_elastic) # default output
Training accuracy:  0.828 
Guessing:  0.539
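
The accuracy summaries shown here can be computed from the cross_validate result and the majority value created in the data setup; a minimal sketch:

# average accuracy across the validation folds, and the majority-class baseline
print('Training accuracy: ', np.mean(cv_elastic['test_score']).round(3))
print('Guessing: ', majority.round(3))
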
library(mlr3verse)

tsk_elastic = as_task_classif(
    X_num_df,
    target = "heart_disease"
)

model_elastic = lrn(
    "classif.cv_glmnet", 
    nfolds = 5, 
    type.measure = "class", 
    alpha = 0.5
)

cv_elastic = resample(
    task       = tsk_elastic,
    learner    = model_elastic,
    resampling = rsmp("cv", folds = 5)
)

# cv_elastic$aggregate(msr('classif.acc')) # default output
Training Accuracy: 0.825
Guessing: 0.539

So we’re starting off with what seems to be a good model. Our average accuracy across the validation sets is definitely better than guessing - an improvement of nearly 30 percentage points, or over 50% in relative terms! Now let’s see if we can do better with other models!

7.5.2 Strengths & Weaknesses

Strengths

  • Intuitive approach. In the end, it’s still just a standard regression model you’re already familiar with.
  • Widely used for many problems. Lasso/Ridge/ElasticNet would be fine to use in any setting you would use linear or logistic regression.
  • A good baseline for tabular data problems.

Weaknesses

  • Does not automatically seek out interactions and non-linearity, and as such will generally not be as predictive as other techniques.
  • Features have to be scaled, or the penalty will reflect the scale of the features more than their actual importance.
  • May have interpretability issues with correlated predictors.

7.5.3 Additional Thoughts

Using penalized regression is a very good default linear model method, and is something to strongly consider even for more interpretation-focused settings. These approaches tend to predict better on new data than their standard, non-regularized counterparts, so they provide a nice balance between interpretability and predictive power. However, in general they are not going to be as strong a method as others typically used in the machine learning world, and may not even be competitive without a lot of feature engineering. If prediction is all you care about, you’ll likely want to try something else.

7.6 Tree-based methods

Let’s move beyond standard linear models and get into a notably different type of approach. Tree-based methods are a class of models that are very popular in machine learning, and for good reason: they work very well. To get a sense of how they work, consider the following classification example where we want to predict a binary target as ‘Yes’ or ‘No’.

A simple classification tree

We have two numeric features, \(X_1\) and \(X_2\). At the start, we take \(X_1\) and make a split at the value of 5. Any observation less than 5 on \(X_1\) goes to the right with a prediction of No. Any observation greater than or equal to 5 goes to the left, where we then split based on values of \(X_2\), and specifically at the value of 3. Any observation less than 3 (and greater than or equal to 5 on \(X_1\)) goes to the right with a prediction of Yes. Any observation greater than or equal to 3 (and greater than or equal to 5 on \(X_1\)) goes to the left with a prediction of No. So in the end, we see that an observation that is relatively lower on \(X_1\), or relatively higher on both, results in a prediction of No. On the other hand, an observation that is high on \(X_1\) and low on \(X_2\) results in a prediction of Yes.

This is a simple example, but it illustrates the core idea of a tree-based model, where the tree reflects the total process, and branches are represented by the splits going down, ultimately ending at leaves where predictions are made. We can also think of the tree as a series of if-then statements, where we start at the top and work our way down until we reach a leaf node, which is a prediction for all observations that qualify for that leaf.
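
As a sketch of that if-then view, the example tree just described amounts to something like the following (a hypothetical function, not code used elsewhere in the chapter):

def predict_yes_no(x1, x2):
    # first split: low values of X1 immediately predict 'No'
    if x1 < 5:
        return 'No'
    # second split, only reached when X1 >= 5: low X2 predicts 'Yes'
    if x2 < 3:
        return 'Yes'
    return 'No'

predict_yes_no(7, 2)  # 'Yes': high X1, low X2
predict_yes_no(3, 2)  # 'No': low X1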

If we just use a single tree, this would be the most interpretable model we could probably come up with. It also incorporates nonlinearities (multiple branches on a single feature), interactions (branches across features), and feature selection all in one (some features may not result in useful splits for the objective). However, a single tree is unfortunately not a very stable model, and so does not generalize well. For example, just a slight change in data, or even just starting with a different feature, might produce a very different tree2.

The solution to that problem is straightforward though - by using the power of a bunch of trees, we can get predictions for each observation from each tree, and then average the predictions, resulting in a more stable estimate. This is the concept behind both random forests and gradient boosting, which can be seen as different algorithms for producing a bunch of trees. They are also considered types of ensemble models, which combine the predictions of multiple models to ultimately produce a single prediction for each observation. In this case each tree serves as a model.
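
Here is a minimal sketch of that averaging idea - essentially a stripped-down, hand-rolled random forest that fits many trees to bootstrap samples and averages their predicted probabilities. It reuses the X and y arrays from the data setup; the number of trees and the depth are arbitrary choices for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
all_probs = []

for _ in range(100):
    boot_idx = rng.integers(0, len(y), len(y))       # bootstrap sample of rows
    tree = DecisionTreeClassifier(max_depth=5).fit(X[boot_idx], y[boot_idx])
    all_probs.append(tree.predict_proba(X)[:, 1])    # each tree's predicted probabilities

avg_prob = np.mean(all_probs, axis=0)                # averaged, more stable predictions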

Random forests (RF) and boosting methods (GB) are very easy to implement, to a point. However, there are typically several hyperparameters to consider for tuning. Here are just a few to think about:

  • Number of trees
  • Learning rate (GB)
  • Maximum depth of each tree
  • Minimum number of observations in each leaf
  • Number of features to consider at each tree/split
  • Regularization parameters (GB)
  • Out-of-bag sample size (RF)

For these models, the number of trees and learning rate play off of each other. Having more trees allows for a smaller rate3, which might improve the model but will take longer to train. However, it can lead to overfitting if other steps are not taken.
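
To see how these two settings play off each other, here is a by-hand sketch of the basic boosting update for a numeric target with squared error loss (a simplification of what packages like lightgbm do). Each tree is fit to the current residuals and its contribution is shrunk by the learning rate, so smaller rates generally require more trees.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def simple_boost(X, y, n_trees=500, learning_rate=0.01, max_depth=3):
    pred = np.full(len(y), y.mean())              # start from a constant prediction
    trees = []
    for _ in range(n_trees):
        residual = y - pred                       # what the current ensemble misses
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        pred += learning_rate * tree.predict(X)   # small step toward the residuals
        trees.append(tree)
    return trees, pred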

The depth of each tree refers to how many levels we allow the model to branch out, and is a crucial parameter. It controls the complexity of each tree, and thus the complexity of the overall model - less depth helps to avoid overfitting, but if the depth is too shallow, you won’t be able to capture the nuances of the data. The minimum number of observations in each leaf is also important for similar reasons.

It’s also generally a good idea to take a random sample of features for each tree (or possibly even each branch), to also help reduce overfitting, but it’s not obvious what proportion to take. The regularization parameters are typically less important in practice, but help reduce overfitting as in other modeling circumstances. As with hyperparameters in other model settings, you’ll use something like cross-validation to settle on final values.

7.6.1 Example with LightGBM

Here is an example of gradient boosting with the heart disease data. Although boosting methods are available in scikit-learn for Python, in general we recommend using the lightgbm or xgboost packages directly for boosting, both of which provide a sklearn-compatible API anyway (as demonstrated). Also, both provide R and Python implementations of the package, making it easy to not lose your place when switching between languages. We’ll use lightgbm here, but xgboost is also a very good option 4.

from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score

model_boost = LGBMClassifier(
    n_estimators=1000,
    learning_rate=1e-3,
    max_depth = 5,
    verbose = -1,
    random_state=42,
)

cv_boost = cross_validate(
    model_boost,
    df_heart.drop(columns='heart_disease'),
    df_heart_num['heart_disease'],
    cv=5,
    scoring='accuracy',
)
Training accuracy:  0.835 
Guessing:  0.539

Note that as of this writing, the mlr3 implementation of lightgbm doesn’t seem to handle factors, even though the lightgbm R package does. If you run into this, you can use the numeric version of the data (X_num_df) instead.

library(mlr3verse)

# for lightgbm, you need mlr3extralearners and lightgbm package installed
# we suggest the latest available from github
# remotes::install_github("mlr-org/mlr3extralearners@*release")
library(mlr3extralearners) 

set.seed(1234)

# Define task
# lightgbm itself can handle factors and missing data, so we use the original
# df_heart here; swap in X_num_df if your mlr3 setup has trouble with factors
tsk_boost = as_task_classif(
    df_heart,                   # can use the 'raw' data
    target = "heart_disease"
)

# Define learner
model_boost = lrn(
    "classif.lightgbm",
    num_iterations = 1000,
    max_depth = 5,
    learning_rate = 1e-3
)

# Cross-validation
cv_boost = resample(
    task       = tsk_boost,
    learner    = model_boost,
    resampling = rsmp("cv", folds = 5)
)
Training Accuracy: 0.804
Guessing: 0.539

So here we have a model that is also performing well, though not significantly better or worse than our elastic net model. For most situations, we’d expect boosting to do better, but this shows why we want a good baseline or simpler model. We’ll revisit hyperparameter tuning using this model later. If you’d like to see an example of how we could implement a form of boosting by hand, see the appendix on boosting.

7.6.2 Strengths & Weaknesses

Random forests and boosting methods, though not new, are still ‘state of the art’ in terms of performance on tabular data like the type we’ve been using for our demos here. As of this writing, you’ll find that it will usually take considerable effort to beat them, though many have tried, especially with deep learning models.

Strengths

  • A single tree is highly interpretable.
  • Easily incorporates features of different types (the scale of numeric features, or using categorical features*, doesn’t matter).
  • Tolerance to irrelevant features.
  • Some tolerance to correlated inputs.
  • Handling of missing values. Missing values are just another value to potentially split on*.

*It’s not clear why most model functions still have no default for this sort of thing in 2024.

Weaknesses

  • Honestly few, but like all techniques, it might be relatively less predictive in certain situations. There is no free lunch.
  • It does take more effort to tune relative to linear model methods.

7.7 Deep Learning and Neural Networks

A neural network

Deep learning has fundamentally transformed the world of data science, and the world itself. It has been used to solve problems in image detection, speech recognition, natural language processing, and more, from assisting with cancer diagnosis to summarizing entire novels. As of now, it is not a panacea for every problem, and is not always the best tool for the job, but it is an approach that should be in your toolbox. Here we’ll provide a brief overview of the key concepts behind neural networks, the underlying approach to deep learning, and then demonstrate how to implement a simple neural network to get things started.

7.7.1 What is a neural network?

Neural networks form the basis of deep learning models. They have actually been around a while - computationally and conceptually going back decades56. Like other models, they are computational tools that help us understand how to get outputs from inputs. However, they weren’t quickly adopted due to computing limitations, similar to the slow adoption of Bayesian methods. But now neural networks, or deep learning more generally, have recently become the go-to method for many problems.

7.7.2 How do they work?

At its core, a neural network can be seen as a series of matrix multiplications and other operations to produce combinations of features, and ultimately a desired output. We’ve been talking about inputs and outputs since the beginning (Section 1.3.2), but neural networks like to put a lot more in between the inputs and outputs than we’ve seen with other models. However, the core operations are often no different than what we’ve done with a basic linear model, and sometimes even simpler! But the combinations of features they produce can represent many aspects of the data that are not easily captured by simpler models.

One notable difference from models we’ve been seeing is that neural networks implement multiple combinations of features, where each combination is referred to as hidden nodes or units7. In a neural network, each feature has a weight, just like in a linear model. These features are multiplied by their weights and then added together. But we actually create multiple such combinations, as depicted in the ‘H’ or ‘hidden’ nodes in the following visualization.

The first hidden layer

The next phase is where things can get more interesting. We take those hidden units and add in nonlinear transformations before moving deeper into the network. The transformations applied are typically referred to as activation functions8. So, the output of the current (typically linear) part is transformed in a way that allows the model to incorporate nonlinearities. While this might sound new, this is just like how we use link functions in generalized linear models (Section 4.2). Furthermore, these multiple combinations also allow us to incorporate interactions between features.
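
Here is a minimal numpy sketch of those operations - one hidden layer of feature combinations, a relu activation, and a sigmoid output. The weights are random placeholders rather than estimated values; with a single hidden node and no hidden activation, this collapses to logistic regression.

import numpy as np

rng = np.random.default_rng(123)
X_demo = rng.normal(size=(5, 3))       # 5 observations, 3 features

W1 = rng.normal(size=(3, 4))           # weights creating 4 hidden nodes
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))           # weights from hidden nodes to the output
b2 = np.zeros(1)

h = np.maximum(X_demo @ W1 + b1, 0)    # linear combinations + relu activation
p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output: probability of the positive class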

But we can go even further! We can add more layers, and more nodes in each layer, to create a deep neural network. We can also add components specific to certain types of processing, have some parts only connected to certain other parts, and more. The complexity really is only limited by our imagination, and computational power! This is what helps make neural networks so powerful - given enough nodes and layers they can potentially approximate any function. Ultimately though, the feature inputs become an output or multiple outputs that can then be assessed in similar ways to other models.

A more complex neural network

Before getting carried away, let’s simplify things a bit by returning to some familiar ground. Consider a logistic regression model. There we take the linear combination of features and weights, and then apply the sigmoid function (inverse logit) to it, and that is the output of the model that we compare to our observed target.

We can revisit a plot we saw earlier (Figure 1.5) to make things more concrete. The input features are \(X_1\), \(X_2\), and \(X_3\), and the output is the probability of a positive outcome of a binary target. The weights are \(w_1\), \(w_2\), and \(w_3\), and the bias9 is \(w_0\). The hidden node is just our linear predictor which we can create via matrix multiplication of the feature matrix and weights. The sigmoid function is the activation function, and the output is the probability of the chosen label.

A logistic regression as a neural network with one hidden layer, one hidden node, and sigmoid activation

This shows that we can actually think of logistic regression as a very simple neural network, with a linear combination of the inputs as a single hidden node and a sigmoid activation function adding the nonlinear transformation. Indeed, the earliest multilayer perceptron models were just composed of multiple layers of logistic regressions!

As noted, you can think of neural networks as nonlinear extensions of linear models. Regression approaches like GAMs and gaussian process regression can be seen as approximations to neural networks (see also Rasmussen and Williams (2005)), bridging the gap between the simpler, more interpretable linear model and the black box of a deep neural network. This brings us back to having a good baseline. If you know some simpler tools that can approximate more complex ones, you can often get ‘good enough’ results with the simpler models.

7.7.3 Trying it out

For simplicity we’ll use similar tools as before. Our model is a multi-layer perceptron (MLP), which is a model like the one we’ve been depicting. It consists of multiple hidden layers of varying sizes, and we can incorporate activation functions as we see fit.

Do know this would be considered a bare minimum approach for a neural network, and generally you’d need to do more. To begin with, you’d want to tune the architecture, or structure of hidden layers. For example, you might want to try more layers, as well as ‘wider’ layers, or more nodes per layer. Also, as noted in the data discussion, we’d usually want to use embeddings for categorical features as opposed to the one-hot approach used here (Section 9.2.2)10.

For our example, we’ll use the data with one-hot encoded features. For our architecture, we’ll use three hidden layers with 200 nodes each. As noted, these and other settings are hyperparameters that you’d normally prefer to tune.

For our demonstration we’ll use sklearn’s builtin MLPClassifier. We set the initial learning rate to 0.001 and make it adaptive, which automatically adjusts the learning rate as the model trains. The relu activation function is the default. We also request Nesterov momentum, a way to help the model avoid local minima (it applies when the sgd solver is used), and a warm start, which allows us to train the model in stages. We set the validation fraction to 20%, the proportion of data set aside for validation when early stopping is enabled. And finally, we use shuffle to randomly select observations for each batch.

from sklearn.neural_network import MLPClassifier

model_mlp = MLPClassifier(
    hidden_layer_sizes=(200, 200, 200),  
    learning_rate='adaptive',
    learning_rate_init=0.001,
    shuffle=True,
    random_state=123,
    warm_start=True,
    nesterovs_momentum=True,    # only applies when solver='sgd' (default solver is 'adam')
    validation_fraction=.2,     # only used when early_stopping=True
    verbose=False,
)

# with the above settings, this will take a few seconds
cv_mlp = cross_validate(
  model_mlp, 
  X, 
  y, 
  cv=5
) 

# pd.DataFrame(cv_mlp) # default output
Training accuracy:  0.818 
Guessing:  0.539

For R, we’ll use mlr3torch, which calls pytorch directly under the hood. We’ll use the same architecture as the Python example, and the relu activation function is again the default. We use adam as the optimizer, a popular choice and also the default for the sklearn approach, and cross entropy as the loss function, which is the same as the log loss objective used in logistic regression and other ML classification models. We use a batch size of 16, the number of observations used for each batch of training, and train for 50 epochs, the number of passes through the entire dataset. We set the predict type to prob (required for log loss), and track both log loss and classification error as metrics. As specified, this took over a minute.

library(mlr3torch)

learner_mlp = lrn(
    "classif.mlp",
    # defining network parameters
    layers = 3,
    d_hidden = 200,
    # training parameters
    batch_size = 16,
    epochs = 50,
    # Defining the optimizer, loss, and callbacks
    optimizer = t_opt("adam", lr = 1e-3),
    loss = t_loss("cross_entropy"),
    # Measures to track
    measures_train = msrs(c("classif.logloss")),
    measures_valid = msrs(c("classif.logloss", "classif.ce")),
    # predict type (required by logloss)
    predict_type = "prob",
    seed = 123
)

tsk_mlp = as_task_classif(
    x = X_num_df,
    target = 'heart_disease'
)

# this will take a few seconds depending on your chosen settings and hardware
cv_mlp = resample(
    task       = tsk_mlp,
    learner    = learner_mlp,
    resampling = rsmp("cv", folds = 5),
)

cv_mlp$aggregate(msr("classif.acc")) # default output
Training Accuracy: 0.842
Guessing: 0.539

This neural network model actually did pretty well, and our accuracy is on par with the other two models. This is somewhat surprising given the nature of the data - a small number of observations with mixed data types - a situation in which neural networks usually don’t do as well as other methods. Just goes to show, you never know until you try!

7.7.4 Strengths & Weaknesses

Strengths

  • Good prediction generally.
  • Incorporates the predictive power of different combinations of inputs.
  • Some tolerance to correlated inputs.
  • Can be added as a component to other deep learning models.

Weaknesses

  • Susceptible to irrelevant features.
  • Doesn’t outperform other methods that are (currently) easier to implement on tabular data.

7.8 A Tuned Example

We noted in the chapter on machine learning concepts that there are often multiple hyperparameters we are concerned with for a given model (Section 6.7). We had hyperparameters for each of the models in this chapter also. For the elastic net model, we might want to tune the penalty parameters and the mixing ratio. For the boosting method, we might want to tune the number of trees, the learning rate, the maximum depth of each tree, the minimum number of observations in each leaf, and the number of features to consider at each tree/split. And for the neural network, we might want to tune the number of hidden layers, the number of nodes in each layer, the learning rate, the batch size, the number of epochs, and the activation function. There is plenty to explore!

Here is an example using the boosting model. We’ll tune the number of trees, the learning rate, the minimum number of observations in each leaf, and the maximum depth of each tree. We’ll use a randomized search across the parameter space to sample from the set of hyperparameters, rather than searching every possible combination as in a grid search. This is a good approach when you have a lot of hyperparameters to tune, and/or when you have a lot of data.

from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import accuracy_score

from lightgbm import LGBMClassifier

# train-test split
X_train, X_test, y_train, y_test = train_test_split(
    df_heart.drop(columns='heart_disease'), 
    df_heart_num['heart_disease'],
    test_size=0.2,
    random_state=42
)

model_boost = LGBMClassifier(
    verbose = -1
)

param_grid = {
    'n_estimators': [500, 1000],
    'learning_rate': [1e-3, 1e-2, 1e-1],
    'max_depth': [3, 5, 7, 9],
    'min_child_samples': [1, 5, 10],
}

# this will take a few seconds
cv_boost_tune = RandomizedSearchCV(
    model_boost, 
    param_grid, 
    n_iter = 10,
    cv = 5, 
    scoring = 'accuracy', 
    n_jobs = -1
)

cv_boost_tune.fit(X_train, y_train)

test_predictions = cv_boost_tune.predict(X_test)
accuracy_score(y_test, test_predictions)

Test Accuracy 0.817 
Guessing:  0.539
set.seed(1234)

library(mlr3verse)
library(rsample)

tsk_lgbm_tune = as_task_classif(
    df_heart,
    target = "heart_disease"
)

split = partition(tsk_lgbm_tune, ratio = .8)

lrn_lgbm = lrn(
    "classif.lightgbm",
    num_iterations = to_tune(c(500, 1000)),
    learning_rate = to_tune(1e-3, 1e-1),
    max_depth = to_tune(c(3, 5, 7, 9)),
    min_data_in_leaf = to_tune(c(1, 5, 10))
)

lgbm_tune = auto_tuner(
    tuner = tnr("random_search"),
    learner = lrn_lgbm,
    resampling = rsmp("cv", folds = 5),
    measure = msr("classif.acc"),
    terminator = trm("evals", n_evals = 10)
)

lgbm_tune$train(tsk_lgbm_tune, row_ids = split$train)
lgbm_tune$predict(tsk_lgbm_tune, row_ids = split$test)$score(msr("classif.acc"))
Test Accuracy: 0.864
Guessing: 0.539

Looks like we’ve done a lot better than guessing. Even if we don’t do better than our previous model, we should feel better that we’ve done our due diligence in trying to find the best set of underlying parameters, rather than just going with defaults or what seems to work best.

7.9 Comparing Models

We can tune all the models and compare them head to head. We first split the data into training and test sets with an 80/20 split. Then with the training data, we tuned each model over different settings:

  • Elastic net: penalty and mixing ratio
  • Boosting: number of trees, learning rate, and maximum depth, etc.
  • Neural network: number of hidden layers, number of nodes in each layer

After this, we used the tuned settings to retrain each model on the complete training data. It isn’t strictly necessary to investigate at this stage in most settings, but we also show the results of 10-fold cross-validation for the already-tuned models, to give a sense of the uncertainty in the error estimates.

Figure 7.1: Cross-validation results for tuned models.

When we look at the performance on the holdout set with our tuned models, we see something you might be surprised about - the simplest model wins! However, none of these results are likely statistically different from each other. As an example, the elastic net model had an accuracy of 0.85, but the interval estimate for such a small sample is very wide - from 0.73 to 0.92. The interval estimate for the difference in accuracy between the elastic net and boosting models is from -0.14 to 0.19¹¹. This was a good example of the importance of having an adequate baseline, and where complexity didn’t really help much, though all our approaches did well.

Table 7.2: Metrics for tuned models on holdout data.

Model         Acc.   TPR    TNR    F1     PPV    NPV
Elastic Net   0.85   0.77   0.91   0.82   0.87   0.84
LGBM          0.82   0.69   0.91   0.77   0.86   0.79
MLP           0.82   0.73   0.88   0.78   0.83   0.81

Some may wonder how the holdout results can be better than the cross-validation results, as they are for the elastic net model. This can definitely happen, and at least in this case probably just reflects the small sample size. The holdout set is a random sample of 20% of the complete data, which is 60 examples. Just a couple different predictions could result in a several percentage point difference in accuracy. Also, this could happen just by chance. In general though, you’d expect the holdout results to be a bit, or even significantly, worse than the cross-validation results.

7.10 Interpretation

When it comes to machine learning, many models we use don’t have an easy interpretation, like with coefficients in a linear model. However, that doesn’t mean we can’t still figure out what’s going on. Let’s use the boosting model as an example.

7.10.1 Feature Importance

The default importance metric for a lightgbm model is the number of splits in which a feature is used across trees, and this will depend notably on the chosen parameters of the best model. But there are other ways to think about what importance means that will be specific to a model, data setting, and ultimate goal of the modeling process. For this data and the model, depending on the settings, you might see that the most important features are age, cholesterol, and max heart rate.

# Get feature importances (number of splits a feature is used in)

best_model = cv_boost_tune.best_estimator_
best_model.feature_importances_

# you remember which feature is which, right? if not, do this:
pd.DataFrame({
    'Feature': best_model.feature_name_,
    'Importance': best_model.feature_importances_
})

R shows the proportion of splits in which a feature is used across trees rather than the raw number.

# Get feature importances
lgbm_tune$learner$importance()
Table 7.3: Top 4 features from an LGBM model.

Feature             Value
num_major_vessels   0.21
age                 0.14
thalassemia         0.09
resting_bp          0.09

Now let’s think about a visual display to aid our understanding. Here we show a partial dependence plot (Section 2.3.6) to see the effects of cholesterol and being male. From this we can see that males are expected to have a higher probability of heart disease, and that cholesterol has a positive relationship with heart disease, though this occurs mostly after the midpoint for cholesterol (shown by the vertical line). The plot shown is a prettier version of what you’d get with the following code, but the model predictions are the same.

from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(
    cv_boost_tune, 
    df_heart.drop(columns='heart_disease'), 
    features=['cholesterol', 'male'], 
    categorical_features=['male'], 
    percentiles=(0, .9),
    grid_resolution=75
)

For R we’ll use the IML package.

library(iml)

prediction = Predictor$new(
    lgbm_tune$model$learner,
    data = df_heart,
    type = 'prob', 
    class = 'yes'
)

effect_dat = FeatureEffect$new(
    prediction, 
    feature = c('cholesterol', 'male'), 
    method = "pdp"
)

effect_dat$plot(show.data = TRUE)
Figure 7.2: Partial dependence plot for cholesterol

7.11 Other ML Models for Tabular Data

When you research classical machine learning models for the kind of data we’ve been exploring, you’ll find a variety of methods. Popular approaches from the past include k-nearest neighbors regression, principal components regression, support vector machines (SVM), and more. You don’t see these used in practice as much though for several reasons:

  • Some, like k-nearest neighbors regression, generally don’t predict as well as other models.
  • Others, like linear discriminant analysis, make strong assumptions about how the data is distributed.
  • Some models, like SVM, tend to work well only with ‘clean’ and well-structured data of the same type.
  • Many of these models are computationally demanding, making them less practical for large datasets.
  • Lastly, some of these models are less interpretable, making it hard to understand their predictions without an obvious gain in performance.

While some of these classical models might still work well in unique situations, when you have tools that can handle a lot of data complexity and predict very well (and usually better) like tree-based methods, there’s not much reason to use the historical alternatives. If you’re interested in learning more about them or think one of them is just ‘neat’, you could potentially use it as a baseline model. Alternatively, you could maybe employ them as part of an ensemble or stacked model, where you combine the predictions of multiple models to produce a single prediction. This is a common approach in machine learning, and is often used in Kaggle competitions.
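
As a sketch of the stacking idea, here is how it could look with sklearn’s StackingClassifier, reusing the X and y arrays from earlier; the choice of components and settings here is arbitrary, not a recommendation.

from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier

stack = StackingClassifier(
    estimators=[
        ('knn', KNeighborsClassifier()),        # an 'older' model as one component
        ('boost', LGBMClassifier(verbose=-1)),  # a stronger tree-based component
    ],
    final_estimator=LogisticRegression(),       # combines the component predictions
    cv=5,
)

# cross_val_score(stack, X, y, cv=5).mean()     # estimate performance as before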

There are also other methods that are more specialized, such as those for text, image, and audio data. We will provide an overview of these elsewhere (Chapter 8). As of this writing, the main research effort for new models for tabular data regards deep learning methods like large language models (LLMs). While typically used for text data, they can be adapted for tabular data as well. They are very powerful, but also computationally expensive. The issue is primarily whether a model can be devised that can consistently beat boosting and other approaches, and while it hasn’t happened yet, there is a good chance it will in the near future. For now, the best approach is to use the best model that works for your data, and to be open to new methods as they come along.

7.12 Wrapping Up

In this chapter we’ve covered a few common models that you can implement with much success in machine learning. You don’t really need much beyond these for tabular data unless your unique data situation somehow requires it. But a couple things are worth mentioning before moving on…

Feature engineering will typically pay off more in performance than the model choice.

Thinking hard about the problem and the data is more important than the model choice.

The best model is simply the one that works best for your situation.

You’ll always get more payoff by coming up with better features to use in the model, as well as just using better data that’s been ‘fixed’ because you’ve done some good exploratory data analysis. Thinking harder about the problem means you will waste less time going down dead ends, and you typically can find better data to use to solve the problem by thinking more clearly about the question at hand. And finally, it’s good to not be stuck on one model, and be willing to use whatever it takes to get things done efficiently.

7.12.1 The Thread

When it comes to machine learning, you can use any model you feel like, and this could be standard statistical models like we’ve covered elsewhere. Both boosting and neural networks, like GAMs and related techniques, can be put under a common heading of basis function models. GAMs with certain types of smooth functions are approximations of gaussian processes, and a gaussian process is equivalent to a neural network with an infinitely wide hidden layer (Neal 1994). Even the most complicated deep learning model typically has components that involve feature combinations and transformations that we use in far simpler models.

7.12.2 Choose Your Own Adventure

If you haven’t had much exposure to statistical approaches we suggest heading to any chapter of Part I. Otherwise, consider an overview of more machine learning techniques (Chapter 8), data (Chapter 9), or causal modeling (Chapter 10).

7.12.3 Additional Resources

Additional resources include those mentioned in Section 6.9.3, but here are some more to consider:

Deep Learning:

7.13 Exercise

Tune a model of your choice to predict whether a movie is good or bad with the movie review data. Use the categorical target, and use one-hot encoded features if needed. Make sure you use a good baseline model for comparison!


  1. There would be far less hype and wasted time if those in ML and DL research simply did this rather than just reporting the chosen metric of their model ‘winning’ against other models. It’s not that hard to do, yet most do not provide any ranged estimate for their metric, let alone test statistical difference from other models. You don’t even have to bootstrap many common metric estimates for binary classification since they are just proportions! It’d also be nice if they used a more meaningful baseline than logistic regression, but that’s a different story.↩︎

  2. A single regression/classification tree actually could serve as a decent baseline model, especially given the interpretability, and modern methods try to make them more stable.↩︎

  3. This is pretty much the same concept as with stochastic gradient in general. Larger learning rates allow for quicker parameter exploration, but may overshoot the optimal value. Smaller learning rates are more conservative, but may take longer to find the optimal value.↩︎

  4. Some also prefer catboost. The authors have not actually been able to practically implement catboost in a setting where it was more predictive or as efficient/speedy as xgboost or lightgbm, but some have had notable success with it.↩︎

  5. Most consider the scientific origin with McCulloch and Pitts (1943).↩︎

  6. On the conceptual side, they served as a rudimentary model of neuronal functioning in the brain, and a way to understand how the brain processes information. The models sprung from the cognitive revolution, a backlash against the behaviorist approach to psychology, and used the computer as a metaphor for how the brain might operate.↩︎

  7. The term ‘hidden’ is used because these nodes are between the inputs and outputs. It does not imply a latent/hidden variable in the sense used in structural equation or measurement models, but there is a lot of common ground. See the connection with principal components analysis for example (Section 8.2.1.2).↩︎

  8. We have multiple options for our activation functions, and probably the most common activation function in deep learning is the rectified linear unit, or ReLU. Other commonly used functions are the sigmoid function, which is exactly the same as what we used in logistic regression, the hyperbolic tangent function, variants of the ReLU, and of course the linear/identity function, which applies no transformation at all.↩︎

  9. It’s not exactly clear why computer scientists chose to call this the bias, but it’s the same as the intercept in a linear model, or conceptually as an offset or constant. It has nothing to do with the word bias as used in every other modeling context.↩︎

  10. A really good tool for a standard MLP type approach with automatic categorical embeddings is fastai’s tabular learner.↩︎

  11. We just used the prop.test function in R for these values with the test being, what proportion of predictions are correct, and are these proportions different? A lot of the metrics people look at from confusion matrices are proportions.↩︎