Saturday, August 30, 2025

Marginal Impact of Hyperparameter Tuning with XGBoost


In many modeling contexts, the XGBoost algorithm reigns supreme. It offers efficiency and effectiveness gains over other tree-based methods and other boosting implementations. The XGBoost algorithm features a laundry list of hyperparameters, though usually only a subset is chosen during the hyperparameter tuning process. In my experience, I've always used a grid search strategy with k-fold cross-validation to identify the optimal combination of hyperparameters, although there are methods for hyperparameter tuning with the hyperopt library that can search the hyperparameter space more systematically.

Through my work building XGBoost models across different projects, I came across the great resource Effective XGBoost by Matt Harrison, a textbook covering XGBoost, including how to tune hyperparameters. Chapter 12 of the book is devoted to tuning hyperparameters using the hyperopt library; however, some natural questions arose while reading the section. The introduction to the chapter gives a high-level overview of how using hyperopt and Bayesian optimization provides a more guided approach to tuning hyperparameters compared to grid search. Still, I was curious: what is going on under the hood?

In addition, as is the case with many tutorials about tuning XGBoost hyperparameters, the ranges for the hyperparameters seemed somewhat arbitrary. Harrison explains that he pulled the list of hyperparameters to be tuned from a talk that data scientist Bradley Boehmke gave (here). Both Harrison and Boehmke provide tutorials for using hyperopt with the same set of hyperparameters, although they use slightly different search spaces for finding an optimal combination. In Boehmke's case, the search space is much larger; for example, he recommends that the maximum depth of each tree (max_depth) be allowed to vary between 1 and 100. Harrison narrowed the ranges he presents in his book considerably, and these two cases led to the question: what is the marginal gain, compared to the marginal increase in time, from expanding the hyperparameter search space when tuning XGBoost models?

This article centers on these two questions. First, we'll explore how hyperopt works when tuning hyperparameters at a slightly deeper level to help build some intuition for what is going on under the hood. Second, we'll explore the tradeoff between large search spaces and narrower search spaces in a rigorous way. I hope to answer these questions so that this can serve as a resource for understanding hyperparameter tuning in the future.

All code for the project can be found on my GitHub page here: https://github.com/noahswan19/XGBoost-Hyperparameter-Evaluation

hyperopt with Tree-Structured Parzen Estimators for Hyperparameter Tuning

In the chapter of his textbook covering hyperopt, Harrison describes the process of using hyperopt for hyperparameter tuning as using "Bayesian optimization" to identify sequential hyperparameter combinations to try during the tuning process.

The high-level description makes it clear why hyperopt is superior to the grid search strategy, but I was curious how this is implemented. What is actually happening when we run the fmin function with the Tree-Structured Parzen Estimator (TPE) algorithm?

Sequential Model-Based Optimization

To start, the TPE algorithm originates from a 2011 paper called "Algorithms for Hyper-Parameter Optimization" written by James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl, the authors of the hyperopt package. The paper begins by introducing Sequential Model-Based Optimization (SMBO) algorithms; the TPE algorithm is one version of this broader SMBO strategy. SMBOs provide a systematic way to choose the next hyperparameters to evaluate, avoiding the brute-force nature of grid search and the inefficiency of random search. The approach involves creating a "surrogate" model for the underlying model we're optimizing (i.e., XGBoost in our case), which we can use to direct the search for optimal hyperparameters in a way that is computationally cheaper than evaluating the underlying model. The algorithm for an SMBO is described in the following image:

Image by author, from Figure 1 of "Algorithms for Hyper-Parameter Optimization" (Bergstra et al.)

There’s numerous symbols right here, so let’s break down each:

  • x* and x: x* represents the hyperparameter combination being tested in a given trial, and x represents a generic hyperparameter combination.
  • f: This is the "fitness function", i.e. the underlying model we are optimizing. Within this algorithm, f(x*) maps a hyperparameter combination x* to the performance of that combination on a validation data set.
  • M_0: The M terms in the algorithm correspond to the "surrogate" model we use to approximate f. Since f is usually expensive to run, we can use a cheaper estimate, M, to help identify which hyperparameter combinations are likely to improve performance.
  • H: The curly H corresponds to the history of hyperparameter combinations searched so far. It is updated on every iteration and is used to fit an updated surrogate model after each iteration.
  • T: This corresponds to the number of trials we use for hyperparameter tuning. It is fairly self-explanatory and corresponds to the max_evals argument of the fmin function from hyperopt.
  • S: The S corresponds to the criterion used to pick a hyperparameter combination to test given a surrogate model. In the hyperopt implementation of the TPE algorithm, S corresponds to the Expected Improvement (EI) criterion, described in the following image.
Image by author, from Equation 1 of "Algorithms for Hyper-Parameter Optimization" (Bergstra et al.)
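
For reference, Equation 1 of the paper (the formula shown in the image above) can be written out as:

    EI_{y^*}(x) = \int_{-\infty}^{\infty} \max(y^* - y, 0) \, p_M(y \mid x) \, dy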

Each iteration, some number of possible hyperparameter combinations are drawn (in the Python hyperopt package, this is set to 24 by default). We'll discuss shortly how TPE determines how these 24 are drawn. These 24 hyperparameter combinations are evaluated using the EI criterion and the surrogate model (which is cheap) to identify the single combination most likely to have the best performance. This is where we see the benefit of the surrogate model: instead of training and evaluating 24 XGBoost models to find the single best hyperparameter combination, we can approximate this with a computationally inexpensive surrogate model. As the name suggests, the formula above corresponds to the expected performance improvement of a hyperparameter combination x:

  • max(y* − y, 0): This represents the actual improvement in performance for a hyperparameter combination x. y* corresponds to the best validation loss we have attained so far; we are aiming to minimize the validation loss, so we are looking for values of y that are less than y*. This means we want to maximize EI in our algorithm.
  • p_M(y|x): This is the piece of the criterion that will be approximated using the surrogate model, and the piece where TPE fits in. It is the probability density over possible values of y given a hyperparameter combination x.

So each round, we take a set of 24 hyperparameter combinations and proceed with the one that maximizes the EI criterion, which uses our surrogate model M.
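
To make the pseudocode concrete, here is a minimal Python sketch of the SMBO loop. The names fitness, fit_surrogate, criterion, and sample_candidates are hypothetical stand-ins for the pieces labeled f, M, S, and the candidate draw above; this is a sketch, not hyperopt's actual implementation.

    def smbo(fitness, fit_surrogate, criterion, sample_candidates, T, n_candidates=24):
        history = []                         # H: (hyperparameters, loss) pairs observed so far
        surrogate = fit_surrogate(history)   # M_0: initial surrogate model
        for _ in range(T):                   # T: max_evals in hyperopt's fmin
            # Score cheap candidate draws with the criterion S (Expected Improvement)
            # under the surrogate model, and keep the most promising combination.
            candidates = sample_candidates(n_candidates)
            x_star = max(candidates, key=lambda x: criterion(x, surrogate))
            loss = fitness(x_star)           # expensive step: train and validate an XGBoost model
            history.append((x_star, loss))
            surrogate = fit_surrogate(history)  # refit the surrogate with the updated history
        return min(history, key=lambda pair: pair[1])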

Where does the TPE algorithm fit in?

The key piece of the SMBO algorithm that varies across implementations is the surrogate model, or how we approximate the performance of hyperparameter combinations. Using the EI criterion, the surrogate model needs to estimate the density function p(y|x). The paper mentioned above introduces one method, the Gaussian Process approach, which models p(y|x) directly, but the TPE approach (which is more commonly used for XGBoost hyperparameter optimization) instead approximates p(x|y) and p(y). This approach follows from Bayes' theorem:

Image by author
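
Written out, the identity in the image above is just Bayes' theorem applied to the surrogate density:

    p(y \mid x) = \frac{p(x \mid y) \, p(y)}{p(x)}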

The TPE algorithm splits p(x|y) into a piecewise combination of two distributions:

  • l(x) if y < y*
  • g(x) if y ≥ y*

These two distributions have an intuitive interpretation: l(x) is the distribution of hyperparameters associated with models that have a lower loss (better) than the best model so far, while g(x) is the distribution of hyperparameters associated with models that have a higher loss (worse) than the best model so far. This expression for p(x|y) is substituted into the equation for EI in the paper, and a mathematical derivation (too verbose to break down fully here) arrives at the fact that maximizing EI is equivalent to picking points that are more likely under l(x) and less likely under g(x).
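
For completeness, the end result of that derivation (writing γ for p(y < y*), the fraction of observations assigned to l(x)) is that EI is proportional to the following quantity, so maximizing EI amounts to maximizing the ratio l(x)/g(x):

    EI_{y^*}(x) \propto \left( \gamma + \frac{g(x)}{l(x)} (1 - \gamma) \right)^{-1}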

So how does this work in practice? When using hyperopt, we use the fmin function and supply the tpe.suggest algorithm to specify that we want to use the TPE algorithm. We supply a space of hyperparameters where each parameter is associated with a uniform or log-uniform distribution. These initial distributions are used to initialize l(x) and g(x) and provide a prior distribution for l(x) and g(x) while working with a small number of initial trials. By default (the n_startup_jobs parameter of tpe.suggest), hyperopt runs 20 trials by randomly sampling hyperparameter combinations from the distributions supplied for the space parameter of fmin. For each of the 20 trials, an XGBoost model is run and a validation loss obtained.
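
As a concrete illustration, here is a minimal, self-contained example of calling fmin with tpe.suggest. The toy dataset and hyperparameter ranges are purely illustrative (not the search space used later in this article), and the n_startup_jobs, n_EI_candidates, and gamma keyword arguments shown are tpe.suggest's defaults, passed explicitly via functools.partial only to make them visible.

    from functools import partial

    import numpy as np
    import xgboost as xgb
    from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
    from sklearn.datasets import make_classification
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    # Toy data standing in for the real task.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

    # Illustrative search space: each hyperparameter gets a uniform or log-uniform prior.
    space = {
        "max_depth": hp.quniform("max_depth", 1, 100, 1),
        "learning_rate": hp.loguniform("learning_rate", np.log(0.001), np.log(1.0)),
        "subsample": hp.uniform("subsample", 0.5, 1.0),
    }

    def objective(params):
        # Train an XGBoost model with the proposed combination and return the validation log loss.
        model = xgb.XGBClassifier(
            n_estimators=100,
            max_depth=int(params["max_depth"]),
            learning_rate=params["learning_rate"],
            subsample=params["subsample"],
            eval_metric="logloss",
        )
        model.fit(X_train, y_train)
        loss = log_loss(y_val, model.predict_proba(X_val)[:, 1])
        return {"loss": loss, "status": STATUS_OK}

    best = fmin(
        fn=objective,
        space=space,
        algo=partial(tpe.suggest, n_startup_jobs=20, n_EI_candidates=24, gamma=0.25),
        max_evals=200,
        trials=Trials(),
    )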

The 20 observations are then split so that two subsets are used to build non-parametric densities for l(x) and g(x). Subsequent observations are used to update these distributions. The densities are estimated using a non-parametric method (which I'm not qualified to describe fully) involving the prior distributions for each hyperparameter (that we specified) and individual distributions for each observation in the trial history. Observations are split into subsets using a rule that changes with the total number of trials run; the "n" observations with the lowest loss are used for l(x), with the remaining observations used for g(x). The "n" is determined by multiplying a parameter gamma (default for tpe.suggest is 0.25) by the square root of the number of trials and rounding up; however, a maximum for "n" is set at 25, so l(x) will be parameterized with at most 25 values. If we use the default settings for tpe.suggest, then the best two observations (0.25 * sqrt(20) = 1.12, rounded up to 2) from the initial trials are used to parameterize l(x), with the remaining 18 used for g(x). The 0.25 value is the gamma parameter of tpe.suggest, which can be changed if desired. Looking back to the pseudocode for the SMBO algorithm and the formula for EI, if n observations are used to parameterize l(x), then the (n+1)th observation is the threshold value y*.
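
As a quick check on that arithmetic, the splitting rule described above can be written as a small helper (this mirrors the rule as described here rather than calling hyperopt's internal code):

    import math

    def n_for_l(num_trials, gamma=0.25, cap=25):
        # Number of lowest-loss observations used to parameterize l(x);
        # the remaining observations parameterize g(x).
        return min(math.ceil(gamma * math.sqrt(num_trials)), cap)

    print(n_for_l(20))  # ceil(0.25 * sqrt(20)) = ceil(1.118...) = 2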

Once l(x) and g(x) are instantiated using the startup trials, we can move forward with each evaluation of our objective function for the number of max_evals that we specify for fmin. For each iteration, a set of candidate hyperparameter combinations (24 by default in tpe.suggest, but configurable with n_EI_candidates) is generated by taking random draws from l(x). Each of these combinations is evaluated using the ratio l(x)/g(x); the combination that maximizes this ratio is chosen as the combination to use for the iteration. The ratio increases for hyperparameter combinations that are either (1) more likely to be associated with low losses or (2) unlikely to be associated with high losses (which drives exploration). This process of choosing the best candidate corresponds to using the surrogate model with EI, as discussed when looking at the pseudocode for an SMBO.
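
Schematically, the per-iteration candidate selection looks like the following sketch, where sample_from_l, l_density, and g_density are hypothetical stand-ins for the non-parametric densities hyperopt maintains internally:

    def suggest_next(sample_from_l, l_density, g_density, n_candidates=24):
        # Draw candidate combinations from l(x) and keep the one that maximizes
        # l(x)/g(x), i.e. likely to be good and unlikely to be bad.
        candidates = [sample_from_l() for _ in range(n_candidates)]
        return max(candidates, key=lambda x: l_density(x) / g_density(x))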

An XGBoost model is then trained with the top candidate for the iteration; a loss value is obtained, and the data point (x*, f(x*)) is used to update the surrogate model (l(x) and g(x)) to continue the optimization.

Marginal Impact of Hyperparameter Tuning

Now, with this background on how the hyperopt library can be used in the hyperparameter tuning process, we move to the question of how using wider distributions impacts model performance. When trying to compare the performance of models trained on large search spaces against those trained on narrower search spaces, the immediate question is how to create the narrower search space. For example, the presentation from Boehmke advises using a uniform distribution from 1 to 100 for the max_depth hyperparameter. XGBoost models tend to generalize better when combining a large number of weak learners, but does that mean we narrow the distribution to a minimum of 1 and a maximum of 50? We may have some general understanding from others' work to intuitively narrow the space, but can we find a way to analytically narrow the search space?

The approach proposed in this article involves running a set of shorter hyperparameter tuning trials to narrow the search space based on shallow searches of a wider hyperparameter space. The wider search space we use comes from slide 20 of Boehmke's aforementioned presentation (here). Instead of running hyperparameter tuning on a wide search space for 1,000 rounds of hyperparameter testing, we'll run 20 independent trials with 25 rounds of hyperparameter testing each. We'll narrow the search space using percentile values for each hyperparameter from the trial results. With the percentiles, we'll run a final search for 200 rounds using the narrower hyperparameter search space, where the distribution we provide for each hyperparameter is given a maximum and minimum from the percentile values we see in the trials.

For example, say we run our 20 trials and get 20 optimal values for max_depth from the shallow searches. We choose to narrow the search space for max_depth from the uniform distribution from 1 to 100 to a uniform distribution running from the 10th percentile value for max_depth across our trials to the 90th percentile value. We'll run a few different models varying the percentiles we use, to compare more and less aggressive narrowing strategies.
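
A minimal sketch of that narrowing step might look like the following, assuming best_trial_params holds the best combination found by each of the 20 short trials; in practice, integer-valued hyperparameters like max_depth would use hp.quniform rather than hp.uniform.

    import numpy as np
    from hyperopt import hp

    def narrowed_space(best_trial_params, lower_pct=10, upper_pct=90):
        # best_trial_params: list of dicts, e.g. [{"max_depth": 12, ...}, ...],
        # one entry per shallow trial on the wide search space.
        space = {}
        for name in best_trial_params[0]:
            values = [params[name] for params in best_trial_params]
            lo, hi = np.percentile(values, [lower_pct, upper_pct])
            space[name] = hp.uniform(name, lo, hi)
        return space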

Models produced using the trial-based strategy require 700 evaluations of hyperparameter combinations (500 from the trials and 200 from the final search). We'll compare the performance of these models against one tuned for 1,000 hyperparameter evaluations on the wider space and one tuned for 700 hyperparameter evaluations on the wider space. We're curious whether this method of narrowing the hyperparameter search space leads to faster convergence toward the optimal hyperparameter combination or whether the narrowing negatively impacts results.

We test this strategy on a task from a previous project involving simulated tennis match results (more information in the article I wrote here). Part of that project involved building post-match win probability models using high-level information about each match and statistics for a given player in the match that followed a truncated normal distribution; this is the task used to test the hyperparameter tuning strategy here. More information about the specific task can be found in that article and in the code linked at the start of this article. At a high level, we are trying to take information about what happened in a match and predict a binary win/loss for the match; one could use a post-match win probability model to identify players who may be overperforming their statistical performance and might be candidates for regression. To train each XGBoost model, we use log loss/cross-entropy loss as the loss function. The data for the task comes from Jeff Sackmann's GitHub page here: https://github.com/JeffSackmann/tennis_atp. Anyone interested in tennis or tennis data should check out his GitHub and excellent website, tennisabstract.com.

For this task and our strategy, we have six models: two trained on the full search space and four trained on a narrower space. They are titled as follows in the charts:

  • "Full Search": This is the model trained for 1,000 hyperparameter evaluations across the full hyperparameter search space.
  • "XX–XX Percentile": These models are trained on a narrower search space for 200 evaluations after the 500 rounds of trial evaluations on the full hyperparameter search space. The "10–90 Percentile" model, for example, trains on a hyperparameter search space where the distribution for each hyperparameter is determined by the 10th and 90th percentile values from the 20 trials.
  • "Shorter Search": This is the model trained for 700 hyperparameter evaluations across the full hyperparameter search space. We use this to compare the trial strategy against the wider search space when allotting the same number of hyperparameter evaluations to both methods.

A log of training the models is included on the GitHub page linked at the top of the article, which includes the hyperparameters found at each step of the process given the random seeds used, along with the time it took to run each model on my laptop. It also provides the results of the 20 trials, so you can see how each narrowed search space would be parameterized. These times are listed below:

  • Full Search: ~6000 seconds
  • 10–90 Percentile: ~4300 seconds (~3000 seconds for trials, ~1300 for narrower search)
  • 20–80 Percentile: ~3800 seconds (~3000 seconds for trials, ~800 for narrower search)
  • 30–70 Percentile: ~3650 seconds (~3000 seconds for trials, ~650 for narrower search)
  • 40–60 Percentile: ~3600 seconds (~3000 seconds for trials, ~600 for narrower search)
  • Shorter Search: ~4600 seconds

The timing doesn't scale 1:1 with the total number of evaluations used; the trial-strategy models tend to take less time to train given the same number of evaluations, with narrower searches taking even less time. The next question is whether this time saving impacts model performance at all. We'll begin by looking at validation log loss across the models.

Image by author

Very little distinguishes the log losses across the models, but we'll zoom in a bit to get a visual look at the differences. We present the full-range y-axis first to contextualize the minor differences in the log losses.

Image by author

Okay, this is better, but we'll zoom in one more time to see the trend most clearly.

Image by author

We find that the 20–80 Percentile model attains the best validation log loss, slightly better than the Full Search and Shorter Search methods. The other percentile models all perform slightly worse than the wider-search models, but the differences are minor across the board. We'll look now at the differences in accuracy between the models.

Image by author

As with the log losses, we see very minor differences and choose to zoom in to see a more definitive trend.

Image by author

The Full Search model attains the best accuracy of any model, but the 10–90 Percentile and 20–80 Percentile models both beat the Shorter Search model over the same number of evaluations. This is the kind of tradeoff I hoped to identify, with the caveat that this result is task-specific and on a very small scale.

The results using log loss and accuracy suggest a possible efficiency-performance tradeoff when choosing how big to make the XGBoost hyperparameter search space. We found that models trained on a narrower search space can outperform or compare to models trained on wider search spaces while taking less time to train overall.

Additional Work

The code provided in the prior section should provide the modularity to run this test against different tasks without difficulty; the results for this classification task may differ from those of other tasks. Changing the number of evaluations run when exploring the hyperparameter search space, or the number of trials run to get percentile ranges, could lead to different conclusions from those found here. This work also assumed the set of hyperparameters to tune; another question I'd be interested in exploring is the marginal effect of including more hyperparameters to tune (e.g., colsample_bylevel) on the performance of an XGBoost model.

References

(used with permission)

[2] M. Harrison, Effective XGBoost (2023), MetaSnake

[3] B. Boehmke, "Advanced XGBoost Hyperparameter Tuning on Databricks" (2021), GitHub

[4] J. Bergstra, R. Bardenet, Y. Bengio, B. Kégl, “Algorithms for Hyper-Parameter Optimization” (2011), NeurIPS 2011
