Decision tree algorithms have always fascinated me. They are easy to implement and achieve good results on various classification and regression tasks. Combined with boosting, decision trees are still state-of-the-art in many applications.
Frameworks such as sklearn, LightGBM, XGBoost, and CatBoost have done a very good job until today. However, in the past few months, I have been missing support for Arrow datasets. While LightGBM has recently added support for that, it is still missing in most other frameworks. The Arrow data format could be a great fit for decision trees since it has a columnar structure optimized for efficient data processing. Pandas already added support for it, and Polars also uses its advantages.
Polars has shown some significant performance advantages over most other data frameworks. It uses the data efficiently and avoids copying it unnecessarily. It also provides a streaming engine that allows processing of datasets larger than memory. This is why I decided to use Polars as the backend for building a decision tree from scratch.
The goal is to explore the advantages of using Polars for decision trees in terms of memory and runtime. And, of course, to learn more about Polars, efficiently defining expressions, and the streaming engine.
The code for the implementation can be found in this repository.
Code overview
To get a first overview of the code, I will show the structure of the DecisionTreeClassifier first:
The first important thing can be seen in the imports. It was important to me to keep the import section clean and with as few dependencies as possible. This was successful, with dependencies only on polars, pickle, and typing.
The init method allows defining whether the Polars streaming engine should be used. The max_depth of the tree can also be set here. Another feature is the definition of categorical columns. These are handled differently than numerical features, using a target encoding.
It is possible to save and load the decision tree model. It is represented as a nested dict and can be saved to disk as a pickled file.
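The save and load methods are not shown in detail below; a minimal sketch of what they could look like, assuming the nested-dict tree and the categorical mappings are the only state that needs to be persisted:

def save_model(self, path: str) -> None:
    # Persist the nested-dict tree and the categorical mappings as a pickle file.
    with open(path, "wb") as f:
        pickle.dump({"tree": self.tree, "categorical_mappings": self.categorical_mappings}, f)

def load_model(self, path: str) -> None:
    # Restore the model state from a pickled file.
    with open(path, "rb") as f:
        state = pickle.load(f)
    self.tree = state["tree"]
    self.categorical_mappings = state["categorical_mappings"]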
The Polars magic happens in the fit() and _build_tree() methods. These accept both LazyFrames and DataFrames to support in-memory processing and streaming.
There are two prediction methods available, predict() and predict_many(). The predict() method can be used for a small number of examples, and the data needs to be provided as dicts. If we have a big test set, it is more efficient to use the predict_many() method. Here, the data can be provided as a Polars DataFrame or LazyFrame.
import pickle
from typing import Iterable, List, Union

import polars as pl


class DecisionTreeClassifier:
    def __init__(self, streaming=False, max_depth=None, categorical_columns=None):
        ...

    def save_model(self, path: str) -> None:
        ...

    def load_model(self, path: str) -> None:
        ...

    def apply_categorical_mappings(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> Union[pl.DataFrame, pl.LazyFrame]:
        ...

    def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
        ...

    def predict_many(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> List[Union[int, float]]:
        ...

    def predict(self, data: Iterable[dict]):
        ...

    def get_majority_class(self, df: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> str:
        ...

    def _build_tree(
        self,
        data: Union[pl.DataFrame, pl.LazyFrame],
        feature_names: list[str],
        target_name: str,
        unique_targets: list[int],
        depth: int,
    ) -> dict:
        ...
Fitting the tree
To train the decision tree classifier, the fit() method needs to be used.
def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
    """
    Fit method to train the decision tree.

    :param data: Polars DataFrame or LazyFrame containing the training data.
    :param target_name: Name of the target column
    """
    columns = data.collect_schema().names()
    feature_names = [col for col in columns if col != target_name]

    # Shrink dtypes
    data = data.select(pl.all().shrink_dtype()).with_columns(
        pl.col(target_name).cast(pl.UInt64).shrink_dtype().alias(target_name)
    )

    # Prepare categorical columns with target encoding
    if self.categorical_columns:
        categorical_mappings = {}
        for categorical_column in self.categorical_columns:
            categorical_mappings[categorical_column] = {
                value: index
                for index, value in enumerate(
                    data.lazy()
                    .group_by(categorical_column)
                    .agg(pl.col(target_name).mean().alias("avg"))
                    .sort("avg")
                    .collect(streaming=self.streaming)[categorical_column]
                )
            }

        self.categorical_mappings = categorical_mappings
        data = self.apply_categorical_mappings(data)

    unique_targets = data.select(target_name).unique()
    if isinstance(unique_targets, pl.LazyFrame):
        unique_targets = unique_targets.collect(streaming=self.streaming)
    unique_targets = unique_targets[target_name].to_list()

    self.tree = self._build_tree(data, feature_names, target_name, unique_targets, depth=0)
It receives a Polars LazyFrame or DataFrame that contains all features and the target column. To identify the target column, the target_name needs to be provided.
Polars provides a convenient way to optimize the memory usage of the data.
data.select(pl.all().shrink_dtype())
With that, all columns are selected and evaluated. It converts the dtype of each column to the smallest possible type that still fits its values.
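As a small illustration with made-up values, shrink_dtype() downcasts each column based on its minimum and maximum:

import polars as pl

df = pl.DataFrame({"age_years": [29, 41, 64], "cardio": [0, 1, 1]})
print(df.dtypes)                                  # [Int64, Int64]
print(df.select(pl.all().shrink_dtype()).dtypes)  # [Int8, Int8]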
The categorical encoding
To encode categorical values, a target encoding is used. For that, all instances of a categorical feature are aggregated, and the average target value is calculated. Then, the instances are sorted by the average target value, and a rank is assigned. This rank is used as the representation of the feature value.
(
    data.lazy()
    .group_by(categorical_column)
    .agg(pl.col(target_name).mean().alias("avg"))
    .sort("avg")
    .collect(streaming=self.streaming)[categorical_column]
)
Since it is possible to provide Polars DataFrames and LazyFrames, I use data.lazy() first. If the given data is a DataFrame, it is converted to a LazyFrame. If it is already a LazyFrame, it simply returns itself. With that trick, it is possible to ensure that the data is processed in the same way for LazyFrames and DataFrames and that the collect() method can be used, which is only available for LazyFrames.
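A small sketch of that pattern outside the classifier (illustrative only):

import polars as pl

def unique_values(data, column):
    # .lazy() is a no-op on a LazyFrame and converts a DataFrame,
    # so .collect() is always available afterwards.
    return data.lazy().select(column).unique().collect()[column].to_list()

unique_values(pl.DataFrame({"gluc": [1, 2, 2, 3]}), "gluc")   # works on a DataFrame
unique_values(pl.LazyFrame({"gluc": [1, 2, 2, 3]}), "gluc")   # and on a LazyFrame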
To illustrate the outcome of the calculations in the different steps of the fitting process, I apply it to a dataset for heart disease prediction. It can be found on Kaggle and is published under the Database Contents License.
Here is an example of the categorical feature representation for the glucose levels:
┌──────┬──────┬──────────┐
│ rank ┆ gluc ┆ avg      │
│ ---  ┆ ---  ┆ ---      │
│ u32  ┆ i8   ┆ f64      │
╞══════╪══════╪══════════╡
│ 0    ┆ 1    ┆ 0.476139 │
│ 1    ┆ 2    ┆ 0.586319 │
│ 2    ┆ 3    ┆ 0.620972 │
└──────┴──────┴──────────┘
For each of the glucose levels, the probability of having a heart disease is calculated. This is sorted and then ranked so that each of the levels is mapped to a rank value.
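Based on that table, the stored mapping for the glucose feature is simply the rank per level, and applying it later boils down to a replace expression (a simplified sketch of what apply_categorical_mappings does):

mapping = {1: 0, 2: 1, 3: 2}  # rank per glucose level, taken from the table above
data = data.with_columns(pl.col("gluc").replace(mapping).cast(pl.UInt32))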
Getting the target values
As the last part of the fit() method, the unique target values are determined.
unique_targets = data.select(target_name).unique()
if isinstance(unique_targets, pl.LazyFrame):
    unique_targets = unique_targets.collect(streaming=self.streaming)
unique_targets = unique_targets[target_name].to_list()

self.tree = self._build_tree(data, feature_names, target_name, unique_targets, depth=0)
This serves as the last preparation before calling the _build_tree() method recursively.
Building the tree
After the data is prepared in the fit() method, the _build_tree() method is called. This is done recursively until a stopping criterion is met, e.g., the max depth of the tree is reached. The first call is executed from the fit() method with a depth of zero.
def _build_tree(
    self,
    data: Union[pl.DataFrame, pl.LazyFrame],
    feature_names: list[str],
    target_name: str,
    unique_targets: list[int],
    depth: int,
) -> dict:
    """
    Builds the decision tree recursively.
    If max_depth is reached, returns a leaf node with the majority class.
    Otherwise, finds the best split and creates internal nodes for left and right children.

    :param data: The dataframe to evaluate.
    :param feature_names: Name of the feature columns.
    :param target_name: Name of the target column.
    :param unique_targets: unique target values.
    :param depth: The current depth of the tree.

    :return: A dictionary representing the node.
    """
    if self.max_depth is not None and depth >= self.max_depth:
        return {"type": "leaf", "value": self.get_majority_class(data, target_name)}

    # Make data lazy here to avoid that it is evaluated in each loop iteration.
    data = data.lazy()

    # Evaluate entropy per feature:
    information_gain_dfs = []
    for feature_name in feature_names:
        feature_data = data.select([feature_name, target_name]).filter(pl.col(feature_name).is_not_null())
        feature_data = feature_data.rename({feature_name: "feature_value"})

        # No streaming (yet)
        information_gain_df = (
            feature_data.group_by("feature_value")
            .agg(
                [
                    pl.col(target_name)
                    .filter(pl.col(target_name) == target_value)
                    .len()
                    .alias(f"class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [pl.col(target_name).len().alias("count_examples")]
            )
            .sort("feature_value")
            .select(
                [
                    pl.col(f"class_{target_value}_count").cum_sum().alias(f"cum_sum_class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [
                    pl.col(f"class_{target_value}_count").sum().alias(f"sum_class_{target_value}_count")
                    for target_value in unique_targets
                ]
                + [
                    pl.col("count_examples").cum_sum().alias("cum_sum_count_examples"),
                    pl.col("count_examples").sum().alias("sum_count_examples"),
                ]
                + [
                    # From previous select
                    pl.col("feature_value"),
                ]
            )
            .filter(
                # At least one example available
                pl.col("sum_count_examples")
                > pl.col("cum_sum_count_examples")
            )
            .select(
                [
                    (pl.col(f"cum_sum_class_{target_value}_count") / pl.col("cum_sum_count_examples")).alias(
                        f"left_proportion_class_{target_value}"
                    )
                    for target_value in unique_targets
                ]
                + [
                    (
                        (pl.col(f"sum_class_{target_value}_count") - pl.col(f"cum_sum_class_{target_value}_count"))
                        / (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
                    ).alias(f"right_proportion_class_{target_value}")
                    for target_value in unique_targets
                ]
                + [
                    (pl.col(f"sum_class_{target_value}_count") / pl.col("sum_count_examples")).alias(
                        f"parent_proportion_class_{target_value}"
                    )
                    for target_value in unique_targets
                ]
                + [
                    # From previous select
                    pl.col("cum_sum_count_examples"),
                    pl.col("sum_count_examples"),
                    pl.col("feature_value"),
                ]
            )
            .select(
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"left_proportion_class_{target_value}")
                                * pl.col(f"left_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("left_entropy"),
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"right_proportion_class_{target_value}")
                                * pl.col(f"right_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("right_entropy"),
                (
                    -1
                    * pl.sum_horizontal(
                        [
                            (
                                pl.col(f"parent_proportion_class_{target_value}")
                                * pl.col(f"parent_proportion_class_{target_value}").log(base=2)
                            ).fill_nan(0.0)
                            for target_value in unique_targets
                        ]
                    )
                ).alias("parent_entropy"),
                # From previous select
                pl.col("cum_sum_count_examples"),
                pl.col("sum_count_examples"),
                pl.col("feature_value"),
            )
            .select(
                (
                    pl.col("cum_sum_count_examples") / pl.col("sum_count_examples") * pl.col("left_entropy")
                    + (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
                    / pl.col("sum_count_examples")
                    * pl.col("right_entropy")
                ).alias("child_entropy"),
                # From previous select
                pl.col("parent_entropy"),
                pl.col("feature_value"),
            )
            .select(
                (pl.col("parent_entropy") - pl.col("child_entropy")).alias("information_gain"),
                # From previous select
                pl.col("parent_entropy"),
                pl.col("feature_value"),
            )
            .filter(pl.col("information_gain").is_not_nan())
            .sort("information_gain", descending=True)
            .head(1)
            .with_columns(feature=pl.lit(feature_name))
        )
        information_gain_dfs.append(information_gain_df)

    if isinstance(information_gain_dfs[0], pl.LazyFrame):
        information_gain_dfs = pl.collect_all(information_gain_dfs, streaming=self.streaming)

    information_gain_dfs = pl.concat(information_gain_dfs, how="vertical_relaxed").sort(
        "information_gain", descending=True
    )

    information_gain = 0
    if len(information_gain_dfs) > 0:
        best_params = information_gain_dfs.row(0, named=True)
        information_gain = best_params["information_gain"]

    if information_gain > 0:
        left_mask = data.select(filter=pl.col(best_params["feature"]) <= best_params["feature_value"])
        if isinstance(left_mask, pl.LazyFrame):
            left_mask = left_mask.collect(streaming=self.streaming)
        left_mask = left_mask["filter"]

        # Split data
        left_df = data.filter(left_mask)
        right_df = data.filter(~left_mask)

        left_subtree = self._build_tree(left_df, feature_names, target_name, unique_targets, depth + 1)
        right_subtree = self._build_tree(right_df, feature_names, target_name, unique_targets, depth + 1)

        if isinstance(data, pl.LazyFrame):
            target_distribution = (
                data.select(target_name)
                .collect(streaming=self.streaming)[target_name]
                .value_counts()
                .sort(target_name)["count"]
                .to_list()
            )
        else:
            target_distribution = data[target_name].value_counts().sort(target_name)["count"].to_list()

        return {
            "type": "node",
            "feature": best_params["feature"],
            "threshold": best_params["feature_value"],
            "information_gain": best_params["information_gain"],
            "entropy": best_params["parent_entropy"],
            "target_distribution": target_distribution,
            "left": left_subtree,
            "right": right_subtree,
        }
    else:
        return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
This method is the heart of building the tree, and I will explain it step by step. First, when entering the method, it is checked whether the max depth stopping criterion is met.
if self.max_depth is not None and depth >= self.max_depth:
    return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
If the current depth is equal to or greater than the max_depth, a node of type leaf is returned. The value of the leaf corresponds to the majority class of the data. This is calculated as follows:
def get_majority_class(self, df: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> str:
    """
    Returns the majority class of a dataframe.

    :param df: The dataframe to evaluate.
    :param target_name: Name of the target column.
    :return: majority class.
    """
    majority_class = df.group_by(target_name).len().filter(pl.col("len") == pl.col("len").max()).select(target_name)
    if isinstance(majority_class, pl.LazyFrame):
        majority_class = majority_class.collect(streaming=self.streaming)
    return majority_class[target_name][0]
To get the majority class, the count of rows per target is determined by grouping over the target column and aggregating with len(). The target value that is present in most of the rows is returned as the majority class.
Information Gain as Splitting Criterion
To find a good split of the data, the information gain is used. To get the information gain, the parent entropy and the child entropy have to be calculated.

Calculating the Information Gain in Polars
The information gain is calculated for each feature value that is present in a feature column.
information_gain_df = (
    feature_data.group_by("feature_value")
    .agg(
        [
            pl.col(target_name)
            .filter(pl.col(target_name) == target_value)
            .len()
            .alias(f"class_{target_value}_count")
            for target_value in unique_targets
        ]
        + [pl.col(target_name).len().alias("count_examples")]
    )
    .sort("feature_value")
The feature values are grouped, and the count of each of the target values is assigned to them. Additionally, the total count of rows for each feature value is stored as count_examples. In the last step, the data is sorted by feature_value. This is needed to calculate the splits in the next step.
For the heart disease dataset, after this first calculation step, the data looks like this:
┌───────────────┬───────────────┬───────────────┬────────────────┐
│ feature_value ┆ class_0_count ┆ class_1_count ┆ count_examples │
│ --- ┆ --- ┆ --- ┆ --- │
│ i8 ┆ u32 ┆ u32 ┆ u32 │
╞═══════════════╪═══════════════╪═══════════════╪════════════════╡
│ 29 ┆ 2 ┆ 0 ┆ 2 │
│ 30 ┆ 1 ┆ 0 ┆ 1 │
│ 39 ┆ 1068 ┆ 331 ┆ 1399 │
│ 40 ┆ 975 ┆ 263 ┆ 1238 │
│ 41 ┆ 1052 ┆ 438 ┆ 1490 │
│ … ┆ … ┆ … ┆ … │
│ 60 ┆ 1054 ┆ 1460 ┆ 2514 │
│ 61 ┆ 695 ┆ 1408 ┆ 2103 │
│ 62 ┆ 566 ┆ 1125 ┆ 1691 │
│ 63 ┆ 572 ┆ 1517 ┆ 2089 │
│ 64 ┆ 479 ┆ 1217 ┆ 1696 │
└───────────────┴───────────────┴───────────────┴────────────────┘
Here, the feature age_years is processed. Class 0 stands for "no heart disease," and class 1 stands for "heart disease." The data is sorted by the age-in-years feature, and the columns contain the count of class 0, class 1, and the total count of examples with the respective feature value.
In the next step, the cumulative sum over the count of classes is calculated for each feature value.
.select(
    [
        pl.col(f"class_{target_value}_count").cum_sum().alias(f"cum_sum_class_{target_value}_count")
        for target_value in unique_targets
    ]
    + [
        pl.col(f"class_{target_value}_count").sum().alias(f"sum_class_{target_value}_count")
        for target_value in unique_targets
    ]
    + [
        pl.col("count_examples").cum_sum().alias("cum_sum_count_examples"),
        pl.col("count_examples").sum().alias("sum_count_examples"),
    ]
    + [
        # From previous select
        pl.col("feature_value"),
    ]
)
.filter(
    # At least one example available
    pl.col("sum_count_examples")
    > pl.col("cum_sum_count_examples")
)
The intuition behind it is that when a split is executed at a specific feature value, it includes the target value counts of all smaller feature values. To be able to calculate the proportions, the total sum of the target value counts is calculated as well. The same procedure is repeated for count_examples, where the cumulative sum and the total sum are also calculated.
After this calculation, the data looks like this:
┌──────────────┬─────────────┬─────────────┬─────────────┬─────────────┬─────────────┬─────────────┐
│ cum_sum_clas ┆ cum_sum_cla ┆ sum_class_0 ┆ sum_class_1 ┆ cum_sum_cou ┆ sum_count_e ┆ feature_val │
│ s_0_count ┆ ss_1_count ┆ _count ┆ _count ┆ nt_examples ┆ xamples ┆ ue │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ u32 ┆ u32 ┆ u32 ┆ u32 ┆ u32 ┆ i8 │
╞══════════════╪═════════════╪═════════════╪═════════════╪═════════════╪═════════════╪═════════════╡
│ 3 ┆ 0 ┆ 27717 ┆ 26847 ┆ 3 ┆ 54564 ┆ 29 │
│ 4 ┆ 0 ┆ 27717 ┆ 26847 ┆ 4 ┆ 54564 ┆ 30 │
│ 1097 ┆ 324 ┆ 27717 ┆ 26847 ┆ 1421 ┆ 54564 ┆ 39 │
│ 2090 ┆ 595 ┆ 27717 ┆ 26847 ┆ 2685 ┆ 54564 ┆ 40 │
│ 3155 ┆ 1025 ┆ 27717 ┆ 26847 ┆ 4180 ┆ 54564 ┆ 41 │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ 24302 ┆ 20162 ┆ 27717 ┆ 26847 ┆ 44464 ┆ 54564 ┆ 59 │
│ 25356 ┆ 21581 ┆ 27717 ┆ 26847 ┆ 46937 ┆ 54564 ┆ 60 │
│ 26046 ┆ 23020 ┆ 27717 ┆ 26847 ┆ 49066 ┆ 54564 ┆ 61 │
│ 26615 ┆ 24131 ┆ 27717 ┆ 26847 ┆ 50746 ┆ 54564 ┆ 62 │
│ 27216 ┆ 25652 ┆ 27717 ┆ 26847 ┆ 52868 ┆ 54564 ┆ 63 │
└──────────────┴─────────────┴─────────────┴─────────────┴─────────────┴─────────────┴─────────────┘
In the next step, the proportions are calculated for each feature value.
.select(
    [
        (pl.col(f"cum_sum_class_{target_value}_count") / pl.col("cum_sum_count_examples")).alias(
            f"left_proportion_class_{target_value}"
        )
        for target_value in unique_targets
    ]
    + [
        (
            (pl.col(f"sum_class_{target_value}_count") - pl.col(f"cum_sum_class_{target_value}_count"))
            / (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
        ).alias(f"right_proportion_class_{target_value}")
        for target_value in unique_targets
    ]
    + [
        (pl.col(f"sum_class_{target_value}_count") / pl.col("sum_count_examples")).alias(
            f"parent_proportion_class_{target_value}"
        )
        for target_value in unique_targets
    ]
    + [
        # From previous select
        pl.col("cum_sum_count_examples"),
        pl.col("sum_count_examples"),
        pl.col("feature_value"),
    ]
)
To calculate the proportions, the results from the previous step can be used. For the left proportion, the cumulative sum of each target value is divided by the cumulative sum of the example count. For the right proportion, we need to know how many examples are on the right side for each target value. That is calculated by subtracting the cumulative sum of the target value count from its total sum. The same calculation is used to determine the total count of examples on the right side, by subtracting the cumulative sum of the example count from its total sum. Additionally, the parent proportion is calculated by dividing the total sum of each target value count by the total count of examples.
This is the resulting data after this step:
┌───────────┬───────────┬───────────┬───────────┬───┬───────────┬───────────┬───────────┬──────────┐
│ left_prop ┆ left_prop ┆ right_pro ┆ right_pro ┆ … ┆ parent_pr ┆ cum_sum_c ┆ sum_count ┆ feature_ │
│ ortion_cl ┆ ortion_cl ┆ portion_c ┆ portion_c ┆ ┆ oportion_ ┆ ount_exam ┆ _examples ┆ value │
│ ass_0 ┆ ass_1 ┆ lass_0 ┆ lass_1 ┆ ┆ class_1 ┆ ples ┆ --- ┆ --- │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ u32 ┆ i8 │
│ f64 ┆ f64 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ u32 ┆ ┆ │
╞═══════════╪═══════════╪═══════════╪═══════════╪═══╪═══════════╪═══════════╪═══════════╪══════════╡
│ 1.0 ┆ 0.0 ┆ 0.506259 ┆ 0.493741 ┆ … ┆ 0.493714 ┆ 3 ┆ 54564 ┆ 29 │
│ 1.0 ┆ 0.0 ┆ 0.50625 ┆ 0.49375 ┆ … ┆ 0.493714 ┆ 4 ┆ 54564 ┆ 30 │
│ 0.754902 ┆ 0.245098 ┆ 0.499605 ┆ 0.500395 ┆ … ┆ 0.493714 ┆ 1428 ┆ 54564 ┆ 39 │
│ 0.765596 ┆ 0.234404 ┆ 0.492739 ┆ 0.507261 ┆ … ┆ 0.493714 ┆ 2709 ┆ 54564 ┆ 40 │
│ 0.741679 ┆ 0.258321 ┆ 0.486929 ┆ 0.513071 ┆ … ┆ 0.493714 ┆ 4146 ┆ 54564 ┆ 41 │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ 0.545735 ┆ 0.454265 ┆ 0.333563 ┆ 0.666437 ┆ … ┆ 0.493714 ┆ 44419 ┆ 54564 ┆ 59 │
│ 0.539065 ┆ 0.460935 ┆ 0.305025 ┆ 0.694975 ┆ … ┆ 0.493714 ┆ 46922 ┆ 54564 ┆ 60 │
│ 0.529725 ┆ 0.470275 ┆ 0.297071 ┆ 0.702929 ┆ … ┆ 0.493714 ┆ 49067 ┆ 54564 ┆ 61 │
│ 0.523006 ┆ 0.476994 ┆ 0.282551 ┆ 0.717449 ┆ … ┆ 0.493714 ┆ 50770 ┆ 54564 ┆ 62 │
│ 0.513063 ┆ 0.486937 ┆ 0.296188 ┆ 0.703812 ┆ … ┆ 0.493714 ┆ 52859 ┆ 54564 ┆ 63 │
└───────────┴───────────┴───────────┴───────────┴───┴───────────┴───────────┴───────────┴──────────┘
Now that the proportions are available, the entropy can be calculated.
.select(
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"left_proportion_class_{target_value}")
                    * pl.col(f"left_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("left_entropy"),
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"right_proportion_class_{target_value}")
                    * pl.col(f"right_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("right_entropy"),
    (
        -1
        * pl.sum_horizontal(
            [
                (
                    pl.col(f"parent_proportion_class_{target_value}")
                    * pl.col(f"parent_proportion_class_{target_value}").log(base=2)
                ).fill_nan(0.0)
                for target_value in unique_targets
            ]
        )
    ).alias("parent_entropy"),
    # From previous select
    pl.col("cum_sum_count_examples"),
    pl.col("sum_count_examples"),
    pl.col("feature_value"),
)
For the calculation of the entropy, Equation 2 is used. The left entropy is calculated using the left proportions, and the right entropy uses the right proportions. For the parent entropy, the parent proportions are used. In this implementation, pl.sum_horizontal() is used to calculate the sum over the proportions to make use of possible optimizations from Polars. This could also be replaced with the Python-native sum() built-in.
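As a sketch, the same left-entropy expression built with the built-in sum() instead of pl.sum_horizontal() would look like this:

left_entropy = (
    -1
    * sum(
        (
            pl.col(f"left_proportion_class_{target_value}")
            * pl.col(f"left_proportion_class_{target_value}").log(base=2)
        ).fill_nan(0.0)
        for target_value in unique_targets
    )
).alias("left_entropy")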
The data with the entropy values looks as follows:
┌──────────────┬───────────────┬────────────────┬─────────────────┬────────────────┬───────────────┐
│ left_entropy ┆ right_entropy ┆ parent_entropy ┆ cum_sum_count_e ┆ sum_count_exam ┆ feature_value │
│ --- ┆ --- ┆ --- ┆ xamples ┆ ples ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ --- ┆ --- ┆ i8 │
│ ┆ ┆ ┆ u32 ┆ u32 ┆ │
╞══════════════╪═══════════════╪════════════════╪═════════════════╪════════════════╪═══════════════╡
│ -0.0 ┆ 0.999854 ┆ 0.999853 ┆ 3 ┆ 54564 ┆ 29 │
│ -0.0 ┆ 0.999854 ┆ 0.999853 ┆ 4 ┆ 54564 ┆ 30 │
│ 0.783817 ┆ 1.0 ┆ 0.999853 ┆ 1427 ┆ 54564 ┆ 39 │
│ 0.767101 ┆ 0.999866 ┆ 0.999853 ┆ 2694 ┆ 54564 ┆ 40 │
│ 0.808516 ┆ 0.999503 ┆ 0.999853 ┆ 4177 ┆ 54564 ┆ 41 │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ 0.993752 ┆ 0.918461 ┆ 0.999853 ┆ 44483 ┆ 54564 ┆ 59 │
│ 0.995485 ┆ 0.890397 ┆ 0.999853 ┆ 46944 ┆ 54564 ┆ 60 │
│ 0.997367 ┆ 0.880977 ┆ 0.999853 ┆ 49106 ┆ 54564 ┆ 61 │
│ 0.99837 ┆ 0.859431 ┆ 0.999853 ┆ 50800 ┆ 54564 ┆ 62 │
│ 0.999436 ┆ 0.872346 ┆ 0.999853 ┆ 52877 ┆ 54564 ┆ 63 │
└──────────────┴───────────────┴────────────────┴─────────────────┴────────────────┴───────────────┘
Almost there! The final step is missing, which is calculating the child entropy and using it to get the information gain.
.select(
    (
        pl.col("cum_sum_count_examples") / pl.col("sum_count_examples") * pl.col("left_entropy")
        + (pl.col("sum_count_examples") - pl.col("cum_sum_count_examples"))
        / pl.col("sum_count_examples")
        * pl.col("right_entropy")
    ).alias("child_entropy"),
    # From previous select
    pl.col("parent_entropy"),
    pl.col("feature_value"),
)
.select(
    (pl.col("parent_entropy") - pl.col("child_entropy")).alias("information_gain"),
    # From previous select
    pl.col("parent_entropy"),
    pl.col("feature_value"),
)
.filter(pl.col("information_gain").is_not_nan())
.sort("information_gain", descending=True)
.head(1)
.with_columns(feature=pl.lit(feature_name))
)

information_gain_dfs.append(information_gain_df)
For the child entropy, the left and right entropy are weighted by the count of examples on each side of the split. The sum of both weighted entropy values is used as the child entropy. To calculate the information gain, we simply subtract the child entropy from the parent entropy, as can be seen in Equation 1. The best feature value is determined by sorting the data by information gain and selecting the first row. It is appended to a list that gathers the best feature values from all features.
Before applying .head(1), the data looks as follows:
┌──────────────────┬────────────────┬───────────────┐
│ information_gain ┆ parent_entropy ┆ feature_value │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ i8 │
╞══════════════════╪════════════════╪═══════════════╡
│ 0.028388 ┆ 0.999928 ┆ 54 │
│ 0.027719 ┆ 0.999928 ┆ 52 │
│ 0.027283 ┆ 0.999928 ┆ 53 │
│ 0.026826 ┆ 0.999928 ┆ 50 │
│ 0.026812 ┆ 0.999928 ┆ 51 │
│ … ┆ … ┆ … │
│ 0.010928 ┆ 0.999928 ┆ 62 │
│ 0.005872 ┆ 0.999928 ┆ 39 │
│ 0.004155 ┆ 0.999928 ┆ 63 │
│ 0.000072 ┆ 0.999928 ┆ 30 │
│ 0.000054 ┆ 0.999928 ┆ 29 │
└──────────────────┴────────────────┴───────────────┘
Here, it can be seen that the age feature value of 54 has the highest information gain. This feature value is collected for the age feature and has to compete against the other features.
Selecting the Best Split and Defining Sub-Trees
To select the best split, the highest information gain across all features needs to be found.
if isinstance(information_gain_dfs[0], pl.LazyFrame):
    information_gain_dfs = pl.collect_all(information_gain_dfs, streaming=self.streaming)

information_gain_dfs = pl.concat(information_gain_dfs, how="vertical_relaxed").sort(
    "information_gain", descending=True
)
For that, the pl.collect_all() method is used on information_gain_dfs. It evaluates all LazyFrames in parallel, which makes the processing very efficient. The result is a list of Polars DataFrames, which are concatenated and sorted by information gain.
For the heart disease example, the data looks like this:
┌──────────────────┬────────────────┬───────────────┬─────────────┐
│ information_gain ┆ parent_entropy ┆ feature_value ┆ feature     │
│ ---              ┆ ---            ┆ ---           ┆ ---         │
│ f64              ┆ f64            ┆ f64           ┆ str         │
╞══════════════════╪════════════════╪═══════════════╪═════════════╡
│ 0.138032         ┆ 0.999909       ┆ 129.0         ┆ ap_hi       │
│ 0.09087          ┆ 0.999909       ┆ 85.0          ┆ ap_lo       │
│ 0.029966         ┆ 0.999909       ┆ 0.0           ┆ cholesterol │
│ 0.028388         ┆ 0.999909       ┆ 54.0          ┆ age_years   │
│ 0.01968          ┆ 0.999909       ┆ 27.435041     ┆ bmi         │
│ …                ┆ …              ┆ …             ┆ …           │
│ 0.000851         ┆ 0.999909       ┆ 0.0           ┆ active      │
│ 0.000351         ┆ 0.999909       ┆ 156.0         ┆ height      │
│ 0.000223         ┆ 0.999909       ┆ 0.0           ┆ smoke       │
│ 0.000098         ┆ 0.999909       ┆ 0.0           ┆ alco        │
│ 0.000031         ┆ 0.999909       ┆ 0.0           ┆ gender      │
└──────────────────┴────────────────┴───────────────┴─────────────┘
Out of all features, the ap_hi (systolic blood pressure) feature value of 129 results in the highest information gain and is thus selected for the first split.
information_gain = 0
if len(information_gain_dfs) > 0:
    best_params = information_gain_dfs.row(0, named=True)
    information_gain = best_params["information_gain"]
In some cases, information_gain_dfs might be empty, for example, when all splits result in having only examples on the left or right side. In that case, the information gain is zero. Otherwise, we get the feature value with the highest information gain.
if information_gain > 0:
    left_mask = data.select(filter=pl.col(best_params["feature"]) <= best_params["feature_value"])
    if isinstance(left_mask, pl.LazyFrame):
        left_mask = left_mask.collect(streaming=self.streaming)
    left_mask = left_mask["filter"]

    # Split data
    left_df = data.filter(left_mask)
    right_df = data.filter(~left_mask)

    left_subtree = self._build_tree(left_df, feature_names, target_name, unique_targets, depth + 1)
    right_subtree = self._build_tree(right_df, feature_names, target_name, unique_targets, depth + 1)

    if isinstance(data, pl.LazyFrame):
        target_distribution = (
            data.select(target_name)
            .collect(streaming=self.streaming)[target_name]
            .value_counts()
            .sort(target_name)["count"]
            .to_list()
        )
    else:
        target_distribution = data[target_name].value_counts().sort(target_name)["count"].to_list()

    return {
        "type": "node",
        "feature": best_params["feature"],
        "threshold": best_params["feature_value"],
        "information_gain": best_params["information_gain"],
        "entropy": best_params["parent_entropy"],
        "target_distribution": target_distribution,
        "left": left_subtree,
        "right": right_subtree,
    }
else:
    return {"type": "leaf", "value": self.get_majority_class(data, target_name)}
When the information gain is greater than zero, the sub-trees are defined. For that, the left mask is created using the feature value that resulted in the highest information gain. The mask is applied to the parent data to get the left data frame, and the negation of the left mask is used to define the right data frame. Both left and right data frames are used to call the _build_tree() method again with depth + 1. As the last step, the target distribution is calculated. It is stored as additional information on the node and is shown, together with the other information, when plotting the tree.
When the information gain is zero, a leaf node is returned. It contains the majority class of the given data.
Make predictions
It is possible to make predictions in two different ways. If the input data is small, the predict() method can be used.
def predict(self, data: Iterable[dict]):
    def _predict_sample(node, sample):
        if node["type"] == "leaf":
            return node["value"]
        if sample[node["feature"]] <= node["threshold"]:
            return _predict_sample(node["left"], sample)
        else:
            return _predict_sample(node["right"], sample)

    predictions = [_predict_sample(self.tree, sample) for sample in data]
    return predictions
Here, the data can be provided as an iterable of dicts. Each dict contains the feature names as keys and the feature values as values. The _predict_sample() method follows the path in the tree until a leaf node is reached, which contains the class that is assigned to the respective example.
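For illustration, a call could look like this; the feature names come from the heart disease dataset, the values are made up, and it assumes the trained tree only splits on features present in the dicts:

samples = [
    {"ap_hi": 140, "ap_lo": 90, "age_years": 58, "cholesterol": 2, "gluc": 1},
    {"ap_hi": 110, "ap_lo": 70, "age_years": 35, "cholesterol": 1, "gluc": 1},
]
model.predict(samples)  # e.g. [1, 0]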
def predict_many(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> List[Union[int, float]]:
    """
    Predict method.

    :param data: Polars DataFrame or LazyFrame.
    :return: List of predicted target values.
    """
    if self.categorical_mappings:
        data = self.apply_categorical_mappings(data)

    def _predict_many(node, temp_data):
        if node["type"] == "node":
            left = _predict_many(node["left"], temp_data.filter(pl.col(node["feature"]) <= node["threshold"]))
            right = _predict_many(node["right"], temp_data.filter(pl.col(node["feature"]) > node["threshold"]))
            return pl.concat([left, right], how="diagonal_relaxed")
        else:
            return temp_data.select(pl.col("temp_prediction_index"), pl.lit(node["value"]).alias("prediction"))

    data = data.with_row_index("temp_prediction_index")
    predictions = _predict_many(self.tree, data).sort("temp_prediction_index").select(pl.col("prediction"))

    # Convert predictions to a list
    if isinstance(predictions, pl.LazyFrame):
        # Despite the execution plan saying there is no streaming, using streaming here significantly
        # increases the performance and reduces the memory footprint.
        predictions = predictions.collect(streaming=True)
    predictions = predictions["prediction"].to_list()
    return predictions
If a big example set needs to be predicted, it is more efficient to use the predict_many() method. It uses the advantages that Polars provides in terms of parallel processing and memory efficiency.
The data can be provided as a Polars DataFrame or LazyFrame. Similarly to the _build_tree() method in the training process, a _predict_many() method is called recursively. All examples in the data are filtered into sub-trees until a leaf node is reached. Examples that took the same path to a leaf node get the same prediction value assigned. At the end of the process, all sub-frames of examples are concatenated again. Since the order cannot be preserved with that, a temporary prediction index is set at the beginning of the process. When all predictions are done, the original order is restored by sorting by that index.
Using the classifier on a dataset
A usage example for the decision tree classifier can be found here. The decision tree is trained on a heart disease dataset. A train and test set is defined to test the performance of the implementation. After the training, the tree is plotted and saved to a file.
With a max depth of 4, the resulting tree looks as follows:

It achieves a train and test accuracy of 73% on the given data.
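A minimal usage sketch of the described workflow; the module path, file name, split sizes, and the list of categorical columns are assumptions, not taken from the repository:

import polars as pl
from efficient_trees.tree import DecisionTreeClassifier  # assumed module path

data = pl.scan_parquet("heart_disease.parquet")  # assumed file name
train, test = data.head(50_000), data.tail(20_000)

model = DecisionTreeClassifier(streaming=True, max_depth=4, categorical_columns=["gluc", "cholesterol"])
model.fit(train, target_name="cardio")

predictions = model.predict_many(test)
labels = test.select("cardio").collect(streaming=True)["cardio"].to_list()
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(f"Test accuracy: {accuracy:.2f}")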
Runtime comparison
One goal of using Polars as a backend for decision trees is to explore the runtime and memory usage and compare it to other frameworks. For that, I created a memory profiling script that can be found here.
The script compares this implementation, which is called "efficient-trees," against sklearn and LightGBM. For efficient-trees, the lazy streaming variant and the non-lazy in-memory variant are tested.
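The profiling script itself is not reproduced here; as a rough sketch under stated assumptions (module path, file name, and hyperparameters are made up, and memory would be tracked with a separate profiler), a runtime comparison of the fit step could look like this:

import time

import lightgbm as lgb
import polars as pl
from sklearn.tree import DecisionTreeClassifier as SklearnTree
from efficient_trees.tree import DecisionTreeClassifier  # assumed module path

data = pl.read_parquet("heart_disease.parquet")  # assumed file name
X, y = data.drop("cardio"), data["cardio"]

candidates = {
    "efficient-trees": lambda: DecisionTreeClassifier(max_depth=4).fit(data, target_name="cardio"),
    "sklearn": lambda: SklearnTree(max_depth=4).fit(X.to_numpy(), y.to_numpy()),
    "lightgbm": lambda: lgb.train({"objective": "binary", "max_depth": 4}, lgb.Dataset(X.to_pandas(), y.to_pandas())),
}
for name, fit in candidates.items():
    start = time.perf_counter()
    fit()  # train each framework once and report the wall-clock time
    print(f"{name}: {time.perf_counter() - start:.2f}s")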

In the graph, it can be seen that LightGBM is the fastest and most memory-efficient framework. Since it introduced the possibility of using Arrow datasets a while ago, the data can be processed efficiently. However, since the whole dataset still needs to be loaded and cannot be streamed, there are still potential scaling issues.
The next best framework is efficient-trees, without and with streaming. While efficient-trees without streaming has a better runtime, the streaming variant uses less memory.
The sklearn implementation achieves the worst results in terms of memory usage and runtime. Since the data needs to be provided as a numpy array, the memory usage grows a lot. The runtime can be explained by the fact that only one CPU core is used; support for multi-threading or multi-processing does not exist yet.
Deep dive: Streaming in Polars
As can be seen in the comparison of the frameworks, the possibility of streaming the data instead of holding it in memory sets this implementation apart from the other frameworks. However, the streaming engine is still considered an experimental feature, and not all operations are compatible with streaming yet.
To get a better understanding of what happens in the background, a look into the execution plan is useful. Let's jump back into the training process and get the execution plan for the following operation:
def fit(self, data: Union[pl.DataFrame, pl.LazyFrame], target_name: str) -> None:
    """
    Fit method to train the decision tree.

    :param data: Polars DataFrame or LazyFrame containing the training data.
    :param target_name: Name of the target column
    """
    columns = data.collect_schema().names()
    feature_names = [col for col in columns if col != target_name]

    # Shrink dtypes
    data = data.select(pl.all().shrink_dtype()).with_columns(
        pl.col(target_name).cast(pl.UInt64).shrink_dtype().alias(target_name)
    )
The execution plan for data can be created with the following command:
data.explain(streaming=True)
This returns the execution plan for the LazyFrame.
WITH_COLUMNS:
[col("cardio").strict_cast(UInt64).shrink_dtype().alias("cardio")]
SELECT [col("gender").shrink_dtype(), col("height").shrink_dtype(), col("weight").shrink_dtype(), col("ap_hi").shrink_dtype(), col("ap_lo").shrink_dtype(), col("cholesterol").shrink_dtype(), col("gluc").shrink_dtype(), col("smoke").shrink_dtype(), col("alco").shrink_dtype(), col("active").shrink_dtype(), col("cardio").shrink_dtype(), col("age_years").shrink_dtype(), col("bmi").shrink_dtype()] FROM
STREAMING:
DF ["gender", "height", "weight", "ap_hi"]; PROJECT 13/13 COLUMNS; SELECTION: None
The keyword that is important here is STREAMING. It can be seen that the initial dataset loading happens in streaming mode, but when shrinking the dtypes, the whole dataset needs to be loaded into memory. Since the dtype shrinking is not a necessary part, I remove it temporarily to explore up to which operation streaming is supported.
The next problematic operation is assigning the categorical features.
def apply_categorical_mappings(self, data: Union[pl.DataFrame, pl.LazyFrame]) -> Union[pl.DataFrame, pl.LazyFrame]:
    """
    Apply categorical mappings on input frame.

    :param data: Polars DataFrame or LazyFrame with categorical columns.
    :return: Polars DataFrame or LazyFrame with mapped categorical columns
    """
    return data.with_columns(
        [pl.col(col).replace(self.categorical_mappings[col]).cast(pl.UInt32) for col in self.categorical_columns]
    )
The replace expression does not support the streaming mode. Even after removing the cast, streaming is not used, which can be seen in the execution plan.
WITH_COLUMNS:
[col("gender").replace([Series, Series]), col("cholesterol").replace([Series, Series]), col("gluc").replace([Series, Series]), col("smoke").replace([Series, Series]), col("alco").replace([Series, Series]), col("active").replace([Series, Series])]
STREAMING:
DF ["gender", "height", "weight", "ap_hi"]; PROJECT */13 COLUMNS; SELECTION: None
Moving on, I also remove the support for categorical features. What happens next is the calculation of the information gain.
information_gain_df = (
    feature_data.group_by("feature_value")
    .agg(
        [
            pl.col(target_name)
            .filter(pl.col(target_name) == target_value)
            .len()
            .alias(f"class_{target_value}_count")
            for target_value in unique_targets
        ]
        + [pl.col(target_name).len().alias("count_examples")]
    )
    .sort("feature_value")
)
Unfortunately, already in this first part of the calculation, the streaming mode is no longer supported. Here, using pl.col().filter() prevents us from streaming the data.
SORT BY [col("feature_value")]
AGGREGATE
[col("cardio").filter([(col("cardio")) == (1)]).count().alias("class_1_count"), col("cardio").filter([(col("cardio")) == (0)]).count().alias("class_0_count"), col("cardio").count().alias("count_examples")] BY [col("feature_value")] FROM
STREAMING:
RENAME
simple π 2/2 ["gender", "cardio"]
DF ["gender", "height", "weight", "ap_hi"]; PROJECT 2/13 COLUMNS; SELECTION: col("gender").is_not_null()
Since this is not easy to change, I will stop the exploration here. It can be concluded that the decision tree implementation with a Polars backend cannot use the full potential of streaming yet, since important operators are still missing streaming support. As the streaming mode is under active development, it might become possible to run most of the operators, or even the whole decision tree calculation, in streaming mode in the future.
Conclusion
In this blog post, I presented my custom implementation of a decision tree using Polars as a backend. I showed implementation details and compared it to other decision tree frameworks. The comparison shows that this implementation can outperform sklearn in terms of runtime and memory usage, but there are still frameworks like LightGBM that provide a better runtime and more efficient processing. There is a lot of potential in the streaming mode of the Polars backend. Currently, some operators prevent an end-to-end streaming approach due to a lack of streaming support, but this is under active development. When Polars makes progress with that, it is worth revisiting this implementation and comparing it to other frameworks again.