Steps

A SciKit-Learn style feature selector using best subsets and stepwise regression.

Install / Use

/learn @chris-santiago/Steps

README

step-select


A SciKit-Learn style feature selector using best subsets and stepwise regression.

Install

Create a virtual environment with Python 3.8 and install from PyPI:

pip install step-select

Use

Preliminaries

Note: this example requires two additional packages: pandas and statsmodels.

In this example we'll show how the ForwardSelector and SubsetSelector classes can be used on their own or in conjunction with a Scikit-Learn Pipeline object.

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
import statsmodels.datasets
from statsmodels.api import OLS
from statsmodels.tools import add_constant

from steps.forward import ForwardSelector
from steps.subset import SubsetSelector

We'll download the auto dataset via Statsmodels; we'll use mpg as the endogenous variable and the remaining variables as exogenous. We won't use make, as that would create several dummy variables and push the number of parameters to 12+, which is too many for the SubsetSelector class; we'll also drop price.

data = statsmodels.datasets.webuse('auto')
data['foreign'] = pd.Series([x == 'Foreign' for x in data['foreign']]).astype(int)
data.fillna(0, inplace=True)
data.head()
| | make | price | mpg | rep78 | headroom | trunk | weight | length | turn | displacement | gear_ratio | foreign |
|---|------|-------|-----|-------|----------|-------|--------|--------|------|--------------|------------|---------|
| 0 | AMC Concord | 4099 | 22 | 3.0 | 2.5 | 11 | 2930 | 186 | 40 | 121 | 3.58 | 0 |
| 1 | AMC Pacer | 4749 | 17 | 3.0 | 3.0 | 11 | 3350 | 173 | 40 | 258 | 2.53 | 0 |
| 2 | AMC Spirit | 3799 | 22 | 0.0 | 3.0 | 12 | 2640 | 168 | 35 | 121 | 3.08 | 0 |
| 3 | Buick Century | 4816 | 20 | 3.0 | 4.5 | 16 | 3250 | 196 | 40 | 196 | 2.93 | 0 |
| 4 | Buick Electra | 7827 | 15 | 4.0 | 4.0 | 20 | 4080 | 222 | 43 | 350 | 2.41 | 0 |
X = data.iloc[:, 3:]
y = data['mpg']

Forward Stepwise Selection

The ForwardSelector follows the standard stepwise regression algorithm: begin with a null model, iteratively test each variable and select the one that gives the most statistically significant improvement of the fit, and repeat. This greedy algorithm continues until the fit no longer improves.
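The greedy loop described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the package's actual implementation; the `forward_select` helper and its RSS-based AIC formula are assumptions made for this example.

```python
import numpy as np

def forward_select(X, y):
    """Greedy forward selection minimizing AIC (illustrative sketch only,
    not the implementation used by step-select)."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    best_aic = np.inf
    while remaining:
        # Score every candidate model that adds one more feature
        scores = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])  # intercept + features
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            # Gaussian AIC up to a constant: n*log(RSS/n) + 2 * n_params
            scores.append((n * np.log(rss / n) + 2 * (len(cols) + 1), j))
        aic, j = min(scores)
        if aic >= best_aic:  # stop once adding a feature no longer improves the fit
            break
        best_aic = aic
        remaining.remove(j)
        selected.append(j)
    return selected
```

On synthetic data where only a couple of columns drive the response, a sketch like this should pick those columns up first and then stop.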

The ForwardSelector is instantiated with two parameters: normalize and metric. Normalize defaults to False, assuming that this class is part of a larger pipeline; metric defaults to AIC.

| Parameter | Type | Description |
|-----------|------|-------------|
| normalize | bool | Whether to normalize features; default False |
| metric | str | Optimization metric to use; must be one of aic or bic; default aic |

The ForwardSelector class follows the Scikit-Learn API. After fitting the selector using the .fit() method, the selected features can be accessed using the boolean mask under the .best_support_ attribute.

selector = ForwardSelector(normalize=True, metric='aic')
selector.fit(X, y)
ForwardSelector(normalize=True)
X.loc[:, selector.best_support_]
| | rep78 | weight | length | gear_ratio | foreign |
|---|-------|--------|--------|------------|---------|
| 0 | 3.0 | 2930 | 186 | 3.58 | 0 |
| 1 | 3.0 | 3350 | 173 | 2.53 | 0 |
| 2 | 0.0 | 2640 | 168 | 3.08 | 0 |
| 3 | 3.0 | 3250 | 196 | 2.93 | 0 |
| 4 | 4.0 | 4080 | 222 | 2.41 | 0 |
| … | … | … | … | … | … |
| 69 | 4.0 | 2160 | 172 | 3.74 | 1 |
| 70 | 5.0 | 2040 | 155 | 3.78 | 1 |
| 71 | 4.0 | 1930 | 155 | 3.78 | 1 |
| 72 | 4.0 | 1990 | 156 | 3.78 | 1 |
| 73 | 5.0 | 3170 | 193 | 2.98 | 1 |

74 rows × 5 columns

Best Subset Selection

The SubsetSelector follows a very simple algorithm: for each subset size $k$, compare all possible models with $k$ predictors, and select the model that minimizes our selection criterion. This exhaustive search is only practical for roughly $p \le 12$ features, as it quickly becomes computationally expensive: there are $\binom{p}{k} = \frac{p!}{k!\,(p-k)!}$ possible models for each subset size $k$, where $p$ is the total number of candidate features, and $2^p$ possible models overall.
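The standard library makes this combinatorial growth easy to check: with $p = 12$ features there are already 924 candidate models of size 6 alone, and $2^{12} = 4096$ models across all subset sizes.

```python
from math import comb

p = 12  # total candidate features
# Number of candidate models with exactly k features, for k = 0..p
per_size = [comb(p, k) for k in range(p + 1)]
print(per_size[6])    # 924 models of size 6
print(sum(per_size))  # 4096 models in total, i.e. 2**p
```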

The SubsetSelector is instantiated with two parameters: normalize and metric. Normalize defaults to False, assuming that this class is part of a larger pipeline; metric defaults to AIC.

| Parameter | Type | Description |
|-----------|------|-------------|
| normalize | bool | Whether to normalize features; default False |
| metric | str | Optimization metric to use; must be one of aic or bic; default aic |

The SubsetSelector class follows the Scikit-Learn API. After fitting the selector using the .fit() method, the selected features can be accessed using the boolean mask under the .best_support_ attribute.

selector = SubsetSelector(normalize=True, metric='aic')
selector.fit(X, y)
SubsetSelector(normalize=True)
X.loc[:, selector.get_support()]
| | rep78 | weight | length | gear_ratio | foreign |
|---|-------|--------|--------|------------|---------|
| 0 | 3.0 | 2930 | 186 | 3.58 | 0 |
| 1 | 3.0 | 3350 | 173 | 2.53 | 0 |
| 2 | 0.0 | 2640 | 168 | 3.08 | 0 |
| 3 | 3.0 | 3250 | 196 | 2.93 | 0 |
| 4 | 4.0 | 4080 | 222 | 2.41 | 0 |
| … | … | … | … | … | … |
| 69 | 4.0 | 2160 | 172 | 3.74 | 1 |
| 70 | 5.0 | 2040 | 155 | 3.78 | 1 |
| 71 | 4.0 | 1930 | 155 | 3.78 | 1 |
| 72 | 4.0 | 1990 | 156 | 3.78 | 1 |
| 73 | 5.0 | 3170 | 193 | 2.98 | 1 |

74 rows × 5 columns
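The preliminaries import Pipeline, but the examples above use the selectors standalone. A sketch of the pipeline pattern is below; since step-select may not be installed everywhere, it substitutes scikit-learn's built-in SequentialFeatureSelector, which exposes the same fit/transform/get_support transformer interface. With step-select you would put ForwardSelector(normalize=True, metric='aic') in the "select" slot instead; the synthetic dataset and step names here are made up for illustration.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in data: 8 features, only 3 of which are informative
X_demo, y_demo = make_regression(n_samples=100, n_features=8,
                                 n_informative=3, random_state=0)

pipe = Pipeline([
    # Drop-in slot for a selector; ForwardSelector/SubsetSelector fit here too
    ("select", SequentialFeatureSelector(LinearRegression(),
                                         n_features_to_select=3)),
    ("model", LinearRegression()),
])
pipe.fit(X_demo, y_demo)
print(pipe.named_steps["select"].get_support())  # boolean mask of kept features
```

Because selection happens inside the pipeline, it is refit on each cross-validation fold rather than leaking information from the full dataset.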

Comparing with the full model

Using the SubsetSelector selected features yields a model with 4 fewer parameters and slightly improved AIC and BIC metrics. The summaries indicate possible multicollinearity in both models, likely caused by weight, length, displacement and other features that are all related to the weight of a vehicle.

Note: Selection using BIC as the optimization metric yields a model where weight is the only selected feature. The Bayesian information criterion penalizes additional parameters more heavily than AIC.
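The gap comes straight from the two formulas, $\mathrm{AIC} = 2k - 2\ln L$ and $\mathrm{BIC} = k \ln n - 2\ln L$: each extra parameter costs 2 under AIC but $\ln n$ under BIC, so BIC is the stricter criterion whenever $n > e^2 \approx 7.4$. For the 74-row auto dataset:

```python
from math import log

n = 74  # observations in the auto dataset
aic_penalty = 2.0     # cost of one extra parameter under AIC
bic_penalty = log(n)  # cost of one extra parameter under BIC, ln(74) ≈ 4.30
print(round(bic_penalty, 2))
```

With more than double the per-parameter cost, BIC prunes down to the single strongest predictor.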

mod = OLS(endog=y, exog=add_constant(X)).fit()
mod