
GlobalSearchRegression.jl

Julia's HPC command for automatic feature/model selection using all-subset-regression approaches


Abstract

GlobalSearchRegression is both the world's fastest all-subset-regression command (a widespread tool for automatic model/feature selection) and a first step toward a coherent framework for merging Machine Learning and Econometric algorithms.

Written in Julia, it is a High Performance Computing version of the Stata gsreg command (get the original code here). On a multicore personal computer (we use a Threadripper 1950x build for benchmarks), it runs up to 3165 times faster than the original Stata code and up to 197 times faster than well-known R alternatives (pdredge).

Notwithstanding, GlobalSearchRegression's main focus is not only execution times but also progressively combining Machine Learning algorithms and Econometric diagnosis tools into a friendly Graphical User Interface (GUI) that simplifies embarrassingly parallel quantitative research.

In a Machine Learning environment (e.g. problems focusing on predictive analysis / forecasting accuracy) there is a growing universe of “training/test” algorithms (many of them showing very interesting performance in Julia) for comparing alternative results and finding a suitable model.

However, problems focusing on causal inference require five important econometric features: 1) Parsimony (to avoid very large atheoretical models); 2) Interpretability (for causal inference, rejecting “intuition-loss” transformations and/or complex combinations); 3) Across-models sensitivity analysis (uncertainty is the only certainty; parameter distributions are preferred over “best-model” unique results); 4) Robustness to time-series and panel-data information (preventing the use of raw bootstrapping or random subsample selection for training and test sets); and 5) Advanced residual properties (e.g. going beyond the i.i.d. assumption and looking for additional panel structure properties -for each model being evaluated-, which force a departure from many traditional machine learning algorithms).

For all these reasons, researchers increasingly prefer advanced all-subset-regression approaches, choosing among alternative models by means of in-sample and/or out-of-sample criteria, model-averaging results, Bayesian priors for theoretical bounds on covariate coefficients, and different residual constraints. While still infeasible for large problems (choosing among hundreds of covariates), hardware and software innovations allow researchers to implement this approach in many scientific projects, evaluating up to one billion models in a few hours on standard personal computers.
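The brute-force idea behind all-subset regression can be sketched in a few lines of plain Julia (a toy illustration, not GlobalSearchRegression's actual implementation): enumerate every non-empty covariate subset with a bitmask, fit OLS on each, and keep the subset with the highest adjusted R². Data, sizes, and function names below are illustrative.

```julia
# Toy all-subset regression: brute-force OLS over every covariate subset,
# ranked by adjusted R-squared.
using LinearAlgebra, Random

function adjr2(y::Vector{Float64}, Xsub::Matrix{Float64})
    A = hcat(ones(length(y)), Xsub)          # intercept + chosen covariates
    beta = A \ y                             # OLS via QR factorization
    sse = sum(abs2, y - A * beta)            # residual sum of squares
    sst = sum(abs2, y .- sum(y) / length(y)) # total sum of squares
    n, k = size(A)
    1 - (sse / (n - k)) / (sst / (n - 1))
end

function best_subset(y, X)
    p = size(X, 2)
    best_score, best_cols = -Inf, Int[]
    for mask in 1:(2^p - 1)                  # 2^p - 1 non-empty subsets
        cols = [j for j in 1:p if (mask >> (j - 1)) & 1 == 1]
        s = adjr2(y, X[:, cols])
        if s > best_score
            best_score, best_cols = s, cols
        end
    end
    best_cols, best_score
end

Random.seed!(1)
X = randn(500, 3)
y = 1.0 .+ 2.0 .* X[:, 1] .- 1.5 .* X[:, 3] .+ 0.1 .* randn(500)
cols, score = best_subset(y, X)   # x1 and x3 should be among the selected columns
```

With p covariates this loop visits 2<sup>p</sup>-1 models, which is why execution speed and parallelism are the binding constraints for large problems.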

Installation

GlobalSearchRegression requires Julia 1.6.7 (or a newer release) to be installed on your computer. Then start Julia and type "]" (without the double quotes) to open the package manager.

julia> ]
pkg>

After that, just install GlobalSearchRegression by typing "add GlobalSearchRegression"

pkg> add GlobalSearchRegression

Optionally, some users may also find it useful to install the CSV and DataFrames packages for additional I/O functionality.

pkg> add CSV DataFrames

Basic Usage

To run the simplest analysis just type:

julia> using GlobalSearchRegression, DelimitedFiles
julia> dataname = readdlm("path_to_your_data/your_data.csv", ',', header=true)

and

julia> gsreg("your_dependent_variable your_explanatory_variable_1 your_explanatory_variable_2 your_explanatory_variable_3 your_explanatory_variable_4", dataname)

or

julia> gsreg("your_dependent_variable *", dataname)

It performs an Ordinary Least Squares - all subset regression (OLS-ASR) approach to choose the best model among 2<sup>n</sup>-1 alternatives (in terms of in-sample accuracy, using the adjusted R<sup>2</sup>), where:

  • DelimitedFiles is the built-in Julia package we use to read data from CSV files (through its readdlm function);
  • "path_to_your_data/your_data.csv" is a string that identifies your comma-separated database, allowing for missing observations. It is assumed that the first row of your database contains the variable names;
  • gsreg is the GlobalSearchRegression function that estimates all subset regressions (i.e. all possible covariate combinations). In its simplest form, it has two arguments separated by a comma;
  • The first gsreg argument is the general unrestricted model (GUM). It must be typed between double quotes. Its first string is the dependent variable name (csv-file names must be respected; remember that Julia is case sensitive). After that, you can include as many explanatory variables as you want. Alternatively, you can replace covariates with wildcards as in the example above (e.g. * for all other variables in the csv file, or qwert* for all other variables in the csv file with names starting with "qwert"); and
  • The second gsreg argument is the name of the object containing your database. Following the example above, it must match the name you used in dataname = readdlm("path_to_your_data/your_data.csv", ',', header=true).
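As a concrete illustration of the second argument's format (the file name and values below are made up), readdlm with header=true returns a (data, header) tuple, and that tuple is what gsreg consumes:

```julia
# Write a tiny illustrative CSV, then read it the way gsreg expects.
using DelimitedFiles

open("toy.csv", "w") do io
    write(io, "y,x1,x2\n1.0,0.5,2.0\n2.0,1.5,3.0\n")
end

dataname = readdlm("toy.csv", ',', header=true)
data, header = dataname        # numeric matrix + 1-row matrix of names
println(vec(header))           # column names: y, x1, x2
println(size(data))            # 2 observations, 3 variables
```

With this tuple in place, gsreg("y *", dataname) would regress y on every combination of x1 and x2.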

Advanced usage

Alternative data input

Databases can also be handled with CSV/DataFrames packages. To do so, remember to install them by using the add command in the Julia's package manager. Once it is done, just type:

]
pkg> add CSV, DataFrames

then press backspace to return to the main REPL, and type:

julia> using GlobalSearchRegression, CSV, DataFrames
julia> data = CSV.read("path_to_your_data/your_data.csv", DataFrame)
julia> gsreg("y *", data)

Alternative GUM syntax

The general unrestricted model (GUM; the gsreg function first argument) can be written in many different ways, looking for a smooth transition for R and Stata users.

# Stata like
julia> gsreg("y x1 x2 x3", data)

# R like
julia> gsreg("y ~ x1 + x2 + x3", data)
julia> gsreg("y ~ x1 + x2 + x3", data=data)

# Strings separated with comma
julia> gsreg("y,x1,x2,x3", data)

# Array of strings
julia> gsreg(["y", "x1", "x2", "x3"], data)

# Using wildcards
julia> gsreg("y *", data)
julia> gsreg("y x*", data)
julia> gsreg("y x1 z*", data)
julia> gsreg("y ~ x*", data)
julia> gsreg("y ~ .", data)

Additional options

GlobalSearchRegression's advanced properties include almost all Stata gsreg options as well as additional features. Overall, the Julia version has the following options:

  • intercept::Union{Nothing, Bool}: by default the GUM includes an intercept as a fixed covariate (i.e. it is included in every model). Alternatively, users can omit it with the boolean option intercept=false.
  • estimator::Union{Nothing, String}: can be either "ols" or "ols_fe". The latter performs the OLS estimator on the modified panel dataset obtained from applying the "within transformation" to the original data. panel_id and time options must be identified to use estimator="ols_fe".
  • fixedvars::Union{Nothing, Symbol, Vector{Symbol}}: if you have variables of interest that should remain ubiquitous, use this gsreg option to identify variables that will be included in every regression (e.g. fixedvars = [:x1, :x2]). Variables in fixedvars must not also be included in the equation.
  • outsample::Union{Nothing, Int}: it identifies how many observations will be reserved for forecasting purposes (e.g. outsample = 10 indicates that the last 10 observations will not be used in the OLS estimation, remaining available for out-of-sample accuracy calculations). In a panel data context, outsample observations are identified on a panel_id basis (i.e. the last 10 observations of each panel group).
  • criteria::Union{Nothing, Symbol, Vector{Symbol}}: there are 7 different criteria (which must be included as symbols) to evaluate alternative models. For in-sample adjustment, users can choose one or many among the following: Adjusted R<sup>2</sup> (:r2adj, the default), Bayesian information criterion (:bic), Akaike and Corrected Akaike information criteria (:aic and :aicc), Mallows's Cp statistic (:cp), Sum of squared errors (also known as Residual sum of squares, :sse) and the Root mean square error (:rmse). For out-of-sample accuracy, the out-of-sample root mean square error (:rmsout) is available. Users are free to combine in-sample and out-of-sample information criteria, as well as several different in-sample criteria. For each alternative model, GlobalSearchRegression will calculate a composite ordering variable defined as the equally-weighted average of normalized (to guarantee equal weights) and harmonized (to ensure that higher values always identify better models) user-specified criteria.
  • ttest::Union{Nothing, Bool}: by default there is no t-test (to resemble similar R packages), but users can activate it with the boolean option ttest=true.
  • vc
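The composite ordering variable described under criteria can be sketched as follows (a hypothetical re-implementation of the idea, not the package's actual code): min-max normalize each criterion column, flip the "lower is better" ones so that higher always means better, then take the equally-weighted average.

```julia
# Sketch of a composite ordering score across several model-selection criteria.
function composite(scores::Matrix{Float64}, lower_is_better::Vector{Bool})
    n, k = size(scores)
    total = zeros(n)
    for j in 1:k
        col = scores[:, j]
        lo, hi = extrema(col)
        norm = hi == lo ? fill(0.5, n) : (col .- lo) ./ (hi - lo)  # normalize to [0, 1]
        if lower_is_better[j]
            norm = 1 .- norm               # harmonize: higher always means better
        end
        total .+= norm
    end
    total ./ k                             # equally-weighted average
end

# Three candidate models scored on adjusted R-squared (higher is better)
# and AIC (lower is better); the values are made up.
scores = [0.90 100.0;
          0.85  90.0;
          0.70 120.0]
ranking = composite(scores, [false, true])  # model 2 wins on the combined score
```

Normalizing before averaging is what guarantees the "equal weights" property: a criterion measured on a large scale (like SSE) cannot dominate one measured on [0, 1] (like adjusted R²).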
