# PySR

High-Performance Symbolic Regression in Python and Julia
PySR searches for symbolic expressions which optimize a particular objective.
https://github.com/MilesCranmer/PySR/assets/7593028/c8511a49-b408-488f-8f18-b1749078268f
If you find PySR useful, please cite the paper arXiv:2305.01582. If you've finished a project with PySR, please submit a PR to showcase your work on the research showcase page!
## Why PySR?
PySR is an open-source tool for Symbolic Regression: a machine learning task where the goal is to find an interpretable symbolic expression that optimizes some objective.
Over a period of several years, PySR has been engineered from the ground up to be (1) as high-performance as possible, (2) as configurable as possible, and (3) easy to use. PySR is developed alongside the Julia library SymbolicRegression.jl, which forms the powerful search engine of PySR. The details of these algorithms are described in the PySR paper.
Symbolic regression works best on low-dimensional datasets, but one can also extend these approaches to higher-dimensional spaces by using "Symbolic Distillation" of Neural Networks, as explained in 2006.11287, where we apply it to N-body problems. Here, one essentially uses symbolic regression to convert a neural net to an analytic equation. Thus, these tools simultaneously present an explicit and powerful way to interpret deep neural networks.
## Installation

### Pip
You can install PySR with pip:

```bash
pip install pysr
```

Julia dependencies will be installed at first import.
### Conda

Similarly, with conda:

```bash
conda install -c conda-forge pysr
```
<details>
<summary>Docker</summary>

You can also use the Dockerfile to install PySR in a Docker container:

- Clone this repo.
- Within the repo's directory, build the Docker container:

```bash
docker build -t pysr .
```

- You can then start the container with an IPython execution with:

```bash
docker run -it --rm pysr ipython
```

For more details, see the docker section.
</details>
<details>
<summary>Apptainer</summary>

If you are using PySR on a cluster where you do not have root access, you can use Apptainer to build a container instead of Docker. The Apptainer.def file is analogous to the Dockerfile, and can be built with:

```bash
apptainer build --notest pysr.sif Apptainer.def
```

and launched with:

```bash
apptainer run pysr.sif
```
</details>
<details>
<summary>Troubleshooting</summary>

One issue you might run into can result in a hard crash at import with a message like "GLIBCXX_... not found". This is due to another of the Python dependencies loading an incorrect libstdc++ library. To fix this, modify your LD_LIBRARY_PATH variable to reference the Julia libraries. For example, if the Julia version of libstdc++.so is located in $HOME/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/lib/julia/ (which likely differs on your system!), you could add:

```bash
export LD_LIBRARY_PATH=$HOME/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/lib/julia/:$LD_LIBRARY_PATH
```

to your .bashrc or .zshrc file.
## Quickstart
You might wish to try the interactive tutorial, which uses the notebook in examples/pysr_demo.ipynb. In practice, I highly recommend using IPython rather than Jupyter, as the printing is much nicer. Below is a quick demo which you can paste into a Python runtime. First, let's import numpy to generate some test data:
```python
import numpy as np

X = 2 * np.random.randn(100, 5)
y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5
```
We have created a dataset with 100 datapoints, with 5 features each. The relation we wish to model is $2.5382 \cos(x_3) + x_0^2 - 0.5$.
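As a quick NumPy-only sanity check (a sketch, not part of the PySR API): the target depends only on features 0 and 3, so the remaining three columns are pure distractors for the search:

```python
import numpy as np

rng = np.random.default_rng(0)
X = 2 * rng.standard_normal((100, 5))
y = 2.5382 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 0.5

# The target uses only features 0 and 3; resampling the other
# three columns leaves y unchanged:
X_alt = X.copy()
X_alt[:, [1, 2, 4]] = rng.standard_normal((100, 3))
y_alt = 2.5382 * np.cos(X_alt[:, 3]) + X_alt[:, 0] ** 2 - 0.5
assert np.allclose(y, y_alt)
```

Datasets like this, with irrelevant features, are where the select_k_features option (described below) can help.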
Now, let's create a PySR model and train it. PySR's main interface is in the style of scikit-learn:
```python
from pysr import PySRRegressor

model = PySRRegressor(
    maxsize=20,
    niterations=40,  # < Increase me for better results
    binary_operators=["+", "*"],
    unary_operators=[
        "cos",
        "exp",
        "sin",
        "inv(x) = 1/x",
        # ^ Custom operator (julia syntax)
    ],
    extra_sympy_mappings={"inv": lambda x: 1 / x},
    # ^ Define operator for SymPy as well
    elementwise_loss="loss(prediction, target) = (prediction - target)^2",
    # ^ Custom loss function (julia syntax)
)
```
This will set up the model for 40 iterations of the search code, which contains hundreds of thousands of mutations and equation evaluations.
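For reference, the Julia-syntax strings passed above are one-line function definitions. Their plain-Python equivalents (the inv mapping is the same one supplied via extra_sympy_mappings) look like this:

```python
def inv(x):
    # Same mapping as the Julia operator string "inv(x) = 1/x"
    return 1 / x

def elementwise_loss(prediction, target):
    # Same mapping as "loss(prediction, target) = (prediction - target)^2"
    return (prediction - target) ** 2

print(inv(4.0))                    # 0.25
print(elementwise_loss(3.0, 1.0))  # 4.0
```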
Let's train this model on our dataset:
```python
model.fit(X, y)
```
Internally, this launches a Julia process which will do a multithreaded search for equations to fit the dataset.
Equations will be printed during training, and once you are satisfied, you may quit early by hitting 'q' and then <enter>.
After the model has been fit, you can run model.predict(X)
to see the predictions on a given dataset using the automatically-selected expression,
or, for example, model.predict(X, 3) to see the predictions of the 3rd equation.
You may run:
```python
print(model)
```
to print the learned equations:
```
PySRRegressor.equations_ = [
       pick     score                                           equation       loss  complexity
    0        0.000000                                          4.4324794  42.354317           1
    1        1.255691                                          (x0 * x0)   3.437307           3
    2        0.011629                          ((x0 * x0) + -0.28087974)   3.358285           5
    3        0.897855                              ((x0 * x0) + cos(x3))   1.368308           6
    4        0.857018                ((x0 * x0) + (cos(x3) * 2.4566472))   0.246483           8
    5  >>>>       inf  (((cos(x3) + -0.19699033) * 2.5382123) + (x0 *...   0.000000          10
]
```
The arrow in the pick column indicates which equation is currently selected by your model_selection strategy for prediction. (You may change model_selection after .fit(X, y) as well.)
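The score column itself can be reproduced from the loss and complexity columns: between consecutive rows, it is the drop in log-loss per unit of added complexity (larger means a better accuracy/complexity trade-off). A small pandas sketch using the values from the table above:

```python
import numpy as np
import pandas as pd

# loss/complexity values copied from the printed table above
df = pd.DataFrame({
    "complexity": [1, 3, 5, 6, 8, 10],
    "loss": [42.354317, 3.437307, 3.358285, 1.368308, 0.246483, 0.0],
})

# Score between consecutive rows: -d(log loss) / d(complexity).
# (The first row has no predecessor; the zero-loss last row gives inf.)
with np.errstate(divide="ignore"):
    log_loss = np.log(df["loss"])
score = -log_loss.diff() / df["complexity"].diff()
print(score.round(6).tolist())
```

Running this reproduces the printed score column, including the inf for the final zero-loss row.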
model.equations_ is a pandas DataFrame containing all equations, including a callable format (lambda_format), SymPy format (sympy_format, which you can also get with model.sympy()), and even JAX and PyTorch formats (both differentiable, available via model.jax() and model.pytorch()).
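To illustrate what the SymPy export is useful for, here is a sketch using SymPy directly; the expression below is a hypothetical stand-in for what model.sympy() might return on the demo problem (the exact result depends on your search run):

```python
import numpy as np
import sympy

x0, x3 = sympy.symbols("x0 x3")
# Stand-in for an expression returned by model.sympy():
expr = 2.5382 * sympy.cos(x3) + x0**2 - 0.5

print(sympy.latex(expr))  # render it for a paper or notebook

# lambdify turns the expression back into a fast NumPy function:
f = sympy.lambdify((x0, x3), expr, modules="numpy")
X = 2 * np.random.randn(100, 5)
y_pred = f(X[:, 0], X[:, 3])
print(y_pred.shape)  # (100,)
```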
Note that PySRRegressor stores the state of the last search, and will restart from where you left off the next time you call .fit(), assuming you have set warm_start=True.
This will cause problems if significant changes are made to the search parameters (like changing the operators). You can run model.reset() to reset the state.
You will notice that PySR will save two files:
hall_of_fame...csv and hall_of_fame...pkl.
The csv file is a list of equations and their losses, and the pkl file is a saved state of the model.
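You can also work with the csv file directly, e.g. with pandas. The following is a sketch: the column names (Complexity, Loss, Equation) and the inline rows are illustrative assumptions, so check your file's header and pass your real filename to pd.read_csv instead:

```python
import io
import pandas as pd

# Illustrative stand-in for a real hall_of_fame...csv file:
csv_text = """Complexity,Loss,Equation
1,42.354317,4.4324794
3,3.437307,(x0 * x0)
6,1.368308,((x0 * x0) + cos(x3))
"""
hof = pd.read_csv(io.StringIO(csv_text))

# e.g. pick the most accurate (lowest-loss) equation:
best = hof.loc[hof["Loss"].idxmin(), "Equation"]
print(best)  # ((x0 * x0) + cos(x3))
```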
You may load the model from the pkl file with:

```python
model = PySRRegressor.from_file("hall_of_fame.2022-08-10_100832.281.pkl")
```
There are several other useful features, such as denoising (e.g., denoise=True) and feature selection (e.g., select_k_features=3). For examples of these and other features, see the [examples page](https://ai.damtp.cam.ac