# BERG
Trained encoding models to generate in silico neural responses for arbitrary stimuli.

The [Brain Encoding Response Generator (BERG)][website] is a resource consisting of multiple pre-trained encoding models of the brain and an accompanying Python package to generate accurate in silico neural responses to arbitrary stimuli with just a few lines of code.
In silico neural responses from encoding models increasingly resemble in vivo responses recorded from real brains, enabling the novel research paradigm of in silico neuroscience. In silico neural responses are quick and cheap to generate, allowing researchers to explore and test scientific hypotheses across vastly larger solution spaces than possible in vivo. Novel findings from large-scale in silico experimentation can then be validated through targeted, small-scale in vivo data collection, optimizing research resources. In silico neuroscience thus scales beyond what is possible with in vivo data, and democratizes research across groups with diverse data collection infrastructure and resources.

To catalyze this emerging research paradigm, BERG provides a growing, well-documented library of encoding models trained on different neural data acquisition modalities, datasets, subjects, stimulation types, and brain areas, offering broad versatility for addressing a wide range of research questions through in silico neuroscience.
<font color='red'><b>Note:</b></font> Beyond BERG's native models, BERG is also integrated with BrainScore, giving you access to hundreds of vision models scored against macaque neural recordings (V1, V2, V4, IT), as well as GPT-family language models scored against human fMRI data.
For additional information on BERG, you can check out our [website][website], [paper][paper], and [documentation][documentation].
## 🤝 Contribute to Expanding BERG
We warmly welcome contributions to improve and expand BERG, including:
- Encoding models with higher prediction accuracies.
- Encoding models for new neural data recording modalities (e.g., MEG, ECoG, animal recordings).
- Encoding models trained on new neural datasets.
- Encoding models of neural responses for new stimulus types (e.g., videos, audio, language, multimodal).
- Suggestions to improve BERG.
For more information on how to contribute, please refer to [our documentation][berg_contribute]. If you have questions or would like to discuss your contribution before submitting, you can contact us at brain.berg.info@gmail.com. All feedback and help are greatly appreciated!
## ⚙️ Installation
To install BERG, run the following command in your terminal:

```bash
pip install -U git+https://github.com/gifale95/BERG.git
```
You will additionally need to install the Python dependencies found in [requirements.txt][requirements].
### BrainScore models (optional)
As noted above, BERG's optional BrainScore integration gives you access to hundreds of additional vision and language models scored against neural data.
BrainScore requires Python 3.11. If you are on a different Python version, the rest of BERG will work normally — only BrainScore models will be unavailable.
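Since only the BrainScore models are version-gated, one way to handle this is to check the interpreter version at runtime before touching them. A minimal sketch; the `BRAINSCORE_SUPPORTED` flag is our own illustrative name, not part of BERG's API:

```python
import sys

# BrainScore models require Python 3.11; the rest of BERG works on other versions.
# Hypothetical guard flag for optional BrainScore code paths.
BRAINSCORE_SUPPORTED = sys.version_info[:2] == (3, 11)

if not BRAINSCORE_SUPPORTED:
    print(f"BrainScore models unavailable on Python "
          f"{sys.version_info.major}.{sys.version_info.minor}; "
          f"core BERG functionality is unaffected.")
```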
To install BERG with BrainScore support:
```bash
pip install -U "berg[brainscore] @ git+https://github.com/gifale95/BERG.git"
```
## 🕹️ How to use

### 🧰 Download the Brain Encoding Response Generator
BERG is hosted as a public AWS S3 bucket via the AWS Open Data Program. You do not need an AWS account to browse or download the data.
<font color='red'><b>IMPORTANT:</b></font> By downloading the data you agree to BERG's Terms and Conditions.
To download the full BERG dataset into a local folder named brain-encoding-response-generator, use the AWS CLI:
```bash
aws s3 sync --no-sign-request s3://brain-encoding-response-generator ./brain-encoding-response-generator
```
You can also download specific subfolders, for example:
```bash
aws s3 sync --no-sign-request s3://brain-encoding-response-generator/encoding_models/modality-fmri ./modality-fmri
```
Or you can download specific files:

```bash
aws s3 cp --no-sign-request s3://brain-encoding-response-generator/encoding_models/../model_weights.npy ./modality-fmri
```
For detailed instructions and folder structure, see the documentation.
### 🧠 Available encoding models
The following table shows BERG's most accurate encoding models for each dataset and modality. For more details on these models, or for the full list of available models, refer to the [documentation][model_cards].
| Model ID | Training dataset | Neural recording modality | Species | Stimuli | Encoding accuracy |
|----------|------------------|---------------------------|---------|---------|-------------------|
| [fmri-nsd_fsaverage-huze][fmri-nsd_fsaverage-huze] | [NSD (surface space)][allen] | fMRI | Human | Images | [Accuracy plots][acc-fmri-nsd_fsaverage-huze] |
| [fmri-nsd-fwrf][fmri-nsd-fwrf] | [NSD (volume space)][allen] | fMRI | Human | Images | [Accuracy plots][acc-fmri-nsd-fwrf] |
| [fmri-mosaic-CNN8_multihead_subAll_verticesVisual][fmri-mosaic-CNN8_multihead_subAll_verticesVisual] | [MOSAIC][MOSAIC] | fMRI | Human | Images | [Accuracy plots][acc-mosaic-CNN8_multihead_subAll_verticesVisual] |
| [fmri-mosaic-CNN8_multihead_subNSD_verticesAll][fmri-mosaic-CNN8_multihead_subNSD_verticesAll] | [MOSAIC][MOSAIC] | fMRI | Human | Images | [Accuracy plots][acc-mosaic-CNN8_multihead_subNSD_verticesAll] |
| [fmri-bmd-s3d][fmri-bmd-s3d] | [BMD][bmd] | fMRI | Human | Videos | [Accuracy plots][acc-fmri-bmd-s3d] |
| [fmri-things_fmri_1-vit_b_32][fmri-things_fmri_1-vit_b_32] | [THINGS fMRI1][things_data] | fMRI | Human | Images | [Accuracy plots][acc-fmri-things_fmri_1-vit_b_32] |
| [eeg-things_eeg_2-vit_b_32][eeg-things_eeg_2-vit_b_32] | [THINGS EEG2][THINGS EEG2] | EEG | Human | Images | [Accuracy plots][acc-eeg-things_eeg_2-vit_b_32] |
| [meg-things_meg_1-vit_b_32][meg-things_meg_1-vit_b_32] | [THINGS MEG1][things_data] | MEG | Human | Images | [Accuracy plots][acc-meg-things_meg_1-vit_b_32] |
| [utah_array-tvsd-vit_b_32][utah_array-tvsd-vit_b_32] | [TVSD][tvsd] | Utah arrays | Macaque | Images | [Accuracy plots][acc-utah_array-tvsd-vit_b_32] |
| [calcium_2p-wang_2025-3DCNN][calcium_2p-wang_2025-3DCNN] | [Wang et al., 2025][wang_2025] | Two-photon calcium imaging | Mouse | Videos | [Accuracy plots][acc-calcium_2p-wang_2025-3DCNN] |
| [fmri-tuckute_2024-GPT2_XL][fmri-tuckute_2024-GPT2_XL] | [Tuckute et al., 2024][tuckute_2024] | fMRI | Human | Text | [Accuracy plots][acc-fmri-tuckute_2024-GPT2_XL] |
| [fmri-cneuromod_algo2025-text2fmri][fmri-cneuromod_algo2025-text2fmri] | [CNeuroMod/Algonauts2025][Algonauts] | fMRI | Human | Text | [HF Collection][acc-fmri-cneuromod_algo2025-text2fmri] |
| [fmri-cneuromod_algo2025-vibe][fmri-cneuromod_algo2025-vibe] | [CNeuroMod/Algonauts2025][Algonauts] | fMRI | Human | Video + Audio + Text | [Accuracy plots][acc-fmri-cneuromod_algo2025-vibe] |
| [brainscore_language][brainscore_language] | [Pereira et al., 2018][pereira_2018] | fMRI | Human | Text | [BrainScore leaderboard (language)][bs_leaderboard_language] |
| [brainscore_vision][brainscore_vision] | [Freeman et al., 2013][freeman_2013]; [Majaj et al., 2015][majaj_2015] | Ephys | Macaque | Images | [BrainScore leaderboard (vision)][bs_leaderboard_vision] |
### ✨ BERG functions

#### 🔹 Initialize the BERG object
To use BERG's functions, you first need to import BERG and create a `berg` object.

```python
from berg import BERG

# Initialize BERG with the path to the toolkit
berg = BERG(berg_dir="path/to/brain-encoding-response-generator")
```
#### 🔹 Generate in silico neural responses to stimuli
**Step 1:** Load an encoding model of your choice using the `get_encoding_model` function.

```python
# Load an example fMRI encoding model
fmri_model = berg.get_encoding_model(
    "fmri-nsd_fsaverage-huze",
    subject=1,
    device="cpu"
)

# Load an example EEG encoding model
eeg_model = berg.get_encoding_model(
    "eeg-things_eeg_2-vit_b_32",
    subject=1,
    device="auto"
)
```
**Step 2:** Generate in silico neural responses to stimuli using the `encode` function.

```python
# Encode fMRI responses to images, additionally returning metadata
insilico_fmri, insilico_fmri_metadata = berg.encode(
    fmri_model,
    images,
    return_metadata=True  # if needed
)

# Encode EEG responses to images without metadata
insilico_eeg = berg.encode(eeg_model, images)
```
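The `images` variable in the snippets above is a stimulus batch that you supply; its exact shape and dtype are model-specific, so check the relevant model card. As a purely illustrative sketch, assuming a NumPy batch of RGB images in channels-first layout:

```python
import numpy as np

# Hypothetical stimulus batch: 2 RGB images of 224x224 pixels in
# (batch, channels, height, width) layout with uint8 pixel values.
# The required shape/dtype varies per encoding model, so consult its model card.
images = np.random.randint(0, 256, size=(2, 3, 224, 224), dtype=np.uint8)
print(images.shape)  # (2, 3, 224, 224)
```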
#### 🔹 Get the models' metadata
You can also load the encoding models' metadata without having to load the models themselves.
```python
# Load the encoding models' metadata
metadata = berg.get_model_metadata(
    "fmri-nsd_fsaverage-huze",
    subject=1
)
```
For more detailed information on how to use these functions, which parameters are available, and the content of the model metadata files, refer to the [model cards in the documentation][model_cards], or to the Tutorials below ⬇️.
### 💻 Tutorials
We provide several tutorials to help you get started with BERG (you can run these tutorials on Colab or locally as Jupyter Notebooks):
Using BERG:
- [Quickstart Tutorial](https://drive.google.com/f
