Interlacer
Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis
Joint frequency- and image-space learning for Fourier imaging tasks.

keywords: image reconstruction, motion correction, denoising, magnetic resonance imaging, deep learning
Dependencies
All dependencies required to run this code are specified in environment.yml. To create an Anaconda environment with those dependencies installed, run conda env create --name <env> --file environment.yml. You will also need to add this repo to your Python path (if you're using conda, run conda-develop /path/to/interlacer/).
Layer Implementation
If you'd like to incorporate our joint learning strategy into your own networks, we provide a standalone Keras Layer in interlacer/layers.py. This layer currently supports only 2D inputs.
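To illustrate the idea behind the layer (this is a conceptual numpy sketch, not the library's Keras implementation; the function name mix_domains and the alpha parameter are invented for illustration), a joint frequency/image-space layer maintains features in both domains and mixes them via the 2D FFT:

```python
import numpy as np

# Conceptual sketch only: maintain a feature map in both image space and
# frequency (k-) space, and let each branch see information from the other
# domain by converting via the 2D FFT before mixing.
def mix_domains(image_feat, freq_feat, alpha=0.5):
    """Blend image-space features with frequency-space features
    (and vice versa). alpha controls how much of each domain's
    own features are kept."""
    # Bring frequency-space features into image space
    freq_as_image = np.real(np.fft.ifft2(freq_feat))
    # Bring image-space features into frequency space
    image_as_freq = np.fft.fft2(image_feat)
    mixed_image = alpha * image_feat + (1 - alpha) * freq_as_image
    mixed_freq = alpha * freq_feat + (1 - alpha) * image_as_freq
    return mixed_image, mixed_freq

img = np.random.rand(8, 8)          # toy image-space feature map
k = np.fft.fft2(img)                # its k-space representation
mi, mf = mix_domains(img, k)
```

In the actual library, the mixing and the subsequent convolutions in each domain are learned; see interlacer/layers.py for the real implementation.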
Training Code
Unfortunately, we are unable to provide the images used for training, due to license restrictions. However, we provide code to train on alternative datasets. To specify your own dataset paths and paths for output of training results, fill in the appropriate fields in scripts/filepaths.py.
The entry point to our training code is scripts/train.py, which is called via python scripts/train.py $path_to_config.ini. Running this script:
- reads the experiment configuration specified
- loads the appropriate model architecture
- loads the training data
- executes training
- writes all training logs to a subdirectory created under training/.
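For orientation, a generated configuration file might look something like the following. Note that the section and field names here are purely illustrative, not the actual schema; the real options are defined in scripts/make_configs.py and consumed by scripts/train.py.

```ini
; Hypothetical example only -- see scripts/make_configs.py for the
; actual sections and fields.
[DATA]
dataset = my_dataset
task = reconstruction

[MODEL]
architecture = interlacer

[TRAINING]
batch_size = 8
num_epochs = 500
```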
We provide a helper script to generate config files for experiments comparing multiple models in scripts/make_configs.py. This script allows the user to specify the name of an experiment as well as lists of model/data parameters to be tried (e.g. a list of model architectures). Running scripts/make_configs.py creates the subdirectory configs/$experiment_name, which contains a single configuration file for each specified model/data combination.
For SLURM users, running python scripts/run_experiment.py ../configs/$experiment_name starts training (by running train.py) for each configuration file within that directory.
Pretrained Models
Pretrained models are available in the pretrained models directory. This directory also includes a Jupyter notebook demonstrating how to load the models and run inference.
Paper
If you use the ideas or implementation in this repository, please cite our paper:
@article{melba:2022:018:singh,
    title = "Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis",
    author = "Singh, Nalini M. and Iglesias, Juan Eugenio and Adalsteinsson, Elfar and Dalca, Adrian V. and Golland, Polina",
    journal = "Machine Learning for Biomedical Imaging",
    volume = "1",
    issue = "June 2022 issue",
    year = "2022"
}
