SVETlANNa
Open source library for simulation and engineering of diffractive neural networks
SVETlANNa is an open-source Python library for the simulation of free-space optical setups and neuromorphic systems such as diffractive neural networks. It is built primarily on the PyTorch framework, leveraging key features such as tensor-based computations and efficient parallel processing. At its core, SVETlANNa relies on Fourier optics and supports multiple propagation models, including the angular spectrum method and the Fresnel approximation.
A companion GitHub repository, SVETlANNa.docs, contains numerous application examples in the form of Jupyter notebooks.
The name of the library combines the Russian word "svet", meaning "light" in English, with the abbreviation ANN (artificial neural network); as a whole, it also sounds like the Russian female name Svetlana.
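As background, the angular spectrum method at the core of the library can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not the svetlanna API; all names here are hypothetical:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar field sampled on an n x n grid with pitch dx."""
    n = field.shape[-1]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                          # spatial frequencies, 1/m
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2.0 * np.pi) ** 2 * (fxx**2 + fyy**2)
    kz = np.sqrt(np.clip(kz_sq, 0.0, None))               # evanescent components clamped
    transfer = np.exp(1j * kz * z)                        # free-space transfer function
    # Filter the plane-wave spectrum of the field and transform back.
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# A unit-amplitude plane wave stays a plane wave (up to a global phase):
u0 = np.ones((64, 64), dtype=complex)
u1 = angular_spectrum_propagate(u0, wavelength=633e-9, dx=10e-6, z=1e-3)
```

The Fresnel approximation replaces the exact transfer function with its paraxial (quadratic-phase) expansion; the overall FFT-filter-IFFT structure is the same.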
Abbreviations
NN - Neural Network
ANN - Artificial Neural Network
ONN - Optical Neural Network
DONN - Diffractive Optical Neural Network
DOE - Diffractive Optical Element
SLM - Spatial Light Modulator
Features
- built on PyTorch
- forward propagation models include the angular spectrum method and the Fresnel approximation
- solves the classical DOE/SLM optimization problem with the Gerchberg-Saxton and hybrid input-output algorithms
- support for custom elements and optimization methods
- support for various free-space ONN architectures including feed-forward NN, autoencoders, and recurrent NN
- cross-platform
- full GPU acceleration
- companion repository with numerous .ipynb examples
- custom logging, project management, analysis, and visualization tools
- tests covering the whole functionality
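For reference, the Gerchberg-Saxton phase-retrieval loop mentioned above can be sketched as follows. This is a generic NumPy illustration under a single-Fourier-transform propagation model; the function and variable names are illustrative, not the svetlanna API:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=50):
    """Retrieve a phase mask mapping |source_amp| to |target_amp| in the far field."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)  # random initial phase
    for _ in range(iterations):
        near = source_amp * np.exp(1j * phase)           # impose source amplitude
        far = np.fft.fft2(near)                          # propagate to the far field
        far = target_amp * np.exp(1j * np.angle(far))    # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))              # back-propagate, keep phase only
    return phase

# Toy example: shape a uniform beam into a centered square spot.
n = 32
source = np.ones((n, n))
target = np.zeros((n, n))
target[12:20, 12:20] = 1.0
mask = gerchberg_saxton(source, np.fft.ifftshift(target))
```

The hybrid input-output algorithm modifies the near-field update step to escape stagnation, but follows the same alternating-projection structure.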
Possible applications
- modeling and optimization of optical systems and optical beams propagating in free space;
- calculation of parameters of phase masks, DOEs, and SLMs for both classical optical systems and neuromorphic optical computers;
- modeling and optimization of ONN and DONN parameters for all-optical classification and prediction tasks.
Recommended technical requirements:
- Intel Core i5-8400 / AMD Ryzen 5 1400 or higher;
- more than 8 GB of memory;
- at least 16 GB of free HDD/SSD space (the virtual environment takes about 6 GB);
- for running on a GPU, an NVIDIA card with CUDA support and at least 2 GB of memory;
- Windows 10 or later, or Linux with kernel 5.15 or later;
- Python 3.11.
Installation, Usage and Examples
There are two options: install the library from pip or run it from source.
Installation From PyPI
pip install svetlanna
Running From Source
We recommend installing git on your system first.
- Clone the repository:
git clone https://github.com/CompPhysLab/SVETlANNa.git
- Move to the project directory:
cd SVETlANNa
- Create a virtual environment for Python 3.11 (see venv documentation) and activate it.
- Install Poetry (version 2.0.0 or greater):
pip install poetry
- Install all dependencies:
poetry install --all-extras
- Install PyTorch (check the instructions on the official site) and choose the options appropriate to your system. To run the library on a GPU, you need a suitable graphics card and a PyTorch build that supports it.
pip install torch
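After this step, you can optionally sanity-check the installation with a short snippet (generic Python, not part of svetlanna):

```python
# Check that PyTorch is importable and whether CUDA is visible.
import importlib.util

torch_present = importlib.util.find_spec("torch") is not None
print("torch installed:", torch_present)
if torch_present:
    import torch
    # True only with a CUDA-capable card and a CUDA-enabled PyTorch build.
    print("CUDA available:", torch.cuda.is_available())
```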
Running Tests
The tests can only be executed when the library is run from source; the pip installation does not include them.
NOTE: running the tests requires at least 4 GB of free memory.
To run the tests, execute the following command (within the activated virtual environment, in the library folder):
pytest
Documentation
Running examples with the library installed from pip
Examples of various scenarios and setups are provided in the repository SVETlANNa.docs.
To run the examples locally, follow these instructions:
- Clone the repository (git must be installed):
git clone https://github.com/CompPhysLab/SVETlANNa.docs.git
- Go to the project directory:
cd SVETlANNa.docs
- Create a virtual environment for Python 3.11 (see venv documentation) and activate it. All subsequent actions should be done within the activated environment.
- Install the main library:
pip install svetlanna
- Install PyTorch (check the instructions on the official site) and choose the options appropriate to your system. To run the library on a GPU, you need a suitable graphics card and a PyTorch build that supports it.
- If Jupyter Notebook is not installed on your system, install it:
pip install notebook
- Install additional dependencies:
pip install reservoirpy matplotlib tqdm requests av scikit-image py-cpuinfo gputil
- Run the Jupyter notebook:
jupyter notebook
and open the page in your browser (see "Or copy and paste one of these URLs:" in the terminal output).
- Go to the SVETlANNa.docs folder. Examples are grouped into subject subfolders in the form of .ipynb files. To run an example, open its file in Jupyter Notebook and run all code cells (Shift+Enter).
Running examples with SVETlANNa installed from source
- Clone the repository (git must be installed):
git clone https://github.com/CompPhysLab/SVETlANNa.docs.git
All subsequent actions should be done within the activated environment that was used to install the SVETlANNa library from source.
- If Jupyter Notebook is not installed on your system, install it:
pip install notebook
- Install additional dependencies:
pip install reservoirpy matplotlib tqdm requests av scikit-image py-cpuinfo gputil
- Run the Jupyter notebook:
jupyter notebook
and open the page in your browser (see "Or copy and paste one of these URLs:" in the terminal output).
- Go to the SVETlANNa.docs folder. Examples are grouped into subject subfolders in the form of .ipynb files. To run an example, open its file in Jupyter Notebook and run all code cells (Shift+Enter).
Examples
Result of training a feed-forward optical neural network on the MNIST classification task: an image of the digit "8" is passed through a stack of 10 phase plates with optimized phase masks. Selected regions of the detector correspond to different digit classes; the class is identified by the detector region that measures the maximum optical intensity.
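The detector-region readout described above can be illustrated with a minimal sketch (the names here are illustrative, not the svetlanna API):

```python
import numpy as np

def classify_by_detector(intensity, regions):
    """Return the index of the region with the largest integrated intensity.

    intensity: 2D array of optical intensity on the detector.
    regions:   list of (row_slice, col_slice) pairs, one per class.
    """
    powers = [intensity[rows, cols].sum() for rows, cols in regions]
    return int(np.argmax(powers))

# Toy detector: three vertical stripes for three classes; the optical
# field concentrates its power in the middle stripe, so class 1 wins.
img = np.zeros((9, 9))
img[:, 3:6] = 1.0
regions = [(slice(None), slice(0, 3)),
           (slice(None), slice(3, 6)),
           (slice(None), slice(6, 9))]
print(classify_by_detector(img, regions))  # → 1
```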
Examples of visualization of optical setups and optical fields:
<img src="./pics/visualization.png" alt="drawing" width="400"/>

Example of a five-layer DONN trained to recognize digits from the MNIST database:
<img src="./pics/MNIST example 1.png" alt="drawing" width="400"/> <img src="./pics/MNIST example 2.png" alt="drawing" width="400"/> <img src="./pics/MNIST example 3.png" alt="drawing" width="400"/>

Contributing
Contributions are always welcome!
See contributing.md for ways to get started.
Please adhere to this project's code of conduct.
Acknowledgements
The work on this repository was initiated within a grant from the Foundation for Assistance to Small Innovative Enterprises.
Authors
License
