SpikingJelly


English | 中文(Chinese)


SpikingJelly is an open-source deep learning framework for Spiking Neural Network (SNN) based on PyTorch.

The documentation of SpikingJelly is written in both English and Chinese.

Changelog

We are actively maintaining and improving SpikingJelly. Below are our future plans and highlights of each release.

Highlights

Our new work Towards Lossless Memory-efficient Training of Spiking Neural Networks via Gradient Checkpointing and Spike Compression was recently accepted by ICLR 2026! The automatic training memory optimization tool is available in spikingjelly.activation_based.memopt. Read the tutorial for more information.

In the latest version (the GitHub version),

  • IFNode, LIFNode and ParametricLIFNode are now equipped with Triton backends;
  • FlexSN is available for converting PyTorch spiking neuronal dynamics to Triton kernels;
  • SpikingSelfAttention and QKAttention are available;
  • memopt is available;
  • nir_exchange is available;
  • op_counter is available;
  • spikingjelly.activation_based.layer, spikingjelly.activation_based.functional and spikingjelly.datasets are refactored;
  • Dataset implementations are refactored;
  • Docs and tutorials are updated;
  • Conv-bn fusion functions in spikingjelly.activation_based.functional are deprecated; use PyTorch's fuse_conv_bn_eval instead.
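The recommended replacement, PyTorch's fuse_conv_bn_eval, rests on simple algebra: the batch-norm affine transform can be folded into the preceding convolution's weight and bias. A plain-Python scalar sketch of that algebra (illustrative only, not SpikingJelly's or PyTorch's actual code):

```python
import math

def fuse_scalar_conv_bn(w, b, mean, var, gamma, beta, eps=1e-3):
    """Fold a (scalar, per-channel) batch-norm into the preceding conv parameters.
    y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
      = (gamma*w/s) * x + (gamma*(b - mean)/s + beta),  where s = sqrt(var + eps)
    """
    s = math.sqrt(var + eps)
    w_fused = gamma * w / s
    b_fused = gamma * (b - mean) / s + beta
    return w_fused, b_fused

# The fused parameters reproduce conv followed by BN (eval mode) exactly.
w, b, mean, var, gamma, beta = 0.5, 0.1, 0.2, 4.0, 1.5, -0.3
wf, bf = fuse_scalar_conv_bn(w, b, mean, var, gamma, beta)
x = 3.0
y_sequential = gamma * (w * x + b - mean) / math.sqrt(var + 1e-3) + beta
y_fused = wf * x + bf
```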

Planned

We are going to release version 0.0.0.1.0 soon.

  • [x] Add Triton backend for further acceleration on GPU.
  • [x] Add a transpiler for converting PyTorch spiking neurons to Triton kernels, which will be more flexible than the existing auto_cuda subpackage.
  • [x] Add spiking self-attention implementations.
  • [x] Update docs and tutorials.

Other long-term plans include:

  • [x] Add NIR support.
  • [x] Optimize training memory cost.
  • [ ] Accelerate on Huawei NPU.

For early-stage experimental features, see our companion project flashsnn. New ideas are prototyped in flashsnn before merging into SpikingJelly.

Version notes

  • Odd version numbers denote developing versions, which are updated with the GitHub/OpenI repositories. Even version numbers denote stable versions, which are available on PyPI.

  • The default docs are for the latest developing version. If you are using a stable version, do not forget to switch to the docs for that version.

  • Since version 0.0.0.0.14, modules including clock_driven and event_driven have been renamed. Please refer to the tutorial Migrate From Old Versions.

  • If you use an old version of SpikingJelly, you may encounter fatal bugs. Refer to Bugs History with Releases for more details.
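The odd/even version rule above can be checked mechanically; for example (is_stable is a hypothetical helper for illustration, not a SpikingJelly API):

```python
def is_stable(version):
    """Per SpikingJelly's convention: an even final version component means a
    stable release (PyPI); an odd one means a developing version (GitHub/OpenI)."""
    return int(version.split(".")[-1]) % 2 == 0

print(is_stable("0.0.0.0.14"))  # stable release → True
print(is_stable("0.0.0.0.15"))  # developing version → False
```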


Installation

Note that SpikingJelly is based on PyTorch. Please make sure that you have installed PyTorch, torchvision, and torchaudio before installing SpikingJelly. The latest version of SpikingJelly requires torch>=2.2.0 and is tested on torch==2.7.1.

Install the last stable version from PyPI:

pip install spikingjelly

Install the latest developing version from the source code:

From GitHub:

git clone https://github.com/fangwei123456/spikingjelly.git
cd spikingjelly
pip install .

From OpenI:

git clone https://openi.pcl.ac.cn/OpenI/spikingjelly.git
cd spikingjelly
pip install .

Optional Dependencies

To enable the cupy backend, install CuPy.

pip install cupy-cuda12x # for CUDA 12.x
pip install cupy-cuda11x # for CUDA 11.x

To enable the triton backend, make sure that Triton is installed. Typically, triton is installed together with PyTorch 2.x. We test the triton backend on triton==3.3.1.

pip install triton==3.3.1

To enable nir_exchange, install NIR and NIRTorch.

pip install nir nirtorch
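If you are unsure which optional dependencies are present in your environment, a quick stdlib-only check is possible (a convenience sketch, not a SpikingJelly API; the feature-to-module mapping follows the install commands above):

```python
import importlib.util

def backend_available(module_name):
    """Return True if the named module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Optional features and the modules they require, per the install commands above.
optional = {"cupy backend": "cupy", "triton backend": "triton", "nir_exchange": "nir"}
for feature, module in optional.items():
    status = "available" if backend_available(module) else "not installed"
    print(f"{feature}: {status}")
```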

Build SNNs In An Unprecedentedly Simple Way

SpikingJelly is user-friendly. Building an SNN with SpikingJelly is as simple as building an ANN in PyTorch:

import torch.nn as nn
from spikingjelly.activation_based import layer, neuron, surrogate

tau = 2.0  # membrane time constant of the LIF neurons
nn.Sequential(
    layer.Flatten(),
    layer.Linear(28 * 28, 10, bias=False),
    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())
)
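The surrogate.ATan() in the snippet above stands in for the non-differentiable Heaviside spike function during backpropagation. A stdlib-only sketch of the commonly used ATan surrogate derivative (the formula and default alpha are illustrative, not lifted from SpikingJelly's source):

```python
import math

def atan_surrogate_grad(x, alpha=2.0):
    """Derivative of the ATan surrogate, used in place of the Heaviside step's
    zero-almost-everywhere gradient during backprop. alpha controls sharpness."""
    return alpha / (2.0 * (1.0 + (math.pi / 2.0 * alpha * x) ** 2))
```

The gradient peaks at the firing threshold (x = 0) and decays symmetrically away from it, so membrane potentials near threshold receive the most learning signal.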

This simple network with a Poisson encoder can achieve 92% accuracy on the MNIST test dataset. Refer to the tutorial for more details. You can also run the following command in a terminal to train it on MNIST classification:

python -m spikingjelly.activation_based.examples.lif_fc_mnist -tau 2.0 -T 100 -device cuda:0 -b 64 -epochs 100 -data-dir <PATH to MNIST> -amp -opt adam -lr 1e-3 -j 8
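The Poisson encoder mentioned above turns static images into spike trains by treating each normalized pixel intensity as a per-step firing probability. A stdlib-only sketch of the idea (illustrative, not SpikingJelly's encoder class):

```python
import random

def poisson_encode(intensity, T, rng):
    """Rate-code a normalized pixel intensity in [0, 1] into T binary spikes.
    At each time step the pixel fires with probability equal to its intensity,
    so the empirical firing rate over T steps approximates the intensity."""
    return [1 if rng.random() < intensity else 0 for _ in range(T)]

rng = random.Random(0)
spikes = poisson_encode(0.8, 100, rng)
rate = sum(spikes) / len(spikes)  # empirical firing rate, close to 0.8
```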

Fast And Handy ANN-SNN Conversion

SpikingJelly implements a relatively general ANN-SNN conversion interface. Users can perform the conversion directly from PyTorch models, and can also customize the conversion mode.

import torch.nn as nn

class ANN(nn.Module):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Conv2d(32, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Conv2d(32, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Flatten(),
            nn.Linear(32, 10)
        )

    def forward(self, x):
        return self.network(x)

This simple network with analog encoding can achieve 98.44% accuracy on the MNIST test dataset after conversion. Read the tutorial for more details. You can also run the following code in a Python terminal to train and convert a model on MNIST:

>>> import spikingjelly.activation_based.ann2snn.examples.cnn_mnist as cnn_mnist
>>> cnn_mnist.main()
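Conversion works because an integrate-and-fire neuron driven by a constant input fires at a rate approximating a clipped ReLU of that input. A plain-Python sketch of this rate-coding principle (conceptual only, not the ann2snn implementation; parameter defaults are illustrative):

```python
def if_firing_rate(a, T=100, v_th=1.0):
    """Firing rate of an integrate-and-fire neuron driven by constant input a.
    Over T steps the rate approximates max(0, a), saturating at 1 spike/step,
    which is the mechanism ANN-SNN conversion uses to map ReLU activations
    onto spike rates."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a                # integrate the input
        if v >= v_th:
            spikes += 1
            v -= v_th         # soft reset keeps the residual charge
    return spikes / T

print(if_firing_rate(0.3))   # ≈ 0.3, like ReLU(0.3)
print(if_firing_rate(-0.5))  # 0.0, like ReLU(-0.5)
```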

CUDA/Triton-Enhanced Neuron

SpikingJelly provides multiple backends for multi-step neurons. You can use the user-friendly torch backend for easy coding and debugging, and the cupy or triton backend for faster training.
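Conceptually, a multi-step neuron iterates its membrane dynamics over the whole time dimension in one call, and fusing that loop is what the cupy/triton kernels accelerate. A plain-Python sketch of multi-step LIF dynamics with hard reset (a common discrete-time form; the update rule and defaults are illustrative, not SpikingJelly's exact implementation):

```python
def lif_multi_step(x_seq, tau=2.0, v_th=1.0, v_reset=0.0):
    """Run LIF membrane dynamics over a whole input sequence at once.
    Per step: v <- v + (x - (v - v_reset)) / tau; emit a spike and hard-reset
    to v_reset when v crosses the threshold v_th."""
    v = v_reset
    spikes = []
    for x in x_seq:
        v = v + (x - (v - v_reset)) / tau  # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = v_reset                    # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant suprathreshold drive produces regular spiking.
print(lif_multi_step([1.5] * 6))  # → [0, 1, 0, 1, 0, 1]
```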

The following figure compares the execution time of the torch and cupy backends for multi-step LIF neurons (float32). The triton backend is generally even more efficient than the cupy backend.

<img src="./docs/source/_static/tutorials/11_cext_neuron_with_lbl/exe_time_fb.png" alt="exe_time_fb" />

float16 is also supported by the cupy and triton backends and can be used in automatic mixed precision training.

To use the cupy backend, please install CuPy as described in Optional Dependencies above.
