
LenslessPiCam
=============

.. image:: https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white
   :target: https://github.com/LCAV/LenslessPiCam
   :alt: GitHub page

.. image:: https://readthedocs.org/projects/lensless/badge/?version=latest
   :target: http://lensless.readthedocs.io/en/latest/
   :alt: Documentation Status

.. image:: https://joss.theoj.org/papers/10.21105/joss.04747/status.svg
   :target: https://doi.org/10.21105/joss.04747
   :alt: DOI

.. image:: https://static.pepy.tech/badge/lensless
   :target: https://www.pepy.tech/projects/lensless
   :alt: Downloads

.. image:: https://colab.research.google.com/assets/colab-badge.svg
   :target: https://lensless.readthedocs.io/en/latest/examples.html
   :alt: notebooks

.. image:: https://img.shields.io/badge/Google_Slides-yellow
   :target: https://docs.google.com/presentation/d/1PcNhMfjATSwcpbHUMrmc88ciQmheBJ7alz8hel8xnGU/edit?usp=sharing
   :alt: slides

.. image:: https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-dark.svg
   :target: https://huggingface.co/bezzam
   :alt: huggingface

A Hardware and Software Toolkit for Lensless Computational Imaging

.. image:: https://github.com/LCAV/LenslessPiCam/raw/main/scripts/recon/example.png
   :alt: Lensless imaging example
   :align: center

This toolkit has everything you need to perform imaging with a lensless camera. The sensor in most examples is the `Raspberry Pi HQ camera sensor <https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera>`__, as it is low cost (around 50 USD) and has a high resolution (12 MP). The lensless encoder/mask used in most examples is either a piece of tape or a `low-cost LCD <https://www.adafruit.com/product/358>`__. As modularity is a key feature of this toolkit, we try to support different sensors and/or lensless encoders.

The toolkit includes:

* Training scripts/configuration for various learnable, physics-informed reconstruction approaches, as shown `here <https://github.com/LCAV/LenslessPiCam/blob/main/configs/train#training-physics-informed-reconstruction-models>`__.
* Camera assembly tutorials (`link <https://lensless.readthedocs.io/en/latest/building.html>`__).
* Measurement scripts (`link <https://lensless.readthedocs.io/en/latest/measurement.html>`__).
* Dataset preparation and loading tools, with `Hugging Face <https://huggingface.co/bezzam>`__ integration (`slides <https://docs.google.com/presentation/d/18h7jTcp20jeoiF8dJIEcc7wHgjpgFgVxZ_bJ04W55lg/edit?usp=sharing>`__ on uploading a dataset to Hugging Face with `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/data/upload_dataset_huggingface.py>`__).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, multi-Wiener deconvolution network, pre- and post-processors).
* `Pre-trained models <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/model_dict.py>`__ that can be loaded from `Hugging Face <https://huggingface.co/bezzam>`__, for example in `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/diffusercam_mirflickr.py>`__.
* `Mask design <https://lensless.readthedocs.io/en/latest/mask.html>`__ and `fabrication <https://lensless.readthedocs.io/en/latest/fabrication.html>`__ tools.
* `Simulation tools <https://lensless.readthedocs.io/en/latest/simulation.html>`__.
* `Evaluation tools <https://lensless.readthedocs.io/en/latest/evaluation.html>`__ (e.g. PSNR, LPIPS, SSIM, visualizations).
* `Demo <https://lensless.readthedocs.io/en/latest/demo.html#telegram-demo>`__ that can be run on Telegram!
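Most of these components build on the same forward model: a lensless measurement is (approximately) the scene convolved with the camera's point spread function (PSF). Below is a minimal NumPy sketch of that model and the simplest possible reconstruction, on purely synthetic data; it does not use the toolkit's own API, and the sizes and regularization value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic 64x64 scene and a pseudo-random PSF (a real PSF would be measured)
scene = np.zeros((64, 64))
scene[20:30, 25:40] = 1.0
psf = rng.random((64, 64))
psf /= psf.sum()  # normalize so the mask preserves energy

# lensless measurement: circular convolution of the scene with the PSF (via FFT)
H = np.fft.fft2(psf)
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# simplest reconstruction: regularized (Wiener-like) inverse filter
eps = 1e-6  # avoids division by near-zero frequency responses
estimate = np.real(
    np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H) / (np.abs(H) ** 2 + eps))
)

print(f"mean abs reconstruction error: {np.abs(estimate - scene).mean():.4f}")
```

In practice measurements are noisy and the PSF must be measured, which is why the iterative and learned reconstruction algorithms listed above are preferred over a plain inverse filter.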

Please refer to the `documentation <http://lensless.readthedocs.io>`__ for more details, while an overview of example notebooks can be found `here <https://lensless.readthedocs.io/en/latest/examples.html>`__.

We've also written a few Medium articles to guide users through the process of building the camera, measuring data with it, and reconstruction. They are all laid out in `this post <https://medium.com/@bezzam/a-complete-lensless-imaging-tutorial-hardware-software-and-algorithms-8873fa81a660>`__.

Collection of lensless imaging research
---------------------------------------

The following works have been implemented in the toolkit:

Reconstruction algorithms:

* ADMM with total variation regularization and 3D support (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/admm.py#L24>`__, `usage <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/admm.py>`__). [1]_
* Unrolled ADMM (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/unrolled_admm.py#L20>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#unrolled-admm>`__). [2]_
* Unrolled ADMM with compensation branch (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/utils.py#L84>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#compensation-branch>`__). [3]_
* Trainable inversion from FlatNet (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_inversion.py#L11>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#trainable-inversion>`__). [4]_
* Multi-Wiener deconvolution network (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/multi_wiener.py#L87>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multi-wiener-deconvolution-network>`__). [5]_
* SVDeconvNet (for learning multi-PSF deconvolution) from PhoCoLens (`source code <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/sv_deconvnet.py#L42>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multi-psf-camera-inversion>`__). [6]_
* Incorporating a pre-processor (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_recon.py#L52>`__). [7]_
* Accounting for external illumination (`source code 1 <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/recon/trainable_recon.py#L64>`__, `source code 2 <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/scripts/recon/train_learning_based.py#L458>`__, `usage <https://github.com/LCAV/LenslessPiCam/tree/main/configs/train#multilens-under-external-illumination>`__). [8]_
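As a concrete illustration of the classical end of this list, here is a textbook FISTA loop for the lensless deconvolution problem (minimize the data fit subject to non-negativity), written in plain NumPy on synthetic data. This is a sketch of the general technique, not the toolkit's implementation; with a pseudo-random PSF many frequencies are only weakly transferred, which is precisely why the learned priors and processors above help.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32

# synthetic scene and a diffuser-like pseudo-random PSF
scene = np.zeros((n, n))
scene[10:18, 8:20] = 1.0
psf = rng.random((n, n))
psf /= psf.sum()

H = np.fft.fft2(psf)

def A(v):
    # forward operator: circular convolution with the PSF
    return np.real(np.fft.ifft2(np.fft.fft2(v) * H))

def AT(v):
    # adjoint operator: circular correlation with the PSF
    return np.real(np.fft.ifft2(np.fft.fft2(v) * np.conj(H)))

y = A(scene)  # simulated measurement

L = float(np.max(np.abs(H)) ** 2)  # Lipschitz constant of the gradient

# FISTA: accelerated proximal gradient with a non-negativity projection
x = np.zeros((n, n))
z = x.copy()
t = 1.0
for _ in range(300):
    x_new = np.maximum(z - AT(A(z) - y) / L, 0.0)  # gradient step + projection
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov momentum
    x, t = x_new, t_new

rel_residual = np.linalg.norm(A(x) - y) / np.linalg.norm(y)
print(f"relative data-fit residual: {rel_residual:.3f}")
```

The data fit converges quickly, but recovering the scene itself from such an ill-posed system is exactly where unrolling and trainable components earn their keep.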

Camera / mask design:

* Fresnel zone aperture mask pattern (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L823>`__). [9]_
* Coded aperture mask pattern (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L288>`__). [10]_
* Near-field phase retrieval for designing a high-contrast phase mask (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/mask.py#L706>`__). [11]_
* LCD-based camera, i.e. DigiCam (`source code <https://github.com/LCAV/LenslessPiCam/blob/d0261b4bc79ef05228b135e6898deb4f7793d1aa/lensless/hardware/trainable_mask.py#L117>`__). [7]_
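To give a flavor of what such mask tools compute, below is a sketch of a binary Fresnel zone aperture pattern using one common parameterization. The pitch and first-zone radius are illustrative values, and this is not the toolkit's mask module.

```python
import numpy as np

def fza_mask(n_px=256, pitch_m=2e-6, r1_m=60e-6):
    """Binary Fresnel zone aperture: a cell is open where cos(pi * r^2 / r1^2) >= 0.

    n_px: mask side length in cells; pitch_m: cell size in meters;
    r1_m: first-zone radius in meters. Values here are illustrative, not
    fabrication specs.
    """
    half = n_px * pitch_m / 2.0
    coords = (np.arange(n_px) + 0.5) * pitch_m - half  # cell-center coordinates
    xx, yy = np.meshgrid(coords, coords)
    r2 = xx**2 + yy**2
    return (np.cos(np.pi * r2 / r1_m**2) >= 0).astype(np.uint8)

mask = fza_mask()
print(mask.shape, f"open fraction: {mask.mean():.2f}")  # roughly half the cells open
```

Because the zone boundaries fall at radii proportional to the square root of the zone index, the open fraction stays close to one half regardless of mask size.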

Datasets (hosted on Hugging Face and downloaded via their API):

* DiffuserCam Lensless MIR Flickr dataset (`copy on Hugging Face <https://huggingface.co/datasets/bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM>`__). [2]_
* TapeCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/TapeCam-Mirflickr-25K>`__). [7]_
* DigiCam MIR Flickr (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-SingleMask-25K>`__). [7]_
* DigiCam MIR Flickr with multiple mask patterns (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-Mirflickr-MultiMask-25K>`__). [7]_
* DigiCam CelebA (`Hugging Face <https://huggingface.co/datasets/bezzam/DigiCam-CelebA-26K>`__). [7]_
* MultiFocal mask MIR Flickr under external illumination (`Hugging Face <https://huggingface.co/datasets/Lensless/MultiLens-Mirflickr-Ambient>`__). [8]_ Mask fabricated by [12]_.

Setup
-----

If you are just interested in using the reconstruction algorithms and plotting / evaluation tools, you can install the package via pip:

.. code:: bash

   pip install lensless

For plotting, you may also need to install `Tk <https://stackoverflow.com/questions/5459444/tkinter-python-may-not-be-configured-for-tk>`__.

For performing measurements, the expected workflow is to have a local computer which interfaces remotely with a Raspberry Pi equipped with the HQ camera sensor (or the V2 sensor). Instructions on building the camera can be found `here <https://lensless.readthedocs.io/en/latest/building.html>`__.

The software from this repository has to be installed on both your local machine and the Raspberry Pi. Note that we recommend using Python 3.11, as some Python library versions may not be available for earlier versions of Python. Moreover, its `end-of-life <https://endoflife.date/python>`__ is Oct 2027.
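Before installing, a quick stdlib-only check that your interpreter matches the recommendation above can save a confusing dependency error later (this snippet is a convenience, not part of the package):

```python
import sys

# LenslessPiCam recommends Python 3.11 (see note above)
major, minor = sys.version_info[:2]
if (major, minor) < (3, 11):
    print(f"Python {major}.{minor} detected; 3.11 is recommended")
else:
    print(f"Python {major}.{minor} detected; OK")
```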

Local machine setup
~~~~~~~~~~~~~~~~~~~

Below are commands that worked for our configuration (Ubuntu 22.04.5 LTS), but there are certainly other ways to download a repository and install the library locally.

Note that ``(lensless)`` is a convention to indicate that the virtual environment is activated. After activating your virtual environment, you only have to copy the command after ``(lensless)``.

.. code:: bash

   # download from GitHub
   git clone git@github.com:LCAV/LenslessPiCam.git
   cd LenslessPiCam

   # create virtual environment (as of Oct 4 2023, rawpy is not compatible with Python 3.12)
