
ConformalLayers: A non-linear sequential neural network with associative layers

ConformalLayers is a conformal embedding of sequential layers of Convolutional Neural Networks (CNNs) that allows associativity between operations like convolution, average pooling, dropout, flattening, padding, dilation, grouping, and stride. Such embedding allows associativity between layers of CNNs, considerably reducing the number of operations to perform inference in neural networks.
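The operation-count saving comes from associativity: a stack of layers that can each be embedded as a linear transformation collapses into a single transformation that is precomputed once, so inference needs one product instead of one per layer. A minimal NumPy sketch of the idea (illustrative only; it uses plain matrices, not ConformalLayers or its conformal embedding):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "layers", each represented as a linear map (a matrix).
W1 = rng.standard_normal((16, 32))
W2 = rng.standard_normal((8, 16))
W3 = rng.standard_normal((4, 8))

x = rng.standard_normal(32)  # an input feature vector

# Naive inference: apply the layers one by one (3 matrix-vector products).
y_sequential = W3 @ (W2 @ (W1 @ x))

# Associativity: collapse the whole stack into one matrix, once.
W = W3 @ W2 @ W1  # shape (4, 32)

# Inference is now a single matrix-vector product.
y_collapsed = W @ x

assert np.allclose(y_sequential, y_collapsed)
```

ConformalLayers generalizes this trick to convolution, pooling, and even the ReSPro activation by representing them all as tensors in a conformal model, so the collapse applies to non-linear sequential networks as well.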

This repository is an implementation of ConformalLayers written in Python, using Minkowski Engine and PyTorch as backends. It is a first step toward the use of activation functions, like ReSPro, that can be represented as tensors, depending on the geometry model.

Please cite our SIBGRAPI'21 and Pattern Recognition Letters papers if you use this code in your research. The papers present a complete description of the library:

@InProceedings{sousa_et_al-sibgrapi-2021,
  author    = {Sousa, Eduardo V. and Fernandes, Leandro A. F. and Vasconcelos, Cristina N.},
  title     = {{C}onformal{L}ayers: a non-linear sequential neural network with associative layers},
  booktitle = {Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images},
  year      = {2021},
  pages     = {386--393},
  doi       = {10.1109/SIBGRAPI54419.2021.00059},
  url       = {https://github.com/Prograf-UFF/ConformalLayers},
}

@Article{sousa_et_al-prl-166(1)-2023,
  author  = {Sousa, Eduardo V. and Vasconcelos, Cristina N. and Fernandes, Leandro A. F.},
  title   = {An analysis of {C}onformal{L}ayers' robustness to corruptions in natural images},
  journal = {Pattern Recognition Letters},
  year    = {2023},
  volume  = {166},
  number  = {1},
  pages   = {190--197},
  doi     = {10.1016/j.patrec.2022.11.002},
  url     = {https://github.com/Prograf-UFF/ConformalLayers},
}

Please let Eduardo Vera Sousa, Leandro A. F. Fernandes, and Cristina Nader Vasconcelos know if you want to contribute to this project. Also, do not hesitate to contact them if you encounter any problems.


Requirements

Make sure that you have the following tools and Python modules before attempting to use ConformalLayers:

The following tool is optional:

The following Python modules are optional, but necessary for running the experiments:

The complete set of required Python modules will be installed automatically by following the instructions presented below.

Installation

Python System

No magic needed here. Just run:

git clone https://github.com/Prograf-UFF/ConformalLayers
cd ConformalLayers
python setup.py install

Note that Minkowski Engine may require extra libraries such as MKL or OpenBLAS; see its Quick Start Tutorial for details. Also, make sure that Minkowski Engine was not built with the CPU_ONLY flag set, which happens when the CPU-only version of PyTorch is installed. To avoid this, first install the CUDA Toolkit/SDK and a CUDA-enabled build of PyTorch for your platform.

Docker

Just run:

git clone https://github.com/Prograf-UFF/ConformalLayers
cd ConformalLayers
docker build -t clayers .

Once the image is built, check that it loads ConformalLayers correctly:

docker run clayers python3 -c "import cl; print(cl.__version__)"

and run it (detached) with GPUs:

docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 -it -d clayers

Running Experiments

First, go to the <ConformalLayers-dir>/experiments folder. Run all experiments using the following commands:

python run_all_bechmarks.py --wandb_entity NAME
python run_all_sweeps.py --wandb_entity NAME

Results will be available after summarization:

python summarize_bechmarks.py --wandb_entity NAME
python summarize_sweeps.py --wandb_entity NAME

Set NAME to the name of your entity in Weights & Biases.

Individual benchmarks and sweeps can be run using the scripts benchmark.py and sweep.py. Use --help with these scripts to get help on the available arguments.

The files in the <ConformalLayers-dir>/experiments/stuff/networks folder contain the description of each architecture used in our experiments and demonstrate the usage of the classes and methods of our library.

Running Unit Tests

First, go to the <ConformalLayers-dir>/tests folder. Run all tests using the following command:

python run_all_tests.py

To run the tests for each module, run:

python test-<module_name>.py

Documentation

Here you will find a brief description of the classes available to the user. Detailed documentation is not ready yet.


Modules

Here we present the main modules implemented in our framework. They can be found inside the cl package. Most of the modules are used just like their PyTorch counterparts, so users with some background in that framework will benefit from this implementation. For users not familiar with PyTorch, the usage is still quite simple and intuitive.

| Module | Description |
| --- | --- |
| cl.ConformalLayers | Equivalent to the nn.Sequential module from PyTorch |
| cl.Conv1d, cl.Conv2d, cl.Conv3d | Convolution operation implemented for n-D signals |
| cl.AvgPool1d, cl.AvgPool2d, cl.AvgPool3d | Average pooling operation implemented for n-D signals |
| cl.BaseActivation | Abstract class for activation function layers. To extend the library, implement this class |
| cl.Dropout | The only regularization available in this version. During the training phase, it randomly shuts down neurons with a probability p, passed as an argument to this module |
| cl.Flatten | Flattens a contiguous range of dims into a tensor |
| cl.Identity | A placeholder identity operator that is argument-insensitive |
| cl.ReSPro | The layer corresponding to the ReSPro activation function, a linear function with non-linear behavior that can be encoded as a tensor. Its non-linearity is controlled by a parameter α that can be provided as an argument or inferred from the data |

To define a sequential network, queue the layers in an instance of cl.ConformalLayers. This class is very similar to the nn.Sequential module from PyTorch and plays an important role in this task, as you can see by comparing the code snippets below:

# This one is built with pure PyTorch
import torch
import torch.nn as nn

class D3ModNet(nn.Module):

    def __init__(self) -> None:
        super(D3ModNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(32, 32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(32, 32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.classifier = nn.Linear(128, 10)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# This one is built with ConformalLayers
import cl
import torch
import torch.nn as nn

class D3ModNetCL(nn.Module):

    def __init__(self) -> None:
        super(D3ModNetCL, self).__init__()
        self.features = cl.ConformalLayers(
            cl.Conv2d(3, 32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(32, 32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(32, 32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
        )
        self.classifier = nn.Linear(128, 10)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images)
        x = torch.flatten(x, 1)
        return self.classifier(x)