CORING
:ring: Efficient tensor decomposition-based filter pruning
<div> <div align="center"> <a href='https://github.com/pvti' target='_blank'>Van Tien PHAM<sup>1,✉</sup></a>  <a href='https://yzniyed.blogspot.com/p/about-me.html' target='_blank'>Yassine ZNIYED<sup>1</sup></a>  <a href='http://tpnguyen.univ-tln.fr/' target='_blank'>Thanh Phuong NGUYEN<sup>1</sup></a> </div>
<div> <div align="center"> <sup>1</sup><em>Université de Toulon, Aix Marseille Université, CNRS, LIS, UMR 7020, France</em>  <sup>✉</sup><em>Corresponding Author</em> </div>
<div style="text-align: justify">
We present a novel filter pruning method for neural networks, named CORING, for effiCient tensOr decomposition-based filteR prunING. The proposed approach preserves the multidimensional nature of filters by employing tensor decomposition, yielding a more efficient and accurate similarity measure than traditional methods that operate on vectorized or matricized filters. The result is more effective filter pruning without the loss of valuable information. Experiments conducted on various architectures demonstrate its effectiveness: CORING outperforms state-of-the-art methods in FLOPs reduction, parameter reduction, and validation accuracy. Moreover, CORING can improve model generalization, boosting accuracy in several experiments. For example, with VGG-16 on CIFAR-10, we achieve a 58.1% FLOPs reduction and remove 81.6% of the parameters while increasing accuracy by 0.46%. Even on the large-scale ImageNet, with ResNet-50, top-1 accuracy increases by 0.63% while memory and computation requirements drop by 40.8% and 44.8%, respectively.
</div>
<div> <img class="image" src="assets/Framework.png" width="100%" height="100%"> </div>
<div align="center"> The CORING approach for filter pruning in one layer. </div>

:star2: News
Project is under development :construction_worker:. Please stay tuned for more :fire: updates.
- 2024.05.15: Paper accepted by Neural Networks :innocent:. Preprint available here.
- 2024.01.24: Update ablation studies on metrics 📊 and Kshots 🔁.
- 2023.11.02: Add instance segmentation and keypoint detection visualization :horse_racing:.
- 2023.10.02: Efficacy :fast_forward: study is added.
- 2023.08.22: Throughput acceleration :stars: experiment is released :tada:.
- 2023.08.14: Poster :bar_chart: is released. Part of the project will be :mega: presented at GRETSI'23 :clap:.
- 2023.08.12: Baseline and compressed checkpoints :gift: are released.
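The core idea described in the abstract is to compare filters as tensors instead of flattening them into vectors. As a rough illustration only (this is not CORING's actual algorithm; the choice of HOSVD singular-value features and Euclidean distance here are our assumptions), one could rank filters by how close their mode-unfolding singular-value profiles are to a neighbor's:

```python
# Hypothetical sketch: tensor-decomposition-based filter redundancy ranking.
# The feature choice (HOSVD singular values) and the distance metric are
# illustrative assumptions, not the paper's exact method.
import numpy as np

def hosvd_features(filt):
    """Concatenated singular values of every mode unfolding of a 3D filter.

    Keeping per-mode singular values preserves multilinear structure that
    is lost when the filter is flattened into a single vector.
    """
    feats = []
    for mode in range(filt.ndim):
        # Mode-k unfolding: bring axis `mode` to the front, flatten the rest.
        unfolding = np.moveaxis(filt, mode, 0).reshape(filt.shape[mode], -1)
        feats.append(np.linalg.svd(unfolding, compute_uv=False))
    return np.concatenate(feats)

def redundancy_ranking(weights):
    """Rank filters of one conv layer, most redundant first.

    weights: array of shape (n_filters, c_in, kh, kw).
    A filter whose feature vector sits very close to another filter's is
    considered redundant and is a pruning candidate.
    """
    feats = np.stack([hosvd_features(w) for w in weights])
    # Pairwise Euclidean distances between feature vectors.
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # ignore self-distance
    nearest = dist.min(axis=1)              # distance to closest other filter
    return np.argsort(nearest)              # smallest distance = most redundant

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16, 3, 3))      # toy layer: 8 filters
order = redundancy_ranking(w)               # prune order[:k] to drop k filters
```

To prune a layer by, say, 25%, one would drop the filters indexed by `order[:2]` and the matching input channels of the following layer, then fine-tune.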
:dart: Main results
<div align="center"> <img class="image" src="assets/Performance.png" width="40%" height="100%"> </div>
<div align="center"> Comparison of pruning methods for VGG-16 on CIFAR-10. </div>
<div style="text-align: justify">
CORING is evaluated on various benchmark datasets with well-known, representative architectures: the classic plain-structure VGG-16-BN, GoogLeNet with inception modules, ResNet-56 with residual blocks, DenseNet-40 with dense blocks, and MobileNetV2 with inverted residuals and linear bottlenecks. Due to the large number of simulations, these models are all evaluated on CIFAR-10. To validate the scalability of CORING, we also conduct experiments on the challenging ImageNet dataset with ResNet-50.
</div>
<details> <summary><strong>1. VGG-16-BN/CIFAR-10</strong></summary>
<div align="center">

| Model | Top-1 (%) | # Params. (↓%) | FLOPs (↓%) |
|------------------------|----------|--------------|---------------|
| VGG-16-BN | 93.96 | 14.98M(00.0) | 313.73M(00.0) |
| L1 | 93.40 | 5.40M(64.0) | 206.00M(34.3) |
| SSS | 93.02 | 3.93M(73.8) | 183.13M(41.6) |
| GAL-0.05 | 92.03 | 3.36M(77.6) | 189.49M(39.6) |
| VAS | 93.18 | 3.92M(73.3) | 190.00M(39.1) |
| CHIP | 93.86 | 2.76M(81.6) | 131.17M(58.1) |
| EZCrop | 93.01 | 2.76M(81.6) | 131.17M(58.1) |
| DECORE-500 | 94.02 | 5.54M(63.0) | 203.08M(35.3) |
| FPAC | 94.03 | 2.76M(81.6) | 131.17M(58.1) |
| CORING-C-5 (Ours) | 94.42 | 2.76M(81.6) | 131.17M(58.1) |
| GAL-0.1 | 90.73 | 2.67M(82.2) | 171.89M(45.2) |
| HRank-2 | 92.34 | 2.64M(82.1) | 108.61M(65.3) |
| HRank-1 | 93.43 | 2.51M(82.9) | 145.61M(53.5) |
| DECORE-200 | 93.56 | 1.66M(89.0) | 110.51M(64.8) |
| EZCrop | 93.70 | 2.50M(83.3) | 104.78M(66.6) |
| CHIP | 93.72 | 2.50M(83.3) | 104.78M(66.6) |
| FSM | 93.73 | N/A(86.3) | N/A(66.0) |
| FPAC | 93.86 | 2.50M(83.3) | 104.78M(66.6) |
| AutoBot | 94.01 | 6.44M(57.0) | 108.71M(65.3) |
| CORING-C-15 (Ours) | 94.20 | 2.50M(83.3) | 104.78M(66.6) |
| HRank-3 | 91.23 | 1.78M(92.0) | 73.70M(76.5) |
| DECORE-50 | 91.68 | 0.26M(98.3) | 36.85M(88.3) |
| QSFM | 92.17 | 3.68M(75.0) | 79.00M(74.8) |
| DECORE-100 | 92.44 | 0.51M(96.6) | 51.20M(81.5) |
| FSM | 92.86 | N/A(90.6) | N/A(81.0) |
| CHIP | 93.18 | 1.90M(87.3) | 66.95M(78.6) |
| CORING-C-10 (Ours) | 93.83 | 1.90M(87.3) | 66.95M(78.6) |
</div> </details>
<details> <summary><strong>2. ResNet-56/CIFAR-10</strong></summary>
<div align="center">

| Model | Top-1 (%) | # Params. (↓%) | FLOPs (↓%) |
|---------------------------|---------|---------------|---------------|
| ResNet-56 | 93.26 | 0.85M(00.0) | 125.49M(00.0) |
| L1 | 93.06 | 0.73M(14.1) | 90.90M(27.6) |
| NISP | 93.01 | 0.49M(42.4) | 81.00M(35.5) |
| GAL-0.6 | 92.98 | 0.75M(11.8) | 78.30M(37.6) |
| HRank-1 | 93.52 | 0.71M(16.8) | 88.72M(29.3) |
| DECORE-450 | 93.34 | 0.64M(24.2) | 92.48M(26.3) |
| TPP | 93.81 | N/A | N/A(31.1) |
| CORING-E-5 (Ours) | 94.76 | 0.66M(22.4) | 91.23M(27.3) |
| HRank-2 | 93.17 | 0.49M(42.4) | 62.72M(50.0) |
| DECORE-200 | 93.26 | 0.43M(49.0) | 62.93M(49.9) |
| TPP | 93.46 | N/A | N/A(49.8) |
| FSM | 93.63 | N/A(43.6) | N/A(51.2) |
| CC-0.5 | 93.64 | 0.44M(48.2) | 60M(52.0) |
| FPAC | 93.71 | 0.48M(42.8) | 65.94M(47.4) |
| ResRep | 93.71 | N/A | 59.3M(52.7) |
| DCP | 93.72 | N/A(49.7) | N/A(54.8) |
| EZCrop | 93.80 | 0.48M(42.8) | 65.94M(47.4) |
| CHIP | 94.16 | 0.48M(42.8) | 65.94M(47.4) |
| CORING-V-5 (Ours) | 94.22 | 0.48M(42.8) | 65.94M(47.4) |
| GAL-0.8 | 90.36 | 0.29M(65.9) | 49.99M(60.2) |
| HRank-3 | 90.72 | 0.27M(68.1) | 32.52M(74.1) |
| DECORE-55 | 90.85 | 0.13M(85.3) | 23.22M(81.5) |
| QSFM | 91.88 | 0.25M(71.3) | 50.62M(60.0) |
| CHIP | 92.05 | 0.24M(71.8) | 34.79M(72.3) |
| TPP | 92.35 | N/A | N/A(70.6) |
| FPAC | 92.37 | 0.24M(71.8) | 34.79M(72.3) |
| CORING-E (Ours) | 92.84 | 0.24M(71.8) | 34.79M(72.3) |
</div> </details>
<details> <summary><strong>3. DenseNet-40/CIFAR-10</strong></summary>
<div align="center">

| Model | Top-1 (%) | # Params. (↓%) | FLOPs (↓%) |
|----------------------------------------|----------|---------------|-----------------|
| DenseNet-40 | 94.81 | 1.04M(00.0) | 282.92M(00.0) |
| DECORE-175 | 94.85 | 0.83M(20.7) | 228.96M(19.1) |
| CORING-C (Ours) | 94.88 | 0.80M(23.1) | 224.12M(20.8) |
| GAL-0.01 | 94.29 | 0.67M(35.6) | 182.92M(35.3) |
| HRank-1 | 94.24 | 0.66M(36.5) | 167.41M(40.8) |
| FPAC | 94.51 | 0.62M(40.1) | 173.39M(38.5) |
| DECORE-115 | 94.59