Pruning-Quantization with Knowledge Distillation (PQKD)
Introduction
PQKD is a method for compressing a model by pruning and quantization with knowledge distillation. Through iterative pruning, performance recovery using knowledge distillation, and a final quantization-aware training (QAT) stage, PQKD reduces the size of CNN-based models by approximately 20× with minimal accuracy degradation. Channel adapters are inserted to match intermediate-layer feature maps between teacher and student, solving the model-heterogeneity problem caused by structured pruning.
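To make the channel-adapter idea concrete, here is a minimal sketch in PyTorch. It assumes (as the text suggests, but does not specify) that the adapter is a 1×1 convolution projecting the pruned student's feature map to the teacher's channel count so an intermediate-feature distillation loss can be computed; the class and parameter names are hypothetical, not from the PQKD code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAdapter(nn.Module):
    """Hypothetical adapter: a 1x1 conv mapping the pruned student's
    channel count to the teacher's, so feature maps become comparable."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels,
                              kernel_size=1, bias=False)

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        return self.proj(student_feat)


if __name__ == "__main__":
    torch.manual_seed(0)
    adapter = ChannelAdapter(student_channels=48, teacher_channels=64)
    student_feat = torch.randn(2, 48, 8, 8)   # feature map after pruning
    teacher_feat = torch.randn(2, 64, 8, 8)   # unpruned teacher feature map
    projected = adapter(student_feat)
    # The adapted student feature now matches the teacher's shape,
    # so a feature-matching loss (here MSE) is well defined.
    feat_loss = F.mse_loss(projected, teacher_feat)
    print(projected.shape)
```

Because the adapter is only needed during distillation, it can be discarded after training without affecting the deployed student model.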

How to use
PQKD is implemented in PyTorch. First, pre-train the model in FP32 with fp32_pre_training.py. Then run pruning_with_knowledge_distillation.py to iteratively prune the model while recovering accuracy with knowledge distillation. Finally, run QAT_finetune.py to quantize the model with quantization-aware training.
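The recovery step combines a hard-label loss with a distillation loss against the teacher. A common formulation (a sketch under assumed hyperparameters, not necessarily the exact loss used by the PQKD scripts) is temperature-softened KL divergence blended with cross-entropy:

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            T: float = 4.0,      # assumed temperature
            alpha: float = 0.9,  # assumed distillation weight
            ) -> torch.Tensor:
    """Standard knowledge-distillation objective: KL between
    temperature-softened teacher/student distributions, plus
    cross-entropy on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale gradients back, as in Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


if __name__ == "__main__":
    torch.manual_seed(0)
    student = torch.randn(4, 10)
    teacher = torch.randn(4, 10)
    labels = torch.randint(0, 10, (4,))
    loss = kd_loss(student, teacher, labels)
    print(loss.item())
```

After each pruning round, the student is fine-tuned against this objective until accuracy recovers, and only then is the next round of pruning applied.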
Results
PQKD achieves 20× compression with minimal accuracy degradation on the [PEC dataset](https://www.kaggle.com/datasets/rusuanjun/pec-dataset). The following table shows the results of ResNet50-1D and MobileNetV3 after pruning with knowledge distillation.

