<p align="center"> <img src="assets/logo.png" width="300"> </p>

DPad: Efficient Diffusion Language Models with Suffix Dropout

<p align="left"> <a href="https://openreview.net/forum?id=0yOsSMU1eY"><b>📄 Paper</b></a> </p> <hr> <center> <strong>LLaDA-1.5 on GSM8K (1024 tokens)</strong> <p align="center"> <img src="assets/speedup_llada.png" width="800"> <br> <small><b>Efficiency:</b> DPad-enhanced dLLMs achieve up to a <b>61.39× speedup</b> over vanilla dLLM baselines.</small> <br> <small><b>Accuracy:</b> DPad-enhanced dLLMs achieve up to a <b>+26.46% improvement</b> over vanilla dLLM baselines.</small> <br> <small>(Evaluation conducted on NVIDIA A100-PCIe-80GB GPUs).</small> </p> </center>

Diffusion Scratchpad (DPad) is a novel training-free inference paradigm that overcomes a key efficiency bottleneck in Diffusion Language Models (dLLMs): the high computational cost of full suffix attention. By intelligently pruning redundant suffix tokens, DPad achieves:

  • Up to a 61.39× speedup over vanilla dLLM baselines on long-sequence benchmarks (GSM8K, 1319 samples).
  • A significant improvement in strict-match accuracy on reasoning tasks by enhancing in-context learning.
  • Comparable or better generation quality on standard reasoning and coding benchmarks.
  • Seamless integration with existing optimizations like parallel decoding and prefix caching for multiplicative speedups.
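The core idea behind these gains can be illustrated with a toy sketch (hypothetical helper, not the repository's API): instead of attending over every masked suffix token, a DPad-style decoder keeps only a fixed nearby window plus a sparse sample of distant positions, so the attended suffix shrinks dramatically.

```python
import random

# Toy illustration of suffix pruning (hypothetical names, not the repo's API).
# A vanilla dLLM attends to ALL masked suffix tokens at every step; DPad-style
# pruning keeps a nearby window plus a sparse sample of distant positions.

def kept_suffix_positions(suffix_len, window=32, sample_rate=0.1, seed=0):
    """Return suffix offsets (0 = closest to the decoding front) to keep."""
    rng = random.Random(seed)
    near = list(range(min(window, suffix_len)))  # always keep the nearby window
    far = [i for i in range(window, suffix_len) if rng.random() < sample_rate]
    return near + far

full = 1024  # suffix tokens a vanilla dLLM would attend to
kept = kept_suffix_positions(full)
print(len(kept), "of", full, "suffix tokens attended")
```

Because attention cost scales with the number of attended tokens, cutting the suffix from 1024 positions to roughly a hundred directly reduces per-step compute.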

This repository provides the code to reproduce our evaluation results.

<center> <strong>Demo for LLaDA-1.5 on GSM8K (50 samples) (1024, 1-shot)</strong>

https://github.com/user-attachments/assets/d2bce8f2-310e-4f14-8b4e-cbef8c962741

(Latency: Inference Time for 50 samples. F: Flexible-Match Accuracy, S: Strict-Match Accuracy) </center>

🔥 News!

  • Aug 19, 2025: Our paper is now available on arXiv!


🤔 How It Works

DPad targets a key source of computational overhead in dLLMs: at every denoising step the model predicts all future suffix tokens, yet retains only a small fraction of those predictions.

(Figure: attention-score analysis of suffix tokens)

1. The "Scratchpad" Insight: We identify that suffix tokens function as an information reservoir—a "scratchpad"—that collects signals from already decoded prefix tokens to guide generation. However, we found that most of these suffix tokens are redundant and their importance decays sharply with distance.

2. The Diffusion Lottery Tickets (DLT) Hypothesis: We find that even pruning high-attention "spike" tokens in the distant suffix has little effect on accuracy, as the model dynamically shifts its attention to nearby tokens. This suggests that a sparse subset of suffix tokens is sufficient. DPad acts as a training-free lottery ticket search, finding an efficient "winning ticket" for generation on the fly.

3. Suffix Dropout Mechanisms: DPad introduces two simple, training-free strategies to eliminate this redundancy before attention computation:

  • Sliding Window: Maintains a fixed-length suffix window, preventing computation from scaling with the full sequence length.
  • Distance-Decay Dropout: Progressively prunes distant suffix tokens using a Gaussian sampling strategy, focusing computation on the most relevant nearby tokens.
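The two mechanisms can be sketched together as follows. This is a minimal illustration under assumed parameter names (`window`, `sigma`); the actual implementation lives in `sampler.py`.

```python
import math
import random

def sample_suffix(block_end, seq_len, window=256, sigma=64.0, seed=0):
    """Pick which suffix positions to keep before attention (illustrative).

    block_end: index where the current decoding block ends (suffix starts here).
    Sliding window: only positions within `window` of the block are candidates,
    so cost does not scale with the full sequence length.
    Distance-decay dropout: a candidate at distance d survives with Gaussian
    probability exp(-d^2 / (2 * sigma^2)), favoring nearby tokens.
    """
    rng = random.Random(seed)
    kept = []
    for pos in range(block_end, min(block_end + window, seq_len)):
        d = pos - block_end
        if rng.random() < math.exp(-d * d / (2.0 * sigma * sigma)):
            kept.append(pos)
    return kept

kept = sample_suffix(block_end=100, seq_len=1024)
# Positions just after the block are almost always kept; distant ones rarely.
```

The survival probability is 1 at distance 0 and decays smoothly to near 0 at the window edge, matching the observation that suffix-token importance falls off sharply with distance.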
<p align="left"> <img src="assets/dpad.png" width="800"> <br> <small><b>Overview of DPad vs. other generation methods:</b> <br> (a) Autoregressive models generate one token at a time. <br> (b) Standard dLLMs attend to all suffix tokens, incurring high computational costs. <br> (c) DPad restricts attention to a small, nearby set of suffix tokens, eliminating redundant computation while preserving fidelity.</small> </p>

✨ Key Features & Modifications

This repository is built upon the Fast-dLLM codebase and incorporates the following key features and modifications to implement the DPad methodology:

  • Simplified Command-Line Interface: The original multi-step commands are wrapped in a user-friendly run.py script, so evaluations and generation can be launched with simple, intuitive arguments.

  • Dynamic Suffix Sampling (DPad Core): The core of DPad is implemented in sampler.py and integrated into the main generation pipelines (llada/generate.py for LLaDA and dream/model/generation_utils_block.py for Dream). This module applies distance-decay dropout within the sliding window before the decoding process of each block, efficiently pruning redundant suffix tokens.

  • Expanded Model Support: We have extended support to include the full semi-autoregressive mode for the Dream-Base model, enabling comprehensive evaluation across different dLLM architectures.

  • Adaptive Positional Embeddings (RoPE): We have modified the RoPE implementation to correctly handle the non-contiguous token sequences that result from our suffix dropout. This ensures each token retains its original positional information, maintaining the integrity of the model's spatial awareness.
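The RoPE adjustment amounts to indexing the rotary table by each token's original sequence position rather than by its index in the pruned sequence. Below is a minimal sketch of the standard RoPE angle computation (assumed helper name and dimensions, not the repository's code):

```python
def rope_angles(positions, dim=8, base=10000.0):
    """Standard RoPE angles, computed from ORIGINAL sequence positions.

    After suffix dropout the kept tokens are non-contiguous (e.g. positions
    [0, 1, 2, 100, 103, 110]); indexing the rotary table by these original
    ids, rather than by 0..len-1, preserves each token's true positional
    information.
    """
    half = dim // 2
    inv_freq = [base ** (-2 * i / dim) for i in range(half)]
    return [[p * f for f in inv_freq] for p in positions]

kept = [0, 1, 2, 100, 103, 110]  # non-contiguous after dropout
angles = rope_angles(kept)
# angles[3] matches what position 100 would receive in the full sequence,
# even though it is only the 4th kept token.
```

Using pruned-sequence indices here would silently shift every surviving suffix token toward the prefix; passing original positions keeps the model's notion of distance intact.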

📊 Performance Highlights

DPad delivers substantial speedups while maintaining or improving accuracy. Below is a summary of performance on LLaDA-Instruct, LLaDA-1.5, and Dream-Base, comparing our method against the original vanilla baseline and the optimized parallel-decoding variant (Fast-dLLM).

<center> <strong>Performance on LLaDA-Instruct</strong> <table style="width:100%; border-collapse: collapse; text-align:left;"> <thead style="background-color:#f2f2f2;"> <tr> <th style="padding: 8px; border: 1px solid #ddd;">Benchmark</th> <th style="padding: 8px; border: 1px solid #ddd;">Metric</th> <th style="padding: 8px; border: 1px solid #ddd;">Vanilla</th> <th style="padding: 8px; border: 1px solid #ddd;">+DPad</th> <th style="padding: 8px; border: 1px solid #ddd;">+Parallel (Fast-dLLM)</th> <th style="padding: 8px; border: 1px solid #ddd;">+Parallel+DPad (Ours)</th> </tr> </thead> <tbody> <tr> <td rowspan="3" style="padding: 8px; border: 1px solid #ddd; vertical-align: middle;"><strong>GSM8K</strong><br><em>4-shot</em></td> <td style="padding: 8px; border: 1px solid #ddd;">Latency(s) ↓</td> <td style="padding: 8px; border: 1px solid #ddd;">27.48</td> <td style="padding: 8px; border: 1px solid #ddd;">18.35 <span style="color: green;">(1.50x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;">8.55 <span style="color: green;">(3.21x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;"><strong>6.64 <span style="color: green;">(4.14x)</span></strong></td> </tr> <tr> <td style="padding: 8px; border: 1px solid #ddd;">Flexible Acc. ↑</td> <td style="padding: 8px; border: 1px solid #ddd;">78.39</td> <td style="padding: 8px; border: 1px solid #ddd;">78.54</td> <td style="padding: 8px; border: 1px solid #ddd;">78.54</td> <td style="padding: 8px; border: 1px solid #ddd;">79.76</td> </tr> <tr> <td style="padding: 8px; border: 1px solid #ddd;">Strict Acc. 
↑</td> <td style="padding: 8px; border: 1px solid #ddd;">37.38</td> <td style="padding: 8px; border: 1px solid #ddd;">63.84</td> <td style="padding: 8px; border: 1px solid #ddd;">38.67</td> <td style="padding: 8px; border: 1px solid #ddd;">64.97</td> </tr> <tr style="background-color: #fafafa;"> <td rowspan="3" style="padding: 8px; border: 1px solid #ddd; vertical-align: middle;"><strong>MATH</strong><br><em>4-shot</em></td> <td style="padding: 8px; border: 1px solid #ddd;">Latency(s) ↓</td> <td style="padding: 8px; border: 1px solid #ddd;">25.40</td> <td style="padding: 8px; border: 1px solid #ddd;">21.61 <span style="color: green;">(1.18x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;">9.91 <span style="color: green;">(2.56x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;"><strong>9.20 <span style="color: green;">(2.76x)</span></strong></td> </tr> <tr style="background-color: #fafafa;"> <td style="padding: 8px; border: 1px solid #ddd;">Flexible Acc. ↑</td> <td style="padding: 8px; border: 1px solid #ddd;">33.58</td> <td style="padding: 8px; border: 1px solid #ddd;">33.42</td> <td style="padding: 8px; border: 1px solid #ddd;">33.40</td> <td style="padding: 8px; border: 1px solid #ddd;">33.30</td> </tr> <tr style="background-color: #fafafa;"> <td style="padding: 8px; border: 1px solid #ddd;">Strict Acc. 
↑</td> <td style="padding: 8px; border: 1px solid #ddd;">8.42</td> <td style="padding: 8px; border: 1px solid #ddd;">28.04</td> <td style="padding: 8px; border: 1px solid #ddd;">8.76</td> <td style="padding: 8px; border: 1px solid #ddd;">27.98</td> </tr> <tr> <td rowspan="2" style="padding: 8px; border: 1px solid #ddd; vertical-align: middle;"><strong>HumanEval</strong><br><em>0-shot</em></td> <td style="padding: 8px; border: 1px solid #ddd;">Latency(s) ↓</td> <td style="padding: 8px; border: 1px solid #ddd;">34.67</td> <td style="padding: 8px; border: 1px solid #ddd;">27.41 <span style="color: green;">(1.26x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;">11.48 <span style="color: green;">(3.02x)</span></td> <td style="padding: 8px; border: 1px solid #ddd;"></td> </tr> </tbody> </table> </center>