# Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning
<a href='https://github.com/JlPang863/LLM_token_selection'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2502.01968'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
Jinlong Pang, Na Di, Zhaowei Zhu, Jiaheng Wei, Hao Cheng, Chen Qian, Yang Liu.
University of California, Santa Cruz
## Brief Introduction
This project investigates token quality from a noisy-label perspective and proposes a generic token cleaning pipeline for SFT tasks. Our method filters out uninformative tokens while preserving those carrying key task-specific information. Specifically, we first evaluate token quality by examining the influence of model updates on each token, then apply a threshold-based separation. The token influence can be measured in a single pass with a fixed reference model or iteratively with self-evolving reference models; a minimal sketch of this scoring step follows the list below. The two pipelines are:

- **Fixed-Model Cleaning**: applies a one-shot cleaning process to the entire dataset with a fixed reference model.
- **Self-Evolving Cleaning**: follows an iterative approach in which the reference model is updated as cleaning proceeds.
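For intuition only, below is a minimal PyTorch sketch of the scoring-and-thresholding step shared by both pipelines. It assumes Hugging Face-style causal LMs, uses a simple excess-loss-style score, and ignores prompt/padding masking; the exact influence measure, threshold, and training loop are defined in the paper and the scripts in this repo, not in this snippet.

```python
import torch

@torch.no_grad()
def per_token_loss(model, input_ids, attention_mask):
    """Per-token negative log-likelihood under `model` (token t predicted from tokens < t)."""
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for positions 1..L-1
    targets = input_ids[:, 1:].unsqueeze(-1)                # tokens actually observed there
    return -log_probs.gather(-1, targets).squeeze(-1)       # shape: [batch, seq_len - 1]

def token_influence(base_model, ref_model, input_ids, attention_mask):
    """Excess-loss-style score: how much better the reference model explains each token."""
    return (per_token_loss(base_model, input_ids, attention_mask)
            - per_token_loss(ref_model, input_ids, attention_mask))

def keep_mask(scores, keep_ratio=0.6):
    """Threshold-based separation: keep the top `keep_ratio` fraction of tokens per sequence."""
    k = max(1, int(scores.size(1) * keep_ratio))
    threshold = scores.topk(k, dim=1).values[:, -1:]        # k-th largest score per sequence
    return (scores >= threshold).float()                    # 1 = train on token, 0 = mask out
```

The resulting mask would be multiplied into the per-token SFT loss, so only the kept tokens contribute to fine-tuning; the self-evolving pipeline additionally refreshes the reference model between cleaning rounds.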
## 🎉🎉 News
- [x] [2025.05.01] 🚀🚀 Accepted by ICML 2025.
- [x] [2025.04.01] 🚀🚀 Code Release
## Environment Setup
To run training, evaluation, or inference for fine-tuned models, install the required packages with the following command (after installing PyTorch):
```bash
pip install -r requirements.txt
```
## Dataset Preparation
The data pool (50k samples) is constructed with DS2, a recent data curation pipeline that selects data samples using quality rating scores generated by LLMs. For convenience, the 50k samples used here can be accessed from Hugging Face via the link.
Our selected evaluation and training data are listed below.
| Category | Dataset |
|----------------------|----------------------------------------------|
| Evaluation Data | MMLU, TruthfulQA, TydiQA, HellaSwag, BoolQ, ARC-C, LogiQA |
| Training Data | Flan v2, OASST1, WizardLM, Dolly, Stanford Alpaca |
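To inspect the data pool, one option is the `datasets` library, as in the sketch below; the dataset ID shown is a placeholder (use the Hugging Face link above), and the field names depend on the released data format.

```python
from datasets import load_dataset

# Placeholder dataset ID -- substitute the Hugging Face dataset linked in this repo.
pool = load_dataset("UCSC-REAL/sft-data-pool-50k", split="train")

print(len(pool))   # expected: ~50k instruction-tuning samples
print(pool[0])     # field names depend on the released data format
```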
## 🚀🚀 Get Started
Note that our cleaning pipeline consists of Fixed-Model Cleaning and Self-Evolving Cleaning. One can run either with:
```bash
# Fixed-model cleaning
bash get_ref_model.sh
bash fixed_model_cleaning.sh

# Self-evolving cleaning
bash self_evolving_cleaning.sh
```
The implementations of our baselines can be found in the `baselines` directory, including the full, random, and rho baselines.
## Model Evaluation
Task performance is evaluated with the lm-eval-harness repository. For convenience, one can run the evaluation with:
```bash
bash run_eval.sh
```
Note that the lm-eval-harness repo does not include the TydiQA task; we follow the original TydiQA code repo to conduct that evaluation. The TydiQA dataset can be downloaded via `prepare_eval_data.sh`.
## Results Presentation
The tabular results can be printed via the `read_results.ipynb` Jupyter notebook.
## Citation
If you use this repository, please cite our work:
```bibtex
@article{pang2025token,
  title={Token Cleaning: Fine-Grained Data Selection for LLM Supervised Fine-Tuning},
  author={Pang, Jinlong and Di, Na and Zhu, Zhaowei and Wei, Jiaheng and Cheng, Hao and Qian, Chen and Liu, Yang},
  journal={arXiv preprint arXiv:2502.01968},
  year={2025}
}
```
