UnifiedTSLib: A Unified Time Series Foundation Model Training Architecture
UnifiedTSLib is a collection of popular time series analysis models implemented in the Hugging Face Transformers style. This library provides easy-to-use, standardized interfaces for training, fine-tuning, and evaluating state-of-the-art time series forecasting models, making it convenient to apply and benchmark them on your own datasets.

🌟 Key Features
- Implementation of TimeMixer++, TimeMixer, iTransformer, TimesNet, and Autoformer (with more being added continuously).
- Supports data-parallel training, with models saved in Hugging Face format 🤗.
- Features a channel-mixing time series pre-training framework that balances batch size and channel count across datasets to enhance computational stability and reduce bandwidth waste caused by padding.
- Inherits Time-MoE's disk-based single-sequence reading capability to avoid memory overflow during large-scale data training (300B+ time points), and accelerates disk reading of all sequences within a specified range in Channel Mixing mode.
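The balancing idea behind the channel-mixing framework can be sketched in a few lines. The function name and the fixed "budget" below are illustrative, not the library's actual scheduler: the point is that keeping batch_size × channel_count roughly constant lets wide (many-channel) datasets run with small batches instead of being padded up to a fixed shape.

```python
# Hypothetical sketch of the batch-size / channel-count trade-off:
# hold batch_size * n_channels under a fixed budget so that datasets
# with many channels get smaller batches and need no padding.

def balanced_batch_size(n_channels: int, budget: int = 64) -> int:
    """Return a batch size so batch_size * n_channels <= budget (min 1)."""
    return max(1, budget // n_channels)

for channels in (1, 7, 21, 321):
    print(channels, "->", balanced_batch_size(channels))
```

A univariate dataset (1 channel) gets the full batch budget, while a 321-channel dataset (e.g. electricity-sized) drops to a batch of 1.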
🚀 Usage
1. Install Dependencies
Make sure you have Python 3.8+ installed. Install the required packages with:
pip install -r requirements.txt
2. Prepare Data
Prepare your time series dataset and place it in the appropriate directory (e.g., data/train/, data/val/). Supported formats include .jsonl, .csv, and .bin.
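The exact on-disk layout is not documented above. Time-MoE-style corpora (which this library builds on) store one sequence per JSONL line as {"sequence": [...]}; assuming that convention, a tiny dataset could be written like this (the file path is illustrative):

```python
import json

# Assumed Time-MoE-style layout: one sequence per JSONL line.
samples = [
    {"sequence": [0.1, 0.3, 0.2, 0.5]},
    {"sequence": [1.0, 0.9, 1.1]},
]

# In practice this file would live under data/train/.
with open("sample.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```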
3. Train (Pre-train or Fine-tune) a Model
You can either pre-train a model from scratch or fine-tune a model based on pretrained weights.
- Fine-tuning: use a pretrained model as the starting point (-m specifies the model path).
- Pre-training: train a model from scratch by adding the --from_scratch flag and omitting the -m argument.
Fine-tune example:
python torch_dist_run.py main.py \
--micro_batch_size 1 \
--global_batch_size 8 \
--channel_mixing True \
--inner_batch_ratio 1 \
--model_name timemixerpp \
-o logs/timemixerpp_traffic_finetune \
-d data/train/data_electricity_train/ \
-m /tiger/UnifiedTSLib/logs/timemixerpp_1 \
--val_data_path data/val/data_electricity_validation/
Pre-train example:
python torch_dist_run.py main.py \
--micro_batch_size 1 \
--global_batch_size 8 \
--channel_mixing True \
--inner_batch_ratio 1 \
--model_name timemixerpp \
--from_scratch \
-o logs/timemixerpp_pretrain \
-d data/train/
Parameter explanations:
- --channel_mixing True: enables the channel mixing strategy during training.
- --micro_batch_size 1: sets the micro batch size per GPU to 1. For datasets with a large number of channels, 1 is recommended.
- --inner_batch_ratio 1: sets the inner batch size for the dataset with the largest number of channels. The recommended value is 1.
- --model_name timemixerpp: specifies the model architecture to use.
- --from_scratch: (optional) if set, the model is trained from scratch without loading pretrained weights.
- -m: (optional) path to the pretrained model directory. Omit this argument when pre-training from scratch.
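The relationship between --micro_batch_size and --global_batch_size is the standard gradient-accumulation arithmetic (assuming the usual convention; the function name here is illustrative): the trainer accumulates micro-batches until the global batch size is reached before each optimizer step.

```python
# Standard gradient-accumulation arithmetic (illustrative helper, not a
# UnifiedTSLib API): steps = global_bs / (micro_bs * number_of_GPUs).
def accumulation_steps(global_bs: int, micro_bs: int, world_size: int) -> int:
    assert global_bs % (micro_bs * world_size) == 0, \
        "global batch size must be divisible by micro_bs * world_size"
    return global_bs // (micro_bs * world_size)

print(accumulation_steps(8, 1, 1))  # → 8 (single GPU, as in the examples)
print(accumulation_steps(8, 1, 4))  # → 2 (four GPUs)
```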
Note: for channel-dependent models such as Autoformer and TimesNet, you should modify 'enc_in' and 'c_out' to match the number of channels in your dataset.
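A quick way to find the channel count to plug into enc_in and c_out, assuming a CSV with a leading date column and one column per channel (the helper and file below are illustrative, not part of the library):

```python
import csv
import os
import tempfile

# Hypothetical helper: count data channels in a CSV whose first column
# is a timestamp, so enc_in / c_out can be set to match.
def count_channels(csv_path: str, date_col: str = "date") -> int:
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    return sum(1 for c in header if c != date_col)

# Illustrative ETTh1-like header: a date column plus 7 channels.
path = os.path.join(tempfile.gettempdir(), "demo.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerow(
        ["date", "HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL", "OT"]
    )
print(count_channels(path))  # → 7, so enc_in = c_out = 7
```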
4. Evaluate a Model
You can evaluate a trained model using eval_model.py as follows:
python eval_model.py \
-d datasets_pretrain/test/data_etth1_test.jsonl \
--channel_mixing True \
--batch_size 512 \
--model_path logs/UnifiedTS/Timemixerpp \
--model_name timemixerpp
Parameter explanations:
- -d: path to the evaluation dataset.
- --channel_mixing True: enables channel mixing during evaluation.
- --batch_size: batch size for evaluation.
- --model_path: path to the trained model directory.
- --model_name: name of the model architecture.
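The metrics reported by eval_model.py are not listed above; time series forecasting evaluations typically report MSE and MAE, which for reference are:

```python
# Reference definitions of the usual forecasting metrics (plain Python,
# not taken from eval_model.py itself).
def mse(y_true, y_pred):
    """Mean squared error over paired values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error over paired values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0], [1.0, 3.0]))  # → 0.5
print(mae([1.0, 2.0], [1.0, 3.0]))  # → 0.5
```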
📝 Citation
If you find this useful for your research, please consider citing the associated paper:
@inproceedings{Wang2025TimeMixer++,
title={TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis},
author={Wang, Shiyu and Li, Jiawei and Shi, Xiaoming and Ye, Zhou and Mo, Baichuan and Lin, Wenze and Ju, Shengtong and Chu, Zhixuan and Jin, Ming},
booktitle={International Conference on Learning Representations (ICLR)},
year={2025}
}
@inproceedings{shi2024timemoe,
title={Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts},
author={Xiaoming Shi and Shiyu Wang and Yuqi Nie and Dianqi Li and Zhou Ye and Qingsong Wen and Ming Jin},
booktitle={International Conference on Learning Representations (ICLR)},
year={2025}
}
@inproceedings{wang2023timemixer,
title={TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting},
author={Wang, Shiyu and Wu, Haixu and Shi, Xiaoming and Hu, Tengge and Luo, Huakun and Ma, Lintao and Zhang, James Y and Zhou, Jun},
booktitle={International Conference on Learning Representations (ICLR)},
year={2024}
}
📃 License
This project is licensed under the Apache-2.0 License.