# Funtuner
Supervised instruction fine-tuning for LLMs with the HF Trainer and DeepSpeed.
A no-nonsense, easy-to-configure fine-tuning framework for GPT-based models that gets the job done in a memory- and time-efficient manner.
:radioactive: Work in progress
## Components
- ✅ Hydra configuration
- ✅ DeepSpeed support
- ✅ 8-bit training
- ✅ LoRA using PEFT
- ✅ Sequence bucketing
- ✅ Inference
  - ✅ single
  - ✅ batch
  - ❎ stream
- ✅ Supported models
  - ✅ GPTNeoX (RedPajama, Pythia, etc.)
  - ❎ LLaMA
  - ❎ Falcon
- ❎ Flash Attention
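Sequence bucketing reduces wasted compute by batching samples of similar length, so each batch needs little padding. A minimal sketch of the idea (illustrative only; `bucket_by_length` and `pad_batch` are hypothetical names, not Funtuner's API):

```python
def bucket_by_length(samples, batch_size):
    """Sort token sequences by length, then slice into batches so each
    batch holds similarly sized sequences, minimizing pad tokens."""
    ordered = sorted(samples, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def pad_batch(batch, pad_id=0):
    """Right-pad every sequence in a batch to the batch's max length."""
    width = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (width - len(seq)) for seq in batch]

samples = [[1, 2, 3], [4], [5, 6], [7, 8, 9, 10], [11, 12]]
batches = [pad_batch(b) for b in bucket_by_length(samples, batch_size=2)]
```

Without bucketing, a random batch containing both `[4]` and `[7, 8, 9, 10]` would pad the short sample to length 4; with bucketing, short sequences land together and pad far less.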
## Train
- Using DeepSpeed:

```shell
deepspeed funtuner/trainer.py
```
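The LoRA option listed above keeps the pretrained weight `W` frozen and trains only a low-rank update, giving `W' = W + (alpha / r) * B @ A`. A toy pure-Python illustration of that update (a conceptual sketch, not the PEFT implementation Funtuner uses):

```python
def matmul(X, Y):
    # Plain-Python matrix multiply: rows of X against columns of Y.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_delta(A, B, alpha, r):
    """Low-rank weight update (alpha / r) * (B @ A).

    B has shape (out, r) and A has shape (r, in); only A and B are trained,
    so the trainable parameter count scales with r, not with out * in."""
    scale = alpha / r
    return [[scale * v for v in row] for row in matmul(B, A)]

# rank-1 example: B is 2x1, A is 1x2, yet the delta is a full 2x2 matrix
delta = lora_delta(A=[[3.0, 4.0]], B=[[1.0], [2.0]], alpha=2, r=1)
```

At inference time the update can be merged into `W` once, so LoRA adds no per-token latency.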
Inference
from funtuner.inference import Inference
model = Inference("shahules786/GPTNeo-125M-lora")
kwargs = {"temperature":0.1,
"top_p":0.75,
"top_k":5,
"num_beams":2,
"max_new_tokens":128,}
##single
output =model.generate("Which is a species of fish? Tope or Rope",**kwargs)
##batch
inputs = [["There was a tiger in the hidden"],["Which is a species of fish? Tope or Rope"]]
output = model.batch_generate(inputs,**kwargs)
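As a rough mental model for the generation kwargs above (a toy sketch, not the HF `generate` internals): `temperature` sharpens or flattens the softmax over logits, and `top_k` restricts sampling to the k highest-scoring tokens.

```python
import math
import random

def top_k_filter(logits, k):
    """Keep the k largest logits; set the rest to -inf so softmax zeroes them."""
    kth = sorted(logits, reverse=True)[k - 1]
    return [l if l >= kth else float("-inf") for l in logits]

def sample(logits, temperature=1.0, top_k=None, rng=random):
    """Temperature-scaled softmax sampling over (optionally top-k filtered) logits."""
    if top_k:
        logits = top_k_filter(logits, top_k)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]    # exp(-inf) == 0.0
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(probs)), weights=probs)[0]
```

A low `temperature` such as the `0.1` above makes sampling nearly greedy; `top_p` (nucleus sampling) works similarly but keeps the smallest set of tokens whose probabilities sum to `p`.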
## Sampling

```shell
python funtuner/sampling.py --model_url shahules786/Redpajama-3B-CoT --dataset Dahoas/cot_gsm8k
```
