482 skills found · Page 1 of 17
huggingface / peft · 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
microsoft / nni · An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
OpenGVLab / LLaMA-Adapter · [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and with 1.2M parameters.
facebookarchive / Tweaks · An easy way to fine-tune and adjust parameters for iOS apps in development.
cocopon / tweakpane · 🎛️ Compact GUI for fine-tuning parameters and monitoring value changes.
shankarpandala / lazypredict · Lazy Predict helps build a lot of basic models without much code and helps you understand which models work better, without any parameter tuning.
mljar / mljar-supervised · Python package for AutoML on tabular data, with feature engineering, hyper-parameter tuning, explanations and automatic documentation.
PhoebusSi / Alpaca-CoT · We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (We have built a fine-tuning platform that makes it easy for researchers to get started with and use large models; any meaningful PRs from open-source enthusiasts are welcome!)
JuliaAI / MLJ.jl · A Julia machine learning framework.
google / vizier · Python-based research interface for blackbox and hyperparameter optimization, based on the internal Google Vizier Service.
tobegit3hub / advisor · Open-source implementation of Google Vizier for hyper-parameter tuning.
LiYangHart / Hyperparameter-Optimization-of-Machine-Learning-Algorithms · Implementation of hyperparameter optimization/tuning methods for machine learning & deep learning models (easy & clear).
AGI-Edgerunners / LLM-Adapters · Code for our EMNLP 2023 paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models".
thunlp / OpenDelta · A plug-and-play library for parameter-efficient tuning (delta tuning).
ScienceOne-AI / DeepSeek-671B-SFT-Guide · An open-source solution for full-parameter fine-tuning of DeepSeek-V3/R1 671B, including complete code and scripts from training to inference, as well as some practical experience and conclusions.
perpetual-ml / perpetual · Perpetual is a high-performance gradient boosting machine. It delivers optimal accuracy in a single run, without complex tuning, via a single budget parameter. It features out-of-the-box support for causal ML, continual learning, native calibration and robust drift monitoring, along with a Rust core and zero-copy bindings for Python and R.
McGill-NLP / nano-aha-moment · Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs".
synbol / Awesome-Parameter-Efficient-Transfer-Learning · Collection of awesome parameter-efficient fine-tuning resources.
Releem / awesome-mysql-performance · 🔥 A curated list of awesome links related to MySQL / MariaDB / Percona performance tuning.
r-three / t-few · Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning".
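Several entries above (peft, LLM-Adapters, OpenDelta, t-few) implement low-rank adaptation (LoRA)-style parameter-efficient tuning. A minimal pure-Python sketch of the core LoRA idea, assuming the standard formulation W' = W + (alpha / r) · B A; the names and shapes are illustrative and this is not any library's API:

```python
import random

def matmul(X, Y):
    """Plain-Python matrix multiply over lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, B, A, alpha, r):
    """Merge a low-rank update into frozen weights: W + (alpha / r) * B @ A."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

random.seed(0)
d, k, r, alpha = 8, 8, 2, 16
W = [[random.gauss(0, 1) for _ in range(k)] for _ in range(d)]    # frozen d x k
A = [[random.gauss(0, 0.01) for _ in range(k)] for _ in range(r)]  # trainable r x k
B = [[0.0] * r for _ in range(d)]  # trainable d x r; zero-init so training starts at W

merged = merge_lora(W, B, A, alpha, r)
assert merged == W  # with B = 0 the merge is a no-op
trainable = d * r + r * k
print(f"trainable: {trainable} vs full: {d * k}")  # trainable: 32 vs full: 64
```

The point of the technique is the parameter count: only d·r + r·k values train instead of d·k, and the saving grows with matrix size (for a 4096×4096 layer at r = 8, that is roughly 65K trainable values instead of 16.8M).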
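The hyperparameter-tuning entries (nni, vizier, advisor, Hyperparameter-Optimization-of-Machine-Learning-Algorithms) all build on the same loop: sample a configuration, evaluate it, keep the best. A hedged sketch of random search, the simplest such method; the objective function and search ranges below are made up for illustration:

```python
import random

def objective(lr, depth):
    """Stand-in for a model's validation error (lower is better)."""
    return (lr - 0.1) ** 2 + 0.01 * (depth - 6) ** 2

def random_search(n_trials, seed=0):
    """Sample configurations uniformly and keep the best one seen."""
    rng = random.Random(seed)
    best_score, best_params = float("inf"), None
    for _ in range(n_trials):
        params = {"lr": rng.uniform(1e-4, 1.0), "depth": rng.randint(1, 12)}
        score = objective(**params)
        if score < best_score:
            best_score, best_params = score, params
    return best_score, best_params

score, params = random_search(200)
print(params, round(score, 5))
```

Tools like Vizier replace the uniform sampling with smarter proposal strategies (e.g. Bayesian optimization), but the evaluate-and-keep-best loop is the same.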