# xLAM: A Family of Large Action Models to Empower AI Agent Systems
## 🎉🎉🎉 News
- [08.20.2025] 🎉🎉🎉 ActionStudio and LATTE (Learning to think with vision specialists) are both accepted to the EMNLP 2025 Main Conference!
- [08.05.2025] 💫 ActionStudio has been updated with new features, improved training configuration tracking, and general code enhancements!
- [05.12.2025] Our xLAM Presentation Slides for the NAACL 2025 Oral Session are now live! 📂 We've also open-sourced APIGen-MT-5k, a compact yet powerful dataset for exploring multi-turn function calling.
- [04.15.2025] 🏆🏆🏆 xLAM-2-fc-r achieves Top-1 performance on the latest BFCL Leaderboard!
- [04.15.2025] 🚀🚀🚀 ActionStudio is now open-source! Check out our paper and code for full details.
- [04.15.2025] 📢📢📢 APIGen-MT is now open-source! Learn more in our paper and Project Website!
- [11.2024] Added the latest examples and tokenizer info on interacting with xLAM models.
- [09.2024] Join our Discord community if you have any feedback!
- [09.2024] Check our xLAM Technical Report paper.
- [08.2024] We are excited to announce the release of the full xLAM family, our suite of Large Action Models, from the "tiny giant" to industrial powerhouses. These models have achieved impressive rankings, placing #1 and #6 on the Berkeley Function-Calling Leaderboard. Check our Hugging Face collection.
- [07.2024] We are excited to announce the release of our two function-calling models: xLAM-1b-fc-r and xLAM-7b-fc-r. These models have achieved impressive rankings, placing #3 and #25 on the Berkeley Function-Calling Leaderboard, outperforming many significantly larger models. Stay tuned for more powerful models coming soon.
- [06.2024] Check our latest work APIGen, the best open-sourced models for function calling. Our dataset xlam-function-calling-60k is currently among the Top-3 trending datasets on HuggingFace, standing out in a field of 173,670 datasets as of July 4, 2024. See also the Twitter post by the Salesforce CEO, VentureBeat, and 新智元.
- [03.2024] The xLAM model is released! Try it with the AgentLite benchmark or other benchmarks; its performance is comparable to GPT-4!
- [02.2024] Initial release of AgentOhana and the xLAM paper!
Note: This repository is provided for research purposes only.
Data related to xLAM is only partially released, in accordance with internal regulations, to support the advancement of the agent research community.
Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories.
This repo introduces xLAM, which aggregates agent trajectories from distinct environments spanning a wide array of scenarios. It standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training. Leveraging this data unification, our training pipeline maintains equilibrium across different data sources and preserves independent randomness across devices during dataset partitioning and model training.
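The unification step described above can be sketched as follows. This is an illustrative sketch only: the schema, field names, and role mapping are assumptions for demonstration, not the actual xLAM/AgentOhana data format.

```python
# Illustrative sketch: normalize heterogeneous multi-turn trajectories
# into one shared schema. Field names here are assumptions, not the
# actual xLAM/AgentOhana format.

ROLE_MAP = {"user": "user", "human": "user", "agent": "assistant"}

def unify_trajectory(raw_steps, source):
    """Map one source-specific multi-turn trajectory into a shared schema."""
    turns = []
    for step in raw_steps:
        # Normalize role labels and text fields that differ across sources.
        role = ROLE_MAP.get(step.get("speaker", "assistant"), "assistant")
        text = step.get("text") or step.get("utterance", "")
        turns.append({"role": role, "content": text})
    return {"source": source, "turns": turns}

# Two differently shaped sources normalize to the same structure:
webshop_raw = [{"speaker": "user", "text": "find a red mug"},
               {"speaker": "agent", "text": "search[red mug]"}]
unified = unify_trajectory(webshop_raw, source="webshop")
print(unified["turns"][1])  # -> {'role': 'assistant', 'content': 'search[red mug]'}
```

Once every source emits the same structure, a single generic data loader can batch and shuffle across sources uniformly, which is what enables the balanced sampling described above.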
<p align="center">
<br>
<!-- <img src="./images/framework.png" width="780"/> -->
<img src="./images/xlam_release_v1.jpeg" width="700"/>
<br>
</p>

## Model Instruction
| Model | # Total Params | Context Length | Release Date | Category | Download Model | Download GGUF files |
|------------------------|----------------|-----------------|---------------|-------------------------------------------|----------------|----------|
| Llama-xLAM-2-70b-fc-r | 70B | 128k | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | 🤗 Link | NA |
| Llama-xLAM-2-8b-fc-r | 8B | 128k | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | 🤗 Link | 🤗 Link |
| xLAM-2-32b-fc-r | 32B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | 🤗 Link | NA |
| xLAM-2-3b-fc-r | 3B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | 🤗 Link | 🤗 Link |
| xLAM-2-1b-fc-r | 1B | 32k (max 128k)* | Mar. 26, 2025 | Multi-turn Conversation, Function-calling | 🤗 Link | 🤗 Link |
| xLAM-7b-r | 7.24B | 32k | Sep. 5, 2024 | General, Function-calling | 🤗 Link | -- |
| xLAM-8x7b-r | 46.7B | 32k | Sep. 5, 2024 | General, Function-calling | 🤗 Link | -- |
| xLAM-8x22b-r | 141B | 64k | Sep. 5, 2024 | General, Function-calling | 🤗 Link | -- |
| xLAM-1b-fc-r | 1.35B | 16k | July 17, 2024 | Function-calling | 🤗 Link | 🤗 Link |
| xLAM-7b-fc-r | 6.91B | 4k | July 17, 2024 | Function-calling | 🤗 Link | 🤗 Link |
| xLAM-v0.1-r | 46.7B | 32k | Mar. 18, 2024 | General, Function-calling | 🤗 Link | -- |
The xLAM series is significantly better at many tasks, including general agent tasks and function calling. For the same number of parameters, the models have been fine-tuned across a wide range of agent tasks and scenarios, all while preserving the capabilities of the original base models.
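As a minimal illustration of the function-calling category, here is a hypothetical sketch of parsing a tool-call response of the general JSON-list shape these models produce. The exact output format varies by model, so consult the individual model cards; the function name and arguments below are made up for the example.

```python
import json

# Hedged sketch: parse a function-calling response assumed to be a JSON
# list of {"name", "arguments"} objects. Check each model card for the
# exact output format of a given xLAM release.

def parse_tool_calls(model_output: str):
    """Extract (name, arguments) pairs from a JSON tool-call response."""
    calls = json.loads(model_output)
    return [(c["name"], c.get("arguments", {})) for c in calls]

# A made-up response in the assumed format:
sample = '[{"name": "get_weather", "arguments": {"location": "Palo Alto"}}]'
print(parse_tool_calls(sample))  # -> [('get_weather', {'location': 'Palo Alto'})]
```

In an agent loop, each parsed pair would then be dispatched to the matching tool implementation and the result fed back to the model as the next turn.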
## 📦 Model Naming Conventions
- `xLAM-7b-r`: A general-purpose v1.0 or v2.0 release of the Large Action Model, fine-tuned for broad agentic capabilities. The `-r` suffix indicates it is a research release.
- `xLAM-7b-fc-r`: A specialized release fine-tuned for function calling (`-fc`).
