mergekit

License: LGPL v3

mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.


Why Merge Models?

Model merging is a powerful technique that allows combining the strengths of different models without the computational overhead of ensembling or the need for additional training. By operating directly in the weight space of models, merging can:

  • Combine multiple specialized models into a single versatile model
  • Transfer capabilities between models without access to training data
  • Find optimal trade-offs between different model behaviors
  • Improve performance while maintaining inference costs
  • Create new capabilities through creative model combinations

Unlike traditional ensembling which requires running multiple models, merged models maintain the same inference cost as a single model while often achieving comparable or superior performance.
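As a toy illustration of weight-space merging, a linear average of two models' weights could be sketched as follows. This is a simplified sketch, not mergekit's implementation (mergekit works out-of-core rather than holding full state dicts in memory), and the scalar parameters stand in for real weight tensors:

```python
def linear_merge(state_dicts, weights):
    """Weighted elementwise average of identically structured model weights."""
    total = sum(weights)
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts)) / total
        for name in state_dicts[0]
    }

# Two tiny "models" whose single parameter is a scalar stand-in for a tensor:
model_a = {"layer.weight": 0.0}
model_b = {"layer.weight": 2.0}
merged = linear_merge([model_a, model_b], weights=[1.0, 1.0])
# merged["layer.weight"] is the average of the two inputs
```

The merged dict has the same shape as either input, which is why a merged model keeps single-model inference cost.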

Features

Key features of mergekit include:

  • Many merge methods, with more regularly added
  • Out-of-core operation: merges run entirely on CPU or accelerated with as little as 8 GB of VRAM
  • Lazy loading of tensors for low memory use
  • Piecewise assembly of models from layer slices ("frankenmerging")
  • Flexible parameter specification, including interpolated gradients

Installation

git clone https://github.com/arcee-ai/mergekit.git
cd mergekit

pip install -e .  # install the package and make scripts available

If the above fails with the following error:

ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode:
(A "pyproject.toml" file was found, but editable mode currently requires a setuptools-based build.)

You may need to upgrade pip to > 21.3 with the command python3 -m pip install --upgrade pip.

Community & Support

Contributing

We welcome contributions to mergekit! If you have ideas for new merge methods, features, or other improvements, please check out our contributing guide for details on how to get started.

Community Tools

  • FrankensteinAI: For those who prefer a browser-based experience without local setup or hardware wrangling, the team at FrankensteinAI has built a hosted platform powered by mergekit. It also features a community gallery and leaderboard for sharing and comparing merged models.

Usage

The script mergekit-yaml is the main entry point for mergekit. It takes a YAML configuration file and an output path, like so:

mergekit-yaml path/to/your/config.yml ./output-model-directory [--cuda] [--lazy-unpickle] [--allow-crimes] [... other options]

This will run the merge and write your merged model to ./output-model-directory.

For more information on the arguments accepted by mergekit-yaml, run mergekit-yaml --help.
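For example, a minimal configuration that linearly averages two models might look like the following (the model names are placeholders; see Merge Methods for the available merge_method values):

```yaml
merge_method: linear
models:
  - model: org/model-a        # placeholder model name
    parameters:
      weight: 0.5
  - model: org/model-b        # placeholder model name
    parameters:
      weight: 0.5
dtype: float16
```

Saving this as config.yml and running mergekit-yaml config.yml ./output-model-directory would produce the merged model.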

Uploading to Hugging Face

When you have a merged model you're happy with, you may want to share it on the Hugging Face Hub. mergekit generates a README.md for your merge with some basic information for a model card. You can edit it to include more details about your merge, like giving it a good name or explaining what it's good at; rewrite it entirely; or use the generated README.md as-is. It is also possible to edit your README.md online once it has been uploaded to the Hub.

Once you're happy with your model card and merged model, you can upload it to the Hugging Face Hub using the huggingface_hub Python library.

# log in to huggingface with an access token (must have write permission)
huggingface-cli login
# upload your model
huggingface-cli upload your_hf_username/my-cool-model ./output-model-directory .

The documentation for huggingface_hub goes into more detail about other options for uploading.

Merge Configuration

Merge configurations are YAML documents specifying the operations to perform in order to produce your merged model. Below are the primary elements of a configuration file:

  • merge_method: Specifies the method to use for merging models. See Merge Methods for a list.
  • slices: Defines slices of layers from different models to be used. This field is mutually exclusive with models.
  • models: Defines entire models to be used for merging. This field is mutually exclusive with slices.
  • base_model: Specifies the base model used in some merging methods.
  • parameters: Holds various parameters such as weights and densities, which can also be specified at different levels of the configuration.
  • dtype: Specifies the data type used for the merging operation.
  • tokenizer or tokenizer_source: Determines how to construct a tokenizer for the merged model.
  • chat_template: Specifies a chat template for the merged model.
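As a sketch of the slices form, a passthrough configuration can stitch layer ranges from different models into a new stack (model names and layer ranges here are illustrative):

```yaml
merge_method: passthrough
slices:
  - sources:
      - model: org/model-a      # placeholder model name
        layer_range: [0, 16]
  - sources:
      - model: org/model-b      # placeholder model name
        layer_range: [8, 24]
dtype: float16
```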

Parameter Specification

Parameters are flexible and can be set with varying precedence. They can be specified conditionally using tensor name filters, which allows finer control such as differentiating between attention heads and fully connected layers.

Parameters can be specified as:

  • Scalars: Single floating-point values.
  • Gradients: List of floating-point values, specifying an interpolated gradient.

The parameters can be set at different levels, with decreasing precedence as follows:

  1. slices.*.sources.parameters - applying to a specific input slice
  2. slices.*.parameters - applying to a specific output slice
  3. models.*.parameters or input_model_parameters - applying to any tensors coming from specific input models
  4. parameters - catchall
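A sketch of how these levels combine, including a gradient and a tensor-name filter (the model name and filter values are illustrative):

```yaml
parameters:
  weight: 0.5                   # 4. catchall default for all tensors
models:
  - model: org/model-a          # placeholder model name
    parameters:                 # 3. applies to tensors from this model
      weight:
        - filter: self_attn     # only tensors whose names match this filter
          value: [0.0, 0.5, 1.0]  # gradient: interpolated across layers
        - value: 0.7            # all other tensors from this model
```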

Tokenizer Configuration

The tokenizer behavior can be configured in two ways: using the new tokenizer field (recommended) or the legacy tokenizer_source field (maintained for backward compatibility). These fields are mutually exclusive: use one or the other, not both.

Modern Configuration (tokenizer)

The tokenizer field provides fine-grained control over vocabulary and embeddings:

tokenizer:
  source: "union"  # or "base" or a specific model path
  tokens:          # Optional: configure specific tokens
    <token_name>:
      source: ...  # Specify embedding source
      force: false # Optional: force this embedding for all models
  pad_to_multiple_of: null  # Optional: pad vocabulary size

Tokenizer Source

The source field determines the vocabulary of the output model:

  • union: Combine vocabularies from all input models (default)
  • base: Use vocabulary from the base model
  • "path/to/model": Use vocabulary from a specific model

Token Embedding Handling

When a tokenizer is configured, each input model's embedding matrix is adjusted to match the output vocabulary before being passed to the merge method. For tokens a model already has, its own embedding is used. For tokens a model is missing, a fallback embedding is assigned using these rules:

  • If the base model has the token, use the base model's embedding
  • If only one model has the token, use that model's embedding
  • Otherwise, use an average of all available embeddings

The merge method then combines these per-model embeddings (original and filled-in) to produce the final output. This means the final embedding for a token present in multiple models is determined by your merge method (SLERP, linear, TIES, etc.), not simply taken from one model.
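The fallback rules above can be sketched as follows. This is a simplified illustration, not mergekit's actual code; embeddings are represented as plain lists and model vocabularies as dicts:

```python
def fallback_embedding(token, model_embeddings, base="base"):
    """Pick an embedding for `token` on behalf of a model that lacks it.

    model_embeddings: dict mapping model name -> {token: embedding vector}.
    """
    # Rule 1: if the base model has the token, use the base model's embedding.
    base_vocab = model_embeddings.get(base, {})
    if token in base_vocab:
        return base_vocab[token]
    # Gather embeddings from every model that does have the token.
    available = [vocab[token] for vocab in model_embeddings.values()
                 if token in vocab]
    # Rule 2: if only one model has the token, use that model's embedding.
    if len(available) == 1:
        return available[0]
    # Rule 3: otherwise, average all available embeddings elementwise.
    return [sum(vals) / len(available) for vals in zip(*available)]
```

A model missing a token thus still contributes a sensible embedding row to the merge, rather than a zero or random vector.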

You can override these defaults for specific tokens. Any tokens listed here that don't already exist in the output vocabulary will be added automatically, making this useful for introducing new special tokens.

tokenizer:
  source: union
  tokens:
    # Use the embedding from a specific model for this token
    # (token names and paths below are illustrative)
    <|im_start|>:
      source: "path/to/chatml/model"
    # Force the base model's embedding for all input models
    <|special|>:
      source: base
      force: true