SPLADE
SPLADE: sparse neural search (SIGIR21, SIGIR22)
What's New:
- November 2023: Better training code for SPLADE and for rerankers (e.g., cross-encoders, RankT5) is available; new models are coming soon on GitHub!
- July 2023: We added code for static pruning of SPLADE indexes, in order to reproduce "A Static Pruning Study on Sparse Neural Retrievers"
- May 2023: We added a new branch (based on the HF Trainer) allowing training with several negatives: https://github.com/naver/splade/tree/hf
- April 2023: We removed the weights from the repository and pushed them to Hugging Face (https://huggingface.co/naver/splade_v2_max and https://huggingface.co/naver/splade_v2_distil)
This repository contains the code to perform training, indexing and retrieval for SPLADE models. It also includes everything needed to launch evaluation on the BEIR benchmark.
TL;DR: SPLADE is a neural retrieval model which learns query/document sparse expansion via the BERT MLM head and sparse regularization. Sparse representations offer several advantages over dense approaches: efficient use of inverted indexes, explicit lexical matching, interpretability, etc. They also seem better at generalizing to out-of-domain data (BEIR benchmark).
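Concretely, a SPLADE representation is one weight per vocabulary term, obtained by passing the MLM logits through a log-saturation and max-pooling over the input tokens. Here is a minimal sketch of that computation with Hugging Face transformers (our paraphrase of the papers' formula, not the repository's own code):

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def splade_rep(text: str) -> torch.Tensor:
    # max over input tokens of log(1 + ReLU(MLM logits)), padding masked out
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**tokens).logits                # (1, seq_len, vocab_size)
    sat = torch.log1p(torch.relu(logits))              # log-saturation
    sat = sat * tokens["attention_mask"].unsqueeze(-1) # zero out padding positions
    return sat.max(dim=1).values.squeeze(0)            # (vocab_size,)

Most entries of the returned vector are zero; this is what the sparse regularization enforces, and what makes an inverted index usable.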
- (v1, SPLADE) SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking, Thibault Formal, Benjamin Piwowarski and Stéphane Clinchant. SIGIR21 short paper.
By benefiting from recent advances in training neural retrievers, our v2 models rely on hard-negative mining, distillation and better Pre-trained Language Model initialization to further increase their effectiveness, on both in-domain (MS MARCO) and out-of-domain evaluation (BEIR benchmark).
- (v2, SPLADE v2) SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval, Thibault Formal, Benjamin Piwowarski, Carlos Lassance, and Stéphane Clinchant. arXiv preprint.
- (v2bis, SPLADE++) From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective, Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. SIGIR22 short paper (extension of SPLADE v2).
Finally, by introducing several modifications (query-specific regularization, disjoint encoders, etc.), we are able to improve efficiency, achieving latency on par with BM25 under the same computing constraints.
- (efficient SPLADE) An Efficiency Study for SPLADE Models, Carlos Lassance and Stéphane Clinchant. SIGIR22 short paper.
Weights for models trained under various settings can be found on the Naver Labs Europe website, as well as on Hugging Face. Please bear in mind that SPLADE is more a class of models than a single model per se: depending on the regularization magnitude, we can obtain different models (from very sparse ones to models doing intense query/doc expansion) with different properties and performance.
splade: a spork that is sharp along one edge or both edges, enabling it to be used as a knife, a fork and a spoon.
Getting started :rocket:
Requirements
We recommend starting from a fresh environment and installing the packages from conda_splade_env.yml.
conda env create -f conda_splade_env.yml
conda activate splade_env
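As a quick sanity check that the environment is usable (a generic PyTorch check, not a repo command):
python -c "import torch; print(torch.__version__, 'cuda:', torch.cuda.is_available())"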
Usage
Playing with the model
inference_splade.ipynb allows you to load and perform inference with a trained model, in order to inspect the
predicted "bag-of-expanded-words". We provide weights for six main models:
| model | MRR@10 (MS MARCO dev) |
| --- | --- |
| naver/splade_v2_max (v2 HF) | 34.0 |
| naver/splade_v2_distil (v2 HF) | 36.8 |
| naver/splade-cocondenser-selfdistil (SPLADE++, HF) | 37.6 |
| naver/splade-cocondenser-ensembledistil (SPLADE++, HF) | 38.3 |
| naver/efficient-splade-V-large-doc (HF) + naver/efficient-splade-V-large-query (HF) (efficient SPLADE) | 38.8 |
| naver/efficient-splade-VI-BT-large-doc (HF) + naver/efficient-splade-VI-BT-large-query (HF) (efficient SPLADE) | 38.0 |
We also uploaded various models here. Feel free to try them out!
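For a quick look outside the notebook, here is a self-contained sketch (our own illustration, not the notebook's interface) that encodes a query with one of the models above and prints the number of active dimensions together with the strongest expansion terms:

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

tokens = tokenizer("why is the sky blue?", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits
# same log(1 + ReLU) + max-pooling as in the sketch of the TL;DR section
rep = (torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)).max(dim=1).values.squeeze(0)

print(f"{(rep > 0).sum().item()} active dimensions out of {rep.numel()}")
top = rep.topk(10)
for weight, idx in zip(top.values, top.indices):
    # expansion typically surfaces related terms, not just the input tokens
    print(f"{tokenizer.convert_ids_to_tokens([idx.item()])[0]}: {weight.item():.2f}")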
High-level overview of the code structure
- This repository lets you train (train.py), index (index.py), and retrieve (retrieve.py) with SPLADE models, or perform every step at once (all.py).
- To manage experiments, we rely on hydra. Please refer to conf/README.md for a complete guide on how we configured experiments.
Data
- To train models, we rely on MS MARCO data.
- We further rely on distillation and hard negative mining, from available datasets (Margin MSE Distillation, Sentence Transformers Hard Negatives) or datasets we built ourselves (e.g. negatives mined from SPLADE).
- Most of the data formats are pretty standard; for validation, we rely on an approximate validation set, following a setting similar to TAS-B.
To simplify setup, we made available all our data folders, which can be downloaded here. This link includes queries, documents, and hard-negative data, allowing training under the EnsembleDistil setting (see the v2bis paper). For the other settings (Simple, DistilMSE, SelfDistil), you also have to download:
- (Simple) standard BM25 triplets
- (DistilMSE) "Vienna" triplets for MarginMSE distillation
- (SelfDistil) triplets mined from SPLADE
After downloading, just untar the archive in the root directory; the data will be placed in the right folders.
tar -xzvf file.tar.gz
Quick start
In order to perform all steps (here on toy data, i.e. config_default.yaml), go to the root directory and run:
conda activate splade_env
export PYTHONPATH=$PYTHONPATH:$(pwd)
export SPLADE_CONFIG_NAME="config_default.yaml"
python3 -m splade.all \
config.checkpoint_dir=experiments/debug/checkpoint \
config.index_dir=experiments/debug/index \
config.out_dir=experiments/debug/out
Additional examples
We provide additional examples that can be plugged into the above code. See conf/README.md for details on how to change experiment settings.
- you can similarly run training with python3 -m splade.train (same for indexing or retrieval)
- to create Anserini-readable files (after training), run SPLADE_CONFIG_FULLPATH=/path/to/checkpoint/dir/config.yaml python3 -m splade.create_anserini +quantization_factor_document=100 +quantization_factor_query=100
- config files for various settings (distillation etc.) are available in /conf. For instance, to run the SelfDistil setting:
  - change to SPLADE_CONFIG_NAME=config_splade++_selfdistil.yaml
  - to further change parameters (e.g. the regularization lambdas, sketched just below) outside the config, run: python3 -m splade.all config.regularizer.FLOPS.lambda_q=0.06 config.regularizer.FLOPS.lambda_d=0.02
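For reference, the FLOPS regularizer that these lambdas weight is, following the SPLADE papers, the sum over vocabulary dimensions of the squared mean activation across the batch; here is a minimal sketch of the definition (not the repository's implementation):

import torch

def flops_loss(reps: torch.Tensor) -> torch.Tensor:
    # reps: (batch, vocab_size) non-negative SPLADE weights; penalizing the
    # squared per-dimension mean pushes each vocabulary dimension to fire
    # rarely, i.e. it yields short posting lists
    return (reps.mean(dim=0) ** 2).sum()

# the training loss then combines ranking and regularization, schematically:
# loss = ranking_loss + lambda_q * flops_loss(query_reps) + lambda_d * flops_loss(doc_reps)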
We provide several base configurations which correspond to the experiments in the v2bis and "efficiency" papers. Please note that these are suited to our hardware setting, i.e. 4 Tesla V100 GPUs with 32GB memory. In order to train models with e.g. a single GPU, you need to decrease the batch size for training and evaluation. Also note that, since the range of the loss might change with a different batch size, the corresponding regularization lambdas might need to be adapted. However, we provide a mono-GPU configuration config_splade++_cocondenser_ensembledistil_monogpu.yaml, for which we obtain 37.2 MRR@10 when training on a single 16GB GPU.
Evaluating a pre-trained model
Indexing (and retrieval) can be done either with our (numba-based) implementation of an inverted index, or with Anserini. Let's perform these steps using an available model (naver/splade-cocondenser-ensembledistil).
conda activate splade_env
export PYTHONPATH=$PYTHONPATH:$(pwd)
export SPLADE_CONFIG_NAME="config_splade++_cocondenser_ensembledistil"
python3 -m splade.index \
init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \
config.pretrained_no_yamlconfig=true \
config.index_dir=experiments/pre-trained/index

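Retrieval then follows the same pattern, reading the index and writing run files to the output directory (option names taken from the repo's hydra configs; treat this as a sketch if they have changed):
python3 -m splade.retrieve \
init_dict.model_type_or_dir=naver/splade-cocondenser-ensembledistil \
config.pretrained_no_yamlconfig=true \
config.index_dir=experiments/pre-trained/index \
config.out_dir=experiments/pre-trained/out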