KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation
Source code for TACL 2021 paper KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation.
Requirements
- PyTorch version >= 1.1.0
- Python version >= 3.5
- For training new models, you'll also need an NVIDIA GPU and NCCL
- For faster training, install NVIDIA's apex library with the --cuda_ext option
Installation
This repo is developed on top of fairseq. Install our modified version from source, in the same way as installing fairseq from source:
pip install cython
git clone https://github.com/THU-KEG/KEPLER
cd KEPLER
pip install --editable .
Pre-training
Preprocessing for MLM data
Refer to the RoBERTa document for the detailed data preprocessing of the datasets used in the Masked Language Modeling (MLM) objective.
Preprocessing for KE data
The pre-training with the KE objective requires the Wikidata5M dataset (an alternative download source, which may be faster within China, is Tsinghua Cloud). Here we use the transductive split of Wikidata5M to demonstrate how to preprocess the KE data. The scripts used below are in this folder.
Download the Wikidata5M transductive data and its corresponding corpus, and then uncompress them:
wget -O wikidata5m_transductive.tar.gz https://www.dropbox.com/s/6sbhm0rwo4l73jq/wikidata5m_transductive.tar.gz?dl=1
wget -O wikidata5m_text.txt.gz https://www.dropbox.com/s/7jp4ib8zo3i6m10/wikidata5m_text.txt.gz?dl=1
tar -xzvf wikidata5m_transductive.tar.gz
gzip -d wikidata5m_text.txt.gz
Convert the original Wikidata5M files into the numerical format used in pre-training:
python convert.py --text wikidata5m_text.txt \
--train wikidata5m_transductive_train.txt \
--valid wikidata5m_transductive_valid.txt \
--converted_text Qdesc.txt \
--converted_train train.txt \
--converted_valid valid.txt
Encode the entity descriptions with the GPT-2 BPE:
mkdir -p gpt2_bpe
wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
python -m examples.roberta.multiprocessing_bpe_encoder \
--encoder-json gpt2_bpe/encoder.json \
--vocab-bpe gpt2_bpe/vocab.bpe \
--inputs Qdesc.txt \
--outputs Qdesc.bpe \
--keep-empty \
--workers 60
Do negative sampling and dump the whole training and validation data:
python KGpreprocess.py --dumpPath KE1 \
-ns 1 \
--ent_desc Qdesc.bpe \
--train train.txt \
--valid valid.txt
The above command generates training and validation data for one epoch. You can generate data for more epochs by running it multiple times and dumping the results to different folders (e.g. KE2, KE3, ...), as in the sketch below.
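For reference, a minimal sketch of such a loop (the epoch count of 5 and the folder names KE1 ... KE5 are placeholders; each run of KGpreprocess.py performs its own negative sampling):
import subprocess

# Illustration only: generate KE data for several epochs into separate folders.
for epoch in range(1, 6):
    subprocess.run(
        ["python", "KGpreprocess.py",
         "--dumpPath", "KE{}".format(epoch),
         "-ns", "1",
         "--ent_desc", "Qdesc.bpe",
         "--train", "train.txt",
         "--valid", "valid.txt"],
        check=True,
    )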
The KE training data generated above may contain too many instances, which makes a single training epoch take too long. We therefore randomly split the KE training data into smaller parts, so that the number of training instances in each part aligns with the MLM training data:
python splitDump.py --Path KE1 \
--split_size 6834352 \
--negative_sampling_size 1
KE1 will be split into KE1_0, KE1_1, KE1_2, and KE1_3. We then binarize them for training:
wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
# If fairseq-preprocess cannot be found, use "python -m fairseq_cli.preprocess" instead.
for KE_Data in ./KE1_0/ ./KE1_1/ ./KE1_2/ ./KE1_3/ ; do \
for SPLIT in head tail negHead negTail; do \
fairseq-preprocess \
--only-source \
--srcdict gpt2_bpe/dict.txt \
--trainpref ${KE_Data}${SPLIT}/train.bpe \
--validpref ${KE_Data}${SPLIT}/valid.bpe \
--destdir ${KE_Data}${SPLIT} \
--workers 60; \
done \
done
Running
An example pre-training script:
TOTAL_UPDATES=125000 # Total number of training steps
WARMUP_UPDATES=10000 # Warmup the learning rate over this many updates
LR=6e-04 # Peak LR for polynomial LR scheduler.
NUM_CLASSES=2
MAX_SENTENCES=3 # Batch size.
NUM_NODES=16 # Number of machines
ROBERTA_PATH="path/to/roberta.base/model.pt" # Path to the original RoBERTa model
CHECKPOINT_PATH="path/to/checkpoints" # Directory to store the checkpoints
UPDATE_FREQ=`expr 784 / $NUM_NODES` # Gradient accumulation steps (increases the effective batch size)
DATA_DIR=../Data
# Path to the preprocessed KE datasets; each colon-separated item is a data directory for one epoch
KE_DATA=$DATA_DIR/KEI/KEI1_0:$DATA_DIR/KEI/KEI1_1:$DATA_DIR/KEI/KEI1_2:$DATA_DIR/KEI/KEI1_3:$DATA_DIR/KEI/KEI3_0:$DATA_DIR/KEI/KEI3_1:$DATA_DIR/KEI/KEI3_2:$DATA_DIR/KEI/KEI3_3:$DATA_DIR/KEI/KEI5_0:$DATA_DIR/KEI/KEI5_1:$DATA_DIR/KEI/KEI5_2:$DATA_DIR/KEI/KEI5_3:$DATA_DIR/KEI/KEI7_0:$DATA_DIR/KEI/KEI7_1:$DATA_DIR/KEI/KEI7_2:$DATA_DIR/KEI/KEI7_3:$DATA_DIR/KEI/KEI9_0:$DATA_DIR/KEI/KEI9_1:$DATA_DIR/KEI/KEI9_2:$DATA_DIR/KEI/KEI9_3:
DIST_SIZE=`expr $NUM_NODES \* 4`
# $DATA_DIR/MLM is the preprocessed MLM dataset; --KEdata points to the preprocessed KE datasets.
# --negative-sample-size is the negative sampling size (one negative head and one negative tail).
# --gamma is the margin of the KE objective.
fairseq-train $DATA_DIR/MLM \
--KEdata $KE_DATA \
--restore-file $ROBERTA_PATH \
--save-dir $CHECKPOINT_PATH \
--max-sentences $MAX_SENTENCES \
--tokens-per-sample 512 \
--task MLMetKE \
--sample-break-mode complete \
--required-batch-size-multiple 1 \
--arch roberta_base \
--criterion MLMetKE \
--dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
--optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
--clip-norm 0.0 \
--lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_UPDATES --warmup-updates $WARMUP_UPDATES \
--update-freq $UPDATE_FREQ \
--negative-sample-size 1 \
--ke-model TransE \
--init-token 0 \
--separator-token 2 \
--gamma 4 \
--nrelation 822 \
--skip-invalid-size-inputs-valid-test \
--fp16 --fp16-init-scale 2 --threshold-loss-scale 1 --fp16-scale-window 128 \
--reset-optimizer --distributed-world-size ${DIST_SIZE} --ddp-backend no_c10d --distributed-port 23456 \
--log-format simple --log-interval 1
# Add --relation-desc to encode the relation descriptions as relation embeddings (KEPLER-Rel in the paper).
Note: The above command assumes distributed training on 64 16GB V100 GPUs across 16 machines. If you have fewer GPUs, or GPUs with less memory, you may need to reduce $MAX_SENTENCES and increase $UPDATE_FREQ to compensate. Alternatively, if you have more GPUs, you can decrease $UPDATE_FREQ accordingly to increase training speed.
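As a rough sanity check (assuming one data-parallel worker per GPU; the numbers below come from the example script):
# Illustration only: effective batch size implied by the example script.
num_gpus = 16 * 4          # NUM_NODES machines x 4 GPUs each (= DIST_SIZE = 64)
max_sentences = 3          # sequences per GPU per step (MAX_SENTENCES)
update_freq = 784 // 16    # gradient accumulation steps (UPDATE_FREQ = 49)
effective_batch = max_sentences * num_gpus * update_freq
print(effective_batch)     # 9408 sequences per parameter update
# With fewer GPUs, increase update_freq (and/or reduce max_sentences) to keep this roughly constant.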
Note: If you are interested in the detailed implementations, the main ones are in tasks/MLMetKE.py and criterions/MLMetKE.py. We encourage you to get familiar with the fairseq toolkit before studying KEPLER's implementation details.
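For intuition about the KE options above (--ke-model TransE with --gamma 4), here is a minimal sketch of a TransE-style scoring function; it is an illustration under the stated assumptions, not the repository's exact code (see criterions/MLMetKE.py for that):
import torch

def transe_score(head, relation, tail, gamma=4.0, p=1):
    # TransE-style plausibility score: gamma - ||h + r - t||_p.
    # In KEPLER, head/tail embeddings are produced by encoding entity descriptions;
    # gamma corresponds to the --gamma margin. Higher scores mean more plausible triples.
    return gamma - torch.norm(head + relation - tail, p=p, dim=-1)

# Toy usage with random embeddings (batch of 2 triples, hidden size 768).
h, r, t = (torch.randn(2, 768) for _ in range(3))
print(transe_score(h, r, t))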
Usage for NLP Tasks
We release the pre-trained checkpoint for NLP tasks. Since KEPLER does not modify the RoBERTa model architecture, the KEPLER checkpoint can be used directly in the same way as RoBERTa checkpoints in downstream NLP tasks, for example as sketched below.
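A minimal sketch of loading the checkpoint with fairseq, exactly as one would load a RoBERTa checkpoint (assuming the placeholder directory /path/to/KEPLER contains model.pt and the standard RoBERTa dict.txt):
from fairseq.models.roberta import RobertaModel

# Load the KEPLER checkpoint the same way as a RoBERTa checkpoint (paths are placeholders).
kepler = RobertaModel.from_pretrained('/path/to/KEPLER', checkpoint_file='model.pt')
kepler.eval()  # disable dropout for evaluation

tokens = kepler.encode('KEPLER is a unified model for knowledge embedding and language representation.')
features = kepler.extract_features(tokens)
print(features.shape)  # (1, sequence_length, 768) contextual representations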
Convert Checkpoint to HuggingFace's Transformers
For fine-tuning and downstream usage, it is more convenient to convert the original fairseq checkpoints into HuggingFace's Transformers format.
The conversion can be done with this code; an example command is:
python -m transformers.convert_roberta_original_pytorch_checkpoint_to_pytorch \
--roberta_checkpoint_path path_to_KEPLER_checkpoint \
--pytorch_dump_folder_path path_to_output
The path_to_KEPLER_checkpoint should contain model.pt (the downloaded KEPLER checkpoint) and dict.txt (standard RoBERTa dictionary file).
Note that newer versions of HuggingFace's Transformers require fairseq>=0.9.0 for this conversion, while the modified fairseq library in this repo (with which our checkpoints were generated) is based on fairseq==0.8.0. The two versions differ slightly in checkpoint format, so transformers<=2.2.2 or pytorch_transformers is needed for the checkpoint conversion here.
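After conversion, the checkpoint can be loaded like any RoBERTa model in Transformers. A minimal sketch (path_to_output is the conversion output directory from above; the standard roberta-base tokenizer is used, since KEPLER reuses RoBERTa's BPE vocabulary):
from transformers import RobertaModel, RobertaTokenizer

# Load the converted KEPLER checkpoint as a standard RoBERTa model (paths are placeholders).
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')  # KEPLER reuses RoBERTa's BPE vocabulary
model = RobertaModel.from_pretrained('path_to_output')        # directory produced by the conversion script

input_ids = tokenizer.encode("KEPLER encodes entity descriptions as entity embeddings.", return_tensors="pt")
outputs = model(input_ids)
print(outputs[0].shape)  # (1, sequence_length, 768) contextual representations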
TACRED
We suggest using the converted HuggingFace's Transformers checkpoint along with the OpenNRE library to perform experiments on TACRED. Example code will be added soon.
To directly fine-tune KEPLER on TACRED in the fairseq framework, please refer to this script. The script requires 2x16GB V100 GPUs.
FewRel
To fine-tune KEPLER on FewRel, you can use the official code in the FewRel repo and set --encoder roberta as well as --pretrained_checkpoint path_to_converted_KEPLER.
OpenEntity
Please refer to this directory and this script for the code of the OpenEntity experiments.
The code is modified on top of ERNIE.
GLUE
For the fine-tuning on GLUE tasks, refer to the RoBERTa document.
