# BPEmb
BPEmb is a collection of pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE) and trained on Wikipedia. Its intended use is as input for neural models in natural language processing.
Website ・ Usage ・ Download ・ MultiBPEmb ・ Paper (pdf) ・ Citing BPEmb
## Usage
Install BPEmb with pip:

```bash
pip install bpemb
```
Embeddings and SentencePiece models will be downloaded automatically the first time you use them.
```python
>>> from bpemb import BPEmb
# load English BPEmb model with default vocabulary size (10k) and 50-dimensional embeddings
>>> bpemb_en = BPEmb(lang="en", dim=50)
downloading https://nlp.h-its.org/bpemb/en/en.wiki.bpe.vs10000.model
downloading https://nlp.h-its.org/bpemb/en/en.wiki.bpe.vs10000.d50.w2v.bin.tar.gz
```
You can do two main things with BPEmb. The first is subword segmentation:
```python
# apply English BPE subword segmentation model
>>> bpemb_en.encode("Stratford")
['▁strat', 'ford']
# load Chinese BPEmb model with vocabulary size 100k and default (100-dim) embeddings
>>> bpemb_zh = BPEmb(lang="zh", vs=100000)
# apply Chinese BPE subword segmentation model
>>> bpemb_zh.encode("这是一个中文句子")  # "This is a Chinese sentence."
['▁这是一个', '中文', '句子']  # ["This is a", "Chinese", "sentence"]
```
Whether and how a word gets split depends on the vocabulary size. Generally, a smaller vocabulary size yields a segmentation into many subwords, while a larger vocabulary size results in frequent words not being split at all:
| vocabulary size | segmentation |
| --- | --- |
| 1000 | ['▁str', 'at', 'f', 'ord'] |
| 3000 | ['▁str', 'at', 'ford'] |
| 5000 | ['▁str', 'at', 'ford'] |
| 10000 | ['▁strat', 'ford'] |
| 25000 | ['▁stratford'] |
| 50000 | ['▁stratford'] |
| 100000 | ['▁stratford'] |
| 200000 | ['▁stratford'] |
The second purpose of BPEmb is to provide pretrained subword embeddings:
```python
# Embeddings are wrapped in a gensim KeyedVectors object
>>> type(bpemb_zh.emb)
gensim.models.keyedvectors.Word2VecKeyedVectors
# You can use BPEmb objects like gensim KeyedVectors
>>> bpemb_en.most_similar("ford")
[('bury', 0.8745079040527344),
 ('ton', 0.8725000619888306),
 ('well', 0.871537446975708),
 ('ston', 0.8701574206352234),
 ('worth', 0.8672043085098267),
 ('field', 0.859795331954956),
 ('ley', 0.8591548204421997),
 ('ington', 0.8126075267791748),
 ('bridge', 0.8099068999290466),
 ('brook', 0.7979353070259094)]
```
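Under the hood, `most_similar` ranks vocabulary entries by cosine similarity to the query vector. A minimal numpy sketch of that ranking, using a small random matrix and toy vocabulary in place of the pretrained `bpemb_en.vectors` (the names `vocab` and `vectors` and this re-implementation are illustrative, not the gensim API):

```python
# Rank nearest neighbours by cosine similarity, the same ordering
# gensim's most_similar produces. Random vectors stand in for the
# pretrained embedding matrix.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["ford", "bury", "ton", "bridge", "apple"]
vectors = rng.normal(size=(len(vocab), 50))  # stand-in for bpemb_en.vectors

def most_similar(word, topn=3):
    q = vectors[vocab.index(word)]
    # cosine similarity = dot product of L2-normalised vectors
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ (q / np.linalg.norm(q))
    order = np.argsort(-sims)  # descending similarity
    # drop the query word itself, keep the topn closest entries
    return [(vocab[i], float(sims[i])) for i in order if vocab[i] != word][:topn]

print(most_similar("ford"))
```

With real BPEmb vectors this reproduces rankings like the one above, where place-name suffixes such as 'bury' and 'bridge' cluster near 'ford'.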
```python
>>> type(bpemb_en.vectors)
numpy.ndarray
>>> bpemb_en.vectors.shape
(10000, 50)
>>> bpemb_zh.vectors.shape
(100000, 100)
```
To use subword embeddings in your neural network, either encode your input into subword IDs:
```python
>>> ids = bpemb_zh.encode_ids("这是一个中文句子")
>>> ids
[25950, 695, 20199]
>>> bpemb_zh.vectors[ids].shape
(3, 100)
```
Or use the embed method:
```python
# apply Chinese subword segmentation and perform embedding lookup
>>> bpemb_zh.embed("这是一个中文句子").shape
(3, 100)
```
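`embed` returns one row per subword, so its output length varies with the input. One common way to turn that variable-length `(num_subwords, dim)` array into a fixed-size input for a downstream model is mean pooling. A sketch under the assumption of a random stand-in matrix for the pretrained vectors (the helpers `embed_ids` and `sentence_vector` are illustrative, not part of the bpemb API):

```python
# Mean-pool subword vectors into a single fixed-size sentence vector,
# so sentences of any length map to the same input dimensionality.
import numpy as np

rng = np.random.default_rng(0)
subword_vectors = rng.normal(size=(100, 50))  # stand-in for bpemb vectors

def embed_ids(ids):
    # mirrors bpemb_zh.vectors[ids]: one row per subword id
    return subword_vectors[ids]

def sentence_vector(ids):
    return embed_ids(ids).mean(axis=0)  # (len(ids), dim) -> (dim,)

ids = [12, 7, 42]  # e.g. the ids returned by encode_ids
print(sentence_vector(ids).shape)  # (50,)
```

Mean pooling discards word order; for order-sensitive tasks the per-subword rows are typically fed to an RNN or transformer instead.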
## Downloads for each language
ab (Abkhazian) ・ ace (Achinese) ・ ady (Adyghe) ・ af (Afrikaans) ・ ak (Akan) ・ als (Alemannic) ・ am (Amharic) ・ an (Aragonese) ・ ang (Old English) ・ ar (Arabic) ・ arc (Official Aramaic) ・ arz (Egyptian Arabic) ・ as (Assamese) ・ ast (Asturian) ・ atj (Atikamekw) ・ av (Avaric) ・ ay (Aymara) ・ az (Azerbaijani) ・ azb (South Azerbaijani)
ba (Bashkir) ・ bar (Bavarian) ・ bcl (Central Bikol) ・ be (Belarusian) ・ bg (Bulgarian) ・ bi (Bislama) ・ bjn (Banjar) ・ bm (Bambara) ・ bn (Bengali) ・ bo (Tibetan) ・ bpy (Bishnupriya) ・ br (Breton) ・ bs (Bosnian) ・ bug (Buginese) ・ bxr (Russia Buriat)
ca (Catalan) ・ cdo (Min Dong Chinese) ・ ce (Chechen) ・ ceb (Cebuano) ・ ch (Chamorro) ・ chr (Cherokee) ・ chy (Cheyenne) ・ ckb (Central Kurdish) ・ co (Corsican) ・ cr (Cree) ・ crh (Crimean Tatar) ・ cs (Czech) ・ csb (Kashubian) ・ cu (Church Slavic) ・ cv (Chuvash) ・ cy (Welsh)
da (Danish) ・ de (German) ・ din (Dinka) ・ diq (Dimli) ・ dsb (Lower Sorbian) ・ dty (Dotyali) ・ dv (Dhivehi) ・ dz (Dzongkha)
ee (Ewe) ・ el (Modern Greek) ・ en (English) ・ eo (Esperanto) ・ es (Spanish) ・ et (Estonian) ・ eu (Basque) ・ ext (Extremaduran)
fa (Persian) ・ ff (Fulah) ・ fi (Finnish) ・ fj (Fijian) ・ fo (Faroese) ・ fr (French) ・ frp (Arpitan) ・ frr (Northern Frisian) ・ fur (Friulian) ・ fy (Western Frisian)
ga (Irish) ・ gag (Gagauz) ・ gan (Gan Chinese) ・ gd (Scottish Gaelic) ・ gl (Galician) ・ glk (Gilaki) ・ gn (Guarani) ・ gom (Goan Konkani) ・ got (Gothic) ・ gu (Gujarati) ・ gv (Manx)
ha (Hausa) ・ hak (Hakka Chinese) ・ haw (Hawaiian) ・ he (Hebrew) ・ hi (Hindi) ・ hif (Fiji Hindi) ・ hr (Croatian) ・ hsb (Upper Sorbian) ・ ht (Haitian) ・ hu (Hungarian) ・ hy (Armenian)
ia (Interlingua) ・ id (Indonesian) ・ ie (Interlingue) ・ ig (Igbo) ・ ik (Inupiaq) ・ ilo (Iloko) ・ io (Ido) ・ is (Icelandic) ・ it (Italian) ・ iu (Inuktitut)
ja (Japanese) ・ jam (Jamaican Creole English) ・ jbo (Lojban) ・ jv (Javanese)
ka (Georgian) ・ kaa (Kara-Kalpak) ・ kab (Kabyle) ・ kbd (Kabardian) ・ kbp (Kabiyè) ・ kg (Kongo) ・ ki (Kikuyu) ・ kk (Kazakh) ・ kl (Kalaallisut) ・ km (Central Khmer) ・ kn (Kannada) ・ ko (Korean) ・ koi (Komi-Permyak) ・ krc (Karachay-Balkar) ・ ks (Kashmiri) ・ ksh (Kölsch) ・ ku (Kurdish) ・ kv (Komi) ・ kw (Cornish) ・ ky (Kirghiz)
la (Latin) ・ lad (Ladino) ・ lb (Luxembourgish) ・ lbe (Lak)
