Wikipedia2Vec
Wikipedia2Vec is a tool for obtaining embeddings (or vector representations) of words and entities (i.e., concepts that have corresponding pages in Wikipedia) from Wikipedia. It is developed and maintained by Studio Ousia.
This tool enables you to learn embeddings of words and entities simultaneously, and places similar words and entities close to one another in a continuous vector space. Embeddings can be easily trained by a single command with a publicly available Wikipedia dump as input.
This tool implements the conventional skip-gram model to learn the embeddings of words, and its extension proposed in Yamada et al. (2016) to learn the embeddings of entities.
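Very roughly, and in simplified notation that paraphrases rather than reproduces the paper's own, the joint objective of Yamada et al. (2016) sums three skip-gram-style terms: one predicting context words from words in the text, one predicting neighboring entities from entities in Wikipedia's internal link graph, and one predicting the words surrounding anchor links from the entities those anchors refer to:

$$\mathcal{L} \;=\; \underbrace{\sum_{(w,\,c)} \log p(c \mid w)}_{\text{words}} \;+\; \underbrace{\sum_{(e,\,e')} \log p(e' \mid e)}_{\text{link graph}} \;+\; \underbrace{\sum_{(e,\,c)} \log p(c \mid e)}_{\text{anchor contexts}}$$

where each conditional is a skip-gram-style probability over the shared embedding space, which is what places related words and entities near one another.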
An empirical comparison between Wikipedia2Vec and existing embedding tools (i.e., FastText, Gensim, RDF2Vec, and Wiki2vec) is available in the documentation.
Documentation is available online at http://wikipedia2vec.github.io/.
Basic Usage
Wikipedia2Vec can be installed via PyPI:
% pip install wikipedia2vec
With this tool, embeddings can be learned by running the train command with a Wikipedia dump as input. For example, the following commands download the latest English Wikipedia dump and learn embeddings from it:
% wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
% wikipedia2vec train enwiki-latest-pages-articles.xml.bz2 MODEL_FILE
The learned embeddings are then written to MODEL_FILE. Note that the train command accepts many optional parameters; please refer to the documentation for further details.
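Once training finishes, the model can be loaded and queried from Python. A minimal sketch using the package's documented API (MODEL_FILE is the file produced above; entity names are Wikipedia page titles):

from wikipedia2vec import Wikipedia2Vec

# Load the model produced by the train command above.
wiki2vec = Wikipedia2Vec.load('MODEL_FILE')

# Look up the embedding of a word and of an entity (a Wikipedia page title).
word_vector = wiki2vec.get_word_vector('the')
entity_vector = wiki2vec.get_entity_vector('Scarlett Johansson')

# Retrieve the five words or entities closest to a given word.
print(wiki2vec.most_similar(wiki2vec.get_word('yoda'), 5))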
Pretrained Embeddings
Pretrained embeddings for 12 languages (i.e., English, Arabic, Chinese, Dutch, French, German, Italian, Japanese, Polish, Portuguese, Russian, and Spanish) can be downloaded from the project documentation.
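The pretrained files are distributed both in a binary format loadable with Wikipedia2Vec.load and in a word2vec-style text format. As an illustrative sketch of loading the latter with gensim (the file name, the ENTITY/ key prefix, and the underscore convention are assumptions about the distributed text files; check the download page for the actual naming):

from gensim.models import KeyedVectors

# Load a pretrained file in word2vec text format (hypothetical file name).
vectors = KeyedVectors.load_word2vec_format('enwiki_20180420_300d.txt')

# Word vectors sit under plain keys; entity vectors are assumed to sit
# under keys prefixed with "ENTITY/", with spaces replaced by underscores.
print(vectors['the'][:5])
print(vectors['ENTITY/Scarlett_Johansson'][:5])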
Use Cases
Wikipedia2Vec has been applied to the following tasks:
- Entity linking: Yamada et al., 2016, Eshel et al., 2017, Chen et al., 2019, Poerner et al., 2020, van Hulst et al., 2020.
- Named entity recognition: Sato et al., 2017, Lara-Clares and Garcia-Serrano, 2019.
- Question answering: Yamada et al., 2017, Poerner et al., 2020.
- Entity typing: Yamada et al., 2018.
- Text classification: Yamada et al., 2018, Yamada and Shindo, 2019, Alam et al., 2020.
- Relation classification: Poerner et al., 2020.
- Paraphrase detection: Duong et al., 2018.
- Knowledge graph completion: Shah et al., 2019, Shah et al., 2020.
- Fake news detection: Singh et al., 2019, Ghosal et al., 2020.
- Plot analysis of movies: Papalampidi et al., 2019.
- Novel entity discovery: Zhang et al., 2020.
- Entity retrieval: Gerritse et al., 2020.
- Deepfake detection: Zhong et al., 2020.
- Conversational information seeking: Rodriguez et al., 2020.
- Query expansion: Rosin et al., 2020.
References
If you use Wikipedia2Vec in a scientific publication, please cite the following paper:
Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto, Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia.
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
The embedding model was originally proposed in the following paper:
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation.
@inproceedings{yamada2016joint,
title={Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation},
author={Yamada, Ikuya and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu},
booktitle={Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning},
year={2016},
publisher={Association for Computational Linguistics},
pages={250--259}
}
The text classification model implemented in the accompanying example was proposed in the following paper:
Ikuya Yamada, Hiroyuki Shindo, Neural Attentive Bag-of-Entities Model for Text Classification.
@inproceedings{yamada2019neural,
title={Neural Attentive Bag-of-Entities Model for Text Classification},
author={Yamada, Ikuya and Shindo, Hiroyuki},
booktitle={Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)},
year={2019},
publisher={Association for Computational Linguistics},
pages = {563--573}
}
License
