tomotopy
========
.. image:: https://badge.fury.io/py/tomotopy.svg
    :target: https://pypi.python.org/pypi/tomotopy

.. image:: https://zenodo.org/badge/186155463.svg
    :target: https://zenodo.org/badge/latestdoi/186155463
🌐 English, 한국어_.

.. _한국어: README.kr.rst
What is tomotopy?
-----------------
tomotopy is a Python extension of tomoto (Topic Modeling Tool), a Gibbs-sampling based topic model library written in C++.
It utilizes vectorization on modern CPUs to maximize speed.
The current version of tomoto supports several major topic models, including:

- Latent Dirichlet Allocation (tomotopy.LDAModel)
- Labeled LDA (tomotopy.LLDAModel)
- Partially Labeled LDA (tomotopy.PLDAModel)
- Supervised LDA (tomotopy.SLDAModel)
- Dirichlet Multinomial Regression (tomotopy.DMRModel)
- Generalized Dirichlet Multinomial Regression (tomotopy.GDMRModel)
- Hierarchical Dirichlet Process (tomotopy.HDPModel)
- Hierarchical LDA (tomotopy.HLDAModel)
- Multi Grain LDA (tomotopy.MGLDAModel)
- Pachinko Allocation (tomotopy.PAModel)
- Hierarchical PA (tomotopy.HPAModel)
- Correlated Topic Model (tomotopy.CTModel)
- Dynamic Topic Model (tomotopy.DTModel)
- Pseudo-document based Topic Model (tomotopy.PTModel)
Please visit https://bab2min.github.io/tomotopy for more information.
Getting Started
---------------
You can install tomotopy easily using pip (https://pypi.org/project/tomotopy/). ::

    $ pip install --upgrade pip
    $ pip install tomotopy
The supported OS and Python versions are:
- Linux (x86-64) with Python >= 3.6
- macOS >= 10.13 with Python >= 3.6
- Windows 7 or later (x86, x86-64) with Python >= 3.6
- Other OS with Python >= 3.6: Compilation from source code required (with a C++14-compatible compiler)
After installing, you can start using tomotopy just by importing it. ::

    import tomotopy as tp
    print(tp.isa)  # prints 'avx512', 'avx2', 'sse2' or 'none'
Currently, tomotopy can exploit the AVX512, AVX2, or SSE2 SIMD instruction sets to maximize performance.
When the package is imported, it checks the available instruction sets and selects the best option.
If tp.isa reports none, training iterations may take a long time.
But since most modern Intel and AMD CPUs provide a SIMD instruction set, SIMD acceleration usually yields a big improvement.
Here is sample code for simple LDA training on texts from a 'sample.txt' file. ::

    import tomotopy as tp

    mdl = tp.LDAModel(k=20)
    for line in open('sample.txt'):
        mdl.add_doc(line.strip().split())

    for i in range(0, 100, 10):
        mdl.train(10)
        print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

    for k in range(mdl.k):
        print('Top 10 words of topic #{}'.format(k))
        print(mdl.get_topic_words(k, top_n=10))

    mdl.summary()
Performance of tomotopy
-----------------------
tomotopy uses Collapsed Gibbs Sampling (CGS) to infer the distribution of topics and the distribution of words.
Generally, CGS converges more slowly than the Variational Bayes (VB) approach that `gensim's LdaModel`_ uses, but each of its iterations can be computed much faster.
In addition, tomotopy can take advantage of multicore CPUs and SIMD instruction sets, which results in faster iterations.
.. _gensim's LdaModel: https://radimrehurek.com/gensim/models/ldamodel.html
The following charts compare the running time of LDA training between tomotopy and gensim.
The input data consists of 1,000 random documents from English Wikipedia with 1,506,966 words (about 10.1 MB).
tomotopy trains for 200 iterations and gensim trains for 10 iterations.
.. image:: https://bab2min.github.io/tomotopy/images/tmt_i5.png
Performance in Intel i5-6600, x86-64 (4 cores)
.. image:: https://bab2min.github.io/tomotopy/images/tmt_xeon.png
Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)
Although tomotopy ran 20 times more iterations, its overall running time was 5~10 times faster than gensim's, and it yields a stable result.
It is difficult to compare CGS and VB directly because they are totally different techniques, but from a practical point of view we can compare their speed and results. The following chart shows the log-likelihood per word of the two models' results.
.. image:: https://bab2min.github.io/tomotopy/images/LLComp.png
The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.
.. image:: https://bab2min.github.io/tomotopy/images/SIMDComp.png
Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so we can enjoy its performance.
Model Save and Load
-------------------
tomotopy provides save and load methods for each topic model class,
so you can save a model to a file at any time and reload it later.
::

    import tomotopy as tp

    mdl = tp.HDPModel()
    for line in open('sample.txt'):
        mdl.add_doc(line.strip().split())

    for i in range(0, 100, 10):
        mdl.train(10)
        print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

    # save into file
    mdl.save('sample_hdp_model.bin')

    # load from file
    mdl = tp.HDPModel.load('sample_hdp_model.bin')
    for k in range(mdl.k):
        if not mdl.is_live_topic(k): continue
        print('Top 10 words of topic #{}'.format(k))
        print(mdl.get_topic_words(k, top_n=10))

    # the saved model is an HDP model,
    # so loading it with the LDA model class will raise an exception
    mdl = tp.LDAModel.load('sample_hdp_model.bin')
When you load a model from a file, the model type stored in the file must match the class whose load method you call.
See more at tomotopy.LDAModel.save and tomotopy.LDAModel.load methods.
Interactive Model Viewer
------------------------
Since v0.13.0, you can explore the results of a trained model using the interactive viewer.
::

    import tomotopy as tp

    model = tp.LDAModel(...)
    # ... some training code ...
    tp.viewer.open_viewer(model, host="localhost", port=9999)
    # Then open http://localhost:9999 in your web browser!
If you have a saved model file, you can also use the following command line.
::

    python -m tomotopy.viewer a_trained_model.bin --host localhost --port 9999
See more at tomotopy.viewer module.
Documents in the Model and out of the Model
-------------------------------------------
We can use a topic model for two major purposes: the basic one is discovering topics from a set of documents as the result of model training, and the more advanced one is inferring topic distributions for unseen documents using the trained model.
We call a document used for the former purpose (model training) a document in the model, and a document used for the latter purpose (unseen during training) a document out of the model.
In tomotopy, these two kinds of document are generated differently.
A document in the model can be created by the tomotopy.LDAModel.add_doc method.
add_doc can only be called before tomotopy.LDAModel.train starts.
In other words, once train has been called, add_doc can no longer add a document to the model, because the set of documents used for training has become fixed.
To acquire the instance of a created document, use tomotopy.LDAModel.docs like:
::

    mdl = tp.LDAModel(k=20)
    idx = mdl.add_doc(words)
    if idx < 0: raise RuntimeError("Failed to add doc")
    doc_inst = mdl.docs[idx]
    # doc_inst is an instance of the added document
A document out of the model is generated by the tomotopy.LDAModel.make_doc method. make_doc can be called only after train starts.
If you use make_doc before the set of documents used for training has become fixed, you may get wrong results.
Since make_doc returns the instance directly, you can use its return value for other manipulations.
::

    mdl = tp.LDAModel(k=20)
    # add_doc ...
    mdl.train(100)
    doc_inst = mdl.make_doc(unseen_doc)  # doc_inst is an instance of the unseen document
Inference for Unseen Documents
------------------------------
If a new document is created by tomotopy.LDAModel.make_doc, its topic distribution can be inferred by the model.
Inference for unseen documents is performed using the tomotopy.LDAModel.infer method.
::

    mdl = tp.LDAModel(k=20)
    # add_doc ...
    mdl.train(100)
    doc_inst = mdl.make_doc(unseen_doc)
    topic_dist, ll = mdl.infer(doc_inst)
    print("Topic Distribution for Unseen Docs: ", topic_dist)
    print("Log-likelihood of inference: ", ll)
The infer method accepts either a single instance of tomotopy.Document or a list of such instances.
See more at tomotopy.LDAModel.infer.
Corpus and transform
--------------------
Every topic model in tomotopy has its own internal document type.
A document suitable for each model can be created and added through that model's add_doc method.
However, adding the same list of documents to several different models becomes quite inconvenient,
because add_doc has to be called on the same list of documents again and again.
