
Wikidata Integrator


Telegram channel

We have a telegram channel for Wikidata bot developers using the Wikidata Integrator. Follow this link to join this channel.

Installation

The easiest way to install WikidataIntegrator is with pip (or pip3). WikidataIntegrator requires Python 3.8 or higher, hence the suggestion to use pip3: if Python 2 is the default interpreter, plain pip will fail with errors about missing dependencies.

pip3 install wikidataintegrator

You can also clone the repository and run the setup script, either with administrator rights or inside a virtualenv:


git clone https://github.com/SuLab/WikidataIntegrator.git

cd WikidataIntegrator

python3 setup.py install

To verify the installation, start a Python console and execute the following (this will retrieve the Wikidata item for 'human'):

from wikidataintegrator import wdi_core

my_first_wikidata_item = wdi_core.WDItemEngine(wd_item_id='Q5')

# to check that installation and data retrieval succeeded,
# print the JSON representation of the item
my_first_wikidata_item.get_wd_json_representation()

Introduction

WikidataIntegrator is a library for reading and writing to Wikidata/Wikibase. We created it for populating Wikidata with content from authoritative resources on Genes, Proteins, Diseases, Drugs and others. Details on the different tasks can be found on the bot's Wikidata page.

Pywikibot is an existing framework for interacting with the MediaWiki API. We built our own solution because we need tight integration with the Wikidata SPARQL endpoint to ensure data consistency (duplicate checks, consistency checks, correct item selection, etc.).

Compared to Pywikibot, WikidataIntegrator is currently not a full Python wrapper for the MediaWiki API; it focuses solely on providing an easy means of writing Python-based Wikidata bots. In that sense it resembles a basic database connector such as JDBC or ODBC.

Note: Rate Limits

New users may hit rate limits (8 edits per minute) when editing or creating items. Autoconfirmed users (accounts at least 4 days old with at least 50 edits) should not need to worry about these limits. Users who anticipate making large numbers of edits to Wikidata should create a separate bot account and request approval.

The Core Parts

wdi_core supports two modes of operation: a normal mode, which updates one item at a time, and a 'fastrun' mode, which pre-loads existing data locally and only updates items whose new data differs from what is already in Wikidata. The latter allows for great speedups (measured up to 9x) when tens of thousands of Wikidata items need to be checked for updates but only a small number will actually change, a situation usually encountered when keeping Wikidata in sync with an external resource.

wdi_core provides the central class WDItemEngine; wdi_login provides WDLogin for authenticating with Wikidata/Wikipedia.

wdi_core.WDItemEngine

This is the central class which does all the heavy lifting.

Features:

  • Load a Wikidata item based on data to be written (e.g. a unique central identifier)
  • Load a Wikidata item based on its Wikidata item id (aka QID)
  • Checks for conflicts automatically (e.g. multiple items carrying a unique central identifier will trigger an exception)
  • Checks automatically if the correct item has been loaded by comparing it to the data provided
  • All Wikidata data types implemented
  • A dedicated WDItemEngine.write() method allows loading and consistency checks of data before any write to Wikidata is performed
  • Full access to the whole Wikidata item as a JSON document
  • Minimize the number of HTTP requests for reads and writes to improve performance
  • Method to easily execute SPARQL queries on the Wikidata endpoint.

There are two ways of working with Wikidata items:

  • A user can provide data, and WDItemEngine will search for and load/modify an existing item or create a new one, solely based on the data provided (preferred). This also performs consistency checks based on a set of SPARQL queries.
  • A user can work with a selected QID to specifically modify the data on that item. This requires that the user knows what they are doing and should only be used with great care, as no consistency checks are performed.

Examples below illustrate the usage of WDItemEngine.

wdi_login.WDLogin

Login with username and password

In order to write bots for Wikidata, a bot account is required and each script needs to go through a login procedure. To obtain a bot account on Wikidata, a specific task needs to be determined and then proposed to the Wikidata community. If the community discussion concludes that your bot code and account are useful for Wikidata, you are ready to go. However, wdi_core can also run with a normal user account; the main difference is a lower per-minute write limit.

wdi_login.WDLogin provides the login functionality and also stores the cookies and edit tokens required (for security reasons, every Wikidata edit requires an edit token). The constructor takes two essential parameters, username and password. Additionally, the server (default www.wikidata.org) and the token renewal periods can be specified.

    login_instance = wdi_login.WDLogin(user='<bot user name>', pwd='<bot password>')     

Login using OAuth1

The Wikimedia universe currently only supports authentication via OAuth1. If WDI is to be used as the backend of a web app, or if the bot should authenticate via OAuth, WDI supports this: simply specify the consumer token and consumer secret when instantiating wdi_login.WDLogin. In contrast to username/password login, OAuth is a two-step process, as manual user confirmation is required. This means the method continue_oauth() needs to be called after creating the wdi_login.WDLogin instance.

Example:

    login_instance = wdi_login.WDLogin(consumer_key='<your_consumer_key>', consumer_secret='<your_consumer_secret>')
    login_instance.continue_oauth()

The method continue_oauth() will either prompt the user for a callback URL (normal bot runs) or take the callback URL as a parameter. The latter covers the case where WDI is used as the backend of, e.g., a web app: the callback provides the authentication information directly to the backend, so no copy-and-paste of the callback URL is required.

Wikidata Data Types

Currently, Wikidata supports 17 different data types. The data types are represented as their own classes in wdi_core. Each data type has its specialties, which means that some of them require special parameters (e.g. Globe Coordinates).

The data types currently implemented:

  • wdi_core.WDCommonsMedia
  • wdi_core.WDExternalID
  • wdi_core.WDForm
  • wdi_core.WDGeoShape
  • wdi_core.WDGlobeCoordinate
  • wdi_core.WDItemID
  • wdi_core.WDLexeme
  • wdi_core.WDMath
  • wdi_core.WDMonolingualText
  • wdi_core.WDMusicalNotation
  • wdi_core.WDProperty
  • wdi_core.WDQuantity
  • wdi_core.WDSense
  • wdi_core.WDString
  • wdi_core.WDTabularData
  • wdi_core.WDTime
  • wdi_core.WDUrl

For details on how to create values (= instances) of these data types, please consult the docstrings in the source code for now. Of note, these data type instances hold the values and, if specified, further data type instances for references and qualifiers. Calling the get_value() method of an instance returns an integer, a string, or a tuple, depending on the complexity of the data type.

Helper Methods

Execute SPARQL queries

The method wdi_core.WDItemEngine.execute_sparql_query() lets you execute SPARQL queries without hassle. It takes the query string (query), optional prefixes (prefix) if you do not want to use the standard Wikidata prefixes, the endpoint URL (endpoint), and a user agent for the HTTP header sent to the SPARQL server (user_agent). The user agent is very useful for letting the endpoint operators know who you are, especially if you execute many queries; it allows them to contact you (e.g. specify an email address or the URL of your bot's code repository).

Logging

The method wdi_core.WDItemEngine.log() allows using the Python built-in logging functionality to collect errors and other log output. It takes two parameters, the log level (level) and the log message (message). It is advisable to keep separate logs for each bot run.
