DHParser

DSL-Toolkit for Digital Humanities Applications (mirrors gitlab.lrz.de/badw-it/DHParser)

DHParser - Rapid prototyping of formal grammars and domain specific languages (DSL) in the Digital Humanities. See https://dhparser.readthedocs.io/en/latest/

This software is open source software under the Apache 2.0-License (see section License, below).

Copyright 2016-2025 Eckhart Arnold, Bavarian Academy of Sciences and Humanities

Purpose

DHParser is a parser development framework that has been developed with three main purposes in mind:

  1. Developing parsers for domain-specific languages and notations, either existing notations like LaTeX or newly created DSLs like the Medieval-Latin-Dictionary-DSL. Typically, these languages are strict formal languages whose grammar can be described with a context-free grammar. (Where this does not hold, as with TeX, it is often still possible to describe a reasonably large subset of the formal language with a context-free grammar.)

  2. Developing parsers for semi-structured or informally structured text data.
    This kind of data is typically what you get when retro-digitizing textual sources such as printed bibliographies, reference works, or dictionaries. Often such works can be captured with a formal grammar, but these grammars take many iterations and tests to develop and usually become much more ramified than the grammars of well-designed formal languages. Hence DHParser's elaborate testing and debugging framework for grammars.

    (See Florian Zacherl's dissertation on the retro-digitization of dictionary data for an interesting case study. I am confident that developing a suitable formal grammar is much easier with an elaborate framework like DHParser than with the PHP parsing-expression-grammar kit that Florian Zacherl used.)

  3. Developing processing pipelines for tree-structured data. In typical digital-humanities applications, one wants to produce different forms of output (say, printed, online human-readable, online machine-readable) from one and the same data source. Therefore, the parsing stage (if the data source is structured text data) will be followed by more or less intricate, bifurcated processing pipelines.
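The idea of a bifurcated pipeline can be illustrated without DHParser at all: one source tree feeds several serializers, each producing a different output format. A minimal standard-library sketch (the node layout and serializer names are invented for illustration, not DHParser's API):

```python
# A tiny document tree: (tag, children-or-text) tuples stand in for
# the tree a parsing stage would produce.
tree = ("doc", [("head", "Title"), ("par", "Some text.")])

def to_html(node):
    """One pipeline branch: serialize for human-readable online output."""
    tag, content = node
    if isinstance(content, str):
        return f"<{tag}>{content}</{tag}>"
    return f"<{tag}>" + "".join(to_html(c) for c in content) + f"</{tag}>"

def to_plain(node):
    """Another branch: serialize the same tree for print-oriented output."""
    tag, content = node
    if isinstance(content, str):
        return content
    return "\n".join(to_plain(c) for c in content)

# One data source, two branches of the pipeline:
print(to_html(tree))   # <doc><head>Title</head><par>Some text.</par></doc>
print(to_plain(tree))
```

In a real DHParser project, the tree would come from the parsing stage and each branch would typically involve several transformation steps before serialization.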

Features

Ease of use

Directly compile existing EBNF-grammars:

DHParser recognizes various dialects of EBNF and PEG syntax for specifying grammars. For a given grammar specification in EBNF or PEG, there is a good chance that DHParser can generate a parser either right away or with only minor changes or additions.

You can try this by compiling the file XML_W3C_SPEC.ebnf in the examples/XML directory of the source tree, which contains the official XML grammar directly extracted from www.w3.org/TR/xml/:

$ dhparser examples/XML/XML_W3C_SPEC.ebnf

This command produces a Python script XML_W3C_SPECParser.py in the same directory as the EBNF file. The script can be run on any XML file and will yield its concrete syntax tree, e.g.:

$ python examples/XML/XML_W3C_SPECParser.py examples/XML/example.xml

Note that the concrete syntax tree of an XML file, as returned by the generated parser, is not the same as the data tree encoded by that XML file. To obtain the data tree, further transformations are necessary. See examples/XML/XMLParser.py for an example of how this can be done.
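The distinction can be seen with the standard library alone: xml.etree yields the data tree, in which purely syntactic material no longer appears.

```python
import xml.etree.ElementTree as ET

# The data tree of this document is just one 'greeting' element
# with an attribute and a text child ...
doc = '<greeting lang="en">Hello</greeting>'
root = ET.fromstring(doc)
print(root.tag, root.attrib, root.text)   # greeting {'lang': 'en'} Hello

# ... whereas a concrete syntax tree additionally records the syntactic
# scaffolding: the '<' and '>' delimiters, the '=' sign and quotation
# marks of the attribute, the closing tag, and so on. Transformations
# that strip this scaffolding turn the one into the other.
```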

Use (small) grammars on the fly in Python code:

Small grammars can also be compiled directly from Python code. (Here, we use DHParser's preferred syntax, which does not require trailing semicolons and uses the tilde ~ as a special marker for "insignificant" whitespace.)

key_value_store.py:

#!/usr/bin/env python 
# A mini-DSL for a key value store
from DHParser.dsl import create_parser

# specify the grammar of your DSL in EBNF-notation
grammar = r'''@ drop = whitespace, strings
key_store   = ~ { entry }
entry       = key "="~ value          # ~ means: insignificant whitespace 
key         = /\w+/~                  # Scanner-less parsing: Use regular
value       = /\"[^"\n]*\"/~          # expressions wherever you like'''

# generating a parser is almost as simple as compiling a regular expression
parser = create_parser(grammar)       # parser factory for thread-safety

Now, parse some text and extract the data from the Python shell:

>>> from key_value_store import parser
>>> text = '''
        title    = "Odysee 2001"
        director = "Stanley Kubrick"
    '''
>>> data = parser(text)
>>> for entry in data.select('entry'):
        print(entry['key'], entry['value'])

title "Odysee 2001"
director "Stanley Kubrick"

Or, serialize as XML:

>>> print(data.as_xml())

<key_store>
  <entry>
    <key>title</key>
    <value>"Odysee 2001"</value>
  </entry>
  <entry>
    <key>director</key>
    <value>"Stanley Kubrick"</value>
  </entry>
</key_store>
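For comparison, this particular key-value format is simple enough to handle with a few lines of hand-written standard-library code; the grammar-based approach above pays off as soon as the format grows more ramified (nesting, comments, error reporting):

```python
import re

# A hand-rolled counterpart to the key_store grammar above:
# 'key = "value"' entries separated by insignificant whitespace.
ENTRY = re.compile(r'\s*(\w+)\s*=\s*"([^"\n]*)"')

def parse_key_value(text):
    """Return the entries of a key-value store as a list of pairs."""
    return [(m.group(1), m.group(2)) for m in ENTRY.finditer(text)]

text = '''
    title    = "Odysee 2001"
    director = "Stanley Kubrick"
'''
print(parse_key_value(text))
# [('title', 'Odysee 2001'), ('director', 'Stanley Kubrick')]
```

Unlike the generated parser, this sketch yields flat pairs rather than a syntax tree that can be selected from or serialized as XML.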

Add the compiled grammar to your script to save startup-time:

Generating a parser with the create_parser()-function is, however, comparatively slow. If you want to save startup time, you can read out the generated parser's python_src__ attribute and add that source code to your script.

To ensure that the generated parser and the grammar stay in sync, you can check whether the parser needs to be updated with the grammar_changed()-function, as shown in the code example below:

#!/usr/bin/env python
# A mini-DSL for a key value store

import re

from DHParser.dsl import create_parser, grammar_changed
from DHParser.parse import *

# specify the grammar of your DSL in EBNF-notation
grammar = r'''@ drop = whitespace, strings
key_store   = ~ { entry }
entry       = key "="~ value          # ~ means: insignificant whitespace 
key         = /[\w]+/~                  # Scanner-less parsing: Use regular
value       = /\"[^"\n]*\"/~          # expressions wherever you like'''

# This class has been generated from the grammar above.
# Do not edit it manually!
class KeyValueGrammar(Grammar):
    r"""Parser for a KeyValue document.

    Instantiate this class and then call the instance with the
    source code as argument in order to use the parser, e.g.:
        parser = KeyValue()
        syntax_tree = parser(source_code)
    """
    source_hash__ = "d6d82738894cbdaa5b14ba8f4254e666"
    disposable__ = re.compile('$.')
    static_analysis_pending__ = []  # type: List[bool]
    parser_initialization__ = ["upon instantiation"]
    COMMENT__ = r''
    comment_rx__ = RX_NEVER_MATCH
    WHITESPACE__ = r'[ \t]*(?:\n[ \t]*(?![ \t]*\n))?'
    WSP_RE__ = mixin_comment(whitespace=WHITESPACE__, comment=COMMENT__)
    wsp__ = Whitespace(WSP_RE__)
    dwsp__ = Drop(Whitespace(WSP_RE__))
    value = Series(RegExp('\\"[^"\\n]*\\"'), dwsp__)
    key = Series(RegExp('[\\w]+'), dwsp__)
    entry = Series(key, Drop(Text("=")), dwsp__, value)
    key_store = Series(dwsp__, ZeroOrMore(entry))
    root__ = key_store

# Check if the grammar has changed. If so, recompile the grammar
# and raise an exception displaying the updated parser class.
if grammar_changed(KeyValueGrammar, grammar):
    parser = create_parser(grammar, branding="KeyValue")
    raise AssertionError(
        "Grammar changed! Please, update your source code with:\n\n" \
        + parser.python_src__)

if __name__ == "__main__":
    parser = KeyValueGrammar()
    example = '''
        title    = "Odysee 2001"
        director = "Stanley Kubrick"
    '''
    syntax_tree = parser(example)
    print(syntax_tree.as_xml())
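The staleness check behind grammar_changed() can be pictured as a fingerprint comparison: the generated class stores a hash of the grammar it was compiled from (compare the source_hash__ field of KeyValueGrammar above), and recompilation is needed whenever the current grammar text hashes differently. A minimal standard-library sketch of that idea (the exact fingerprinting algorithm is an assumption, not necessarily DHParser's):

```python
import hashlib

def grammar_fingerprint(grammar: str) -> str:
    """Hash the grammar text so that any edit changes the fingerprint."""
    return hashlib.md5(grammar.encode("utf-8")).hexdigest()

# At generation time, the fingerprint is baked into the parser class:
stored_hash = grammar_fingerprint('key = /\\w+/~')

# Later, the check amounts to recomputing and comparing:
assert grammar_fingerprint('key = /\\w+/~') == stored_hash      # in sync
assert grammar_fingerprint('key = /[a-z]+/~') != stored_hash    # edited
```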
