Megaparsec


This is an industrial-strength monadic parser combinator library. Megaparsec is a feature-rich package that tries to find a nice balance between speed, flexibility, and quality of parse errors.

Features

The project provides flexible solutions to satisfy common parsing needs. This section describes them briefly. If you're looking for comprehensive documentation, see the section about documentation.

Core features

The package is built around MonadParsec, an MTL-style type class. Most features work with all instances of MonadParsec. One can achieve various effects by combining monad transformers, i.e. building a monadic stack. Since the common monad transformers like WriterT, StateT, ReaderT, and others are instances of the MonadParsec type class, one can also wrap ParsecT in these monads, achieving, for example, backtracking state.

On the other hand ParsecT is an instance of many type classes as well. The most useful ones are Monad, Applicative, Alternative, and MonadParsec.
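As a minimal sketch of such a stack, the following layers StateT over Parsec; since Megaparsec provides a MonadParsec instance for StateT, primitives like letterChar work directly in the combined monad. The names (`countA`, `runCountA`) are illustrative, not part of the library:

```haskell
import Control.Monad.State.Strict (StateT, evalStateT, get, modify)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

-- StateT layered over Parsec: because the state sits outside the parser,
-- a failing branch that backtracks also discards its state changes,
-- which is the "backtracking state" mentioned above.
type Parser = StateT Int (Parsec Void String)

-- Count the 'a' characters encountered while consuming letters.
countA :: Parser Int
countA = do
  _ <- many $ do
    c <- letterChar
    if c == 'a' then modify (+ 1) else pure ()
    pure c
  get

runCountA :: String -> Either (ParseErrorBundle String Void) Int
runCountA = parse (evalStateT countA 0) "<example>"
```

Flipping the layers (ParsecT over State) would instead preserve state changes across backtracking, which is sometimes what you want.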

Megaparsec includes all functionality that is typically available in Parsec-like libraries and also features some special combinators:

  • parseError allows us to end parsing and report an arbitrary parse error.
  • withRecovery can be used to recover from parse errors “on-the-fly” and continue parsing. Once parsing is finished, several parse errors may be reported or ignored altogether.
  • observing makes it possible to “observe” parse errors without ending parsing.
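A minimal sketch of error recovery, assuming Megaparsec 8 or later for registerParseError; the names `item` and `items` are illustrative:

```haskell
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void String

-- Parse an integer; on failure, register the error for later
-- reporting, skip to the next comma, and yield Nothing so that
-- parsing can continue past the malformed item.
item :: Parser (Maybe Int)
item = withRecovery recover (Just . read <$> some digitChar)
  where
    recover err = do
      registerParseError err            -- delayed error (Megaparsec >= 8)
      _ <- takeWhileP Nothing (/= ',')  -- skip the malformed item
      pure Nothing

items :: Parser [Maybe Int]
items = item `sepBy` char ','
```

On well-formed input the registered errors never fire; on input like `1,x,3` the parser reaches the end and the delayed errors are reported together in the resulting bundle.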

In addition to that, Megaparsec features high-performance combinators similar to those found in [Attoparsec][attoparsec]:

  • tokens makes it easy to parse several tokens in a row (string and string' are built on top of this primitive). This is about 100 times faster than matching a string token by token. tokens returns a “chunk” of the original input, meaning that if you parse Text, it'll return Text without repacking.
  • takeWhileP and takeWhile1P are about 150 times faster than approaches involving many, manyTill, and other similar combinators.
  • takeP allows us to grab n tokens from the stream and returns them as a “chunk” of the stream.
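A short sketch of the chunk-based combinators over a Text stream; the names `identifier` and `keyword` are illustrative:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char (string)

type Parser = Parsec Void Text

-- takeWhile1P returns a chunk of the original Text without repacking.
identifier :: Parser Text
identifier = takeWhile1P (Just "identifier char") (/= ' ')

-- string is built on top of the tokens primitive.
keyword :: Parser Text
keyword = string "let"
```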

Megaparsec is about as fast as Attoparsec if you write your parser carefully (see also the section about performance).

The library can currently work with the following types of input stream out-of-the-box:

  • String = [Char]
  • ByteString (strict and lazy)
  • Text (strict and lazy)

It's also possible to make it work with custom token streams by making them an instance of the Stream type class.

Error messages

  • Megaparsec has typed error messages and the ability to signal custom parse errors that better suit the user's domain of interest.

  • Since version 8, the location of a parse error can be independent of the current offset in the input stream. This is useful when you want a parse error to point to a particular position after performing some checks.

  • Instead of a single parse error, Megaparsec produces a ParseErrorBundle data type that helps manage multiple error messages and pretty-print them. Since version 8, reporting multiple parse errors at once has become easier.
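The two points above can be sketched together with a custom error component; `ValueTooLarge` and `smallNumber` are hypothetical names, and the example assumes Megaparsec 8+ for setErrorOffset:

```haskell
import Text.Megaparsec
import Text.Megaparsec.Char

-- A domain-specific error component.
newtype ValueTooLarge = ValueTooLarge Int
  deriving (Eq, Ord, Show)

instance ShowErrorComponent ValueTooLarge where
  showErrorComponent (ValueTooLarge n) = "value too large: " ++ show n

type Parser = Parsec ValueTooLarge String

-- Parse a number, but reject anything above 99 with a custom error
-- pointing at the start of the number rather than the current offset.
smallNumber :: Parser Int
smallNumber = do
  o <- getOffset
  n <- read <$> some digitChar
  if n > 99
    then region (setErrorOffset o) (customFailure (ValueTooLarge n))
    else pure n
```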

External lexers

Megaparsec works well with streams of tokens produced by tools like Alex. The design of the Stream type class has changed significantly in recent versions, but users can still work with custom streams of tokens.

Character and binary parsing

Megaparsec has decent support for Unicode-aware character parsing. Functions for character parsing live in the [Text.Megaparsec.Char][tm-char] module. Similarly, there is [Text.Megaparsec.Byte][tm-byte] module for parsing streams of bytes.

Lexer

[Text.Megaparsec.Char.Lexer][tm-char-lexer] is a module that should help you write your lexer. If you have used Parsec in the past, this module “fixes” its particularly inflexible Text.Parsec.Token.

[Text.Megaparsec.Char.Lexer][tm-char-lexer] is intended to be imported qualified; it's not included in [Text.Megaparsec][tm]. The module doesn't dictate how you should write your parser, but certain approaches may be more elegant than others. An especially important theme is the parsing of white space, comments, and indentation.

The design of the module allows one to quickly solve simple tasks and doesn't get in the way when the need to implement something less standard arises.

[Text.Megaparsec.Byte.Lexer][tm-byte-lexer] is also available for users who wish to parse binary data.
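A typical starting point with the lexer module looks roughly like this; the space-consumer convention (`sc`, `lexeme`, `symbol`) is the common idiom, with illustrative names:

```haskell
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char
import qualified Text.Megaparsec.Char.Lexer as L

type Parser = Parsec Void String

-- Space consumer: defines once what counts as "white space", here
-- spaces plus Haskell-style line and block comments.
sc :: Parser ()
sc = L.space space1 (L.skipLineComment "--") (L.skipBlockComment "{-" "-}")

-- A lexeme parser picks up all trailing white space.
lexeme :: Parser a -> Parser a
lexeme = L.lexeme sc

symbol :: String -> Parser String
symbol = L.symbol sc

integer :: Parser Int
integer = lexeme L.decimal
```

Every token parser is then wrapped in lexeme, so white space handling is defined in exactly one place.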

Documentation

Megaparsec is well-documented. See the [current version of Megaparsec documentation on Hackage][hackage].

Tutorials

You can find the most complete Megaparsec tutorial [here][the-tutorial]. It should provide sufficient guidance to help you start with your parsing tasks.

Performance

Despite being flexible, Megaparsec is also fast. Here is how Megaparsec compares to [Attoparsec][attoparsec] (the fastest widely used parsing library in the Haskell ecosystem):

| Test case         | Execution time | Allocated | Max residency |
|-------------------|---------------:|----------:|--------------:|
| CSV (Attoparsec)  | 76.50 μs       | 397,784   | 10,544        |
| CSV (Megaparsec)  | 64.69 μs       | 352,408   | 9,104         |
| Log (Attoparsec)  | 302.8 μs       | 1,150,032 | 10,912        |
| Log (Megaparsec)  | 337.8 μs       | 1,246,496 | 10,912        |
| JSON (Attoparsec) | 18.20 μs       | 128,368   | 9,032         |
| JSON (Megaparsec) | 25.45 μs       | 203,824   | 9,176         |

You can run the benchmarks yourself by executing:

$ nix-build -A benches.parsers-bench
$ cd result/bench
$ ./bench-memory
$ ./bench-speed

More information about benchmarking and development can be found [here][hacking].

Comparison with other solutions

There are quite a few libraries that can be used for parsing in Haskell; let's compare Megaparsec with some of them.

Megaparsec vs Attoparsec

[Attoparsec][attoparsec] is another prominent Haskell library for parsing. Although both libraries deal with parsing, it's usually easy to decide which one you need in a particular project:

  • Attoparsec is sometimes faster but not as feature-rich. It should be used when you want to process large amounts of data and performance matters more than the quality of error messages.

  • Megaparsec is good for parsing source code and other human-readable text. It has better error messages and is implemented as a monad transformer.

So, if you work with something human-readable where the size of input data is moderate, it makes sense to go with Megaparsec, otherwise Attoparsec may be a better choice.

Megaparsec vs Parsec

Since Megaparsec is a fork of [Parsec][parsec], we are bound to list the main differences between the two libraries:

  • Better error messages. Megaparsec has typed error messages and custom error messages, and it can report multiple parse errors at once.

  • Megaparsec can show the line on which a parse error happened as part of the error message. This makes it a lot easier to figure out where the error occurred.

  • Some quirks and bugs of Parsec are fixed.

  • Better support for Unicode parsing in [Text.Megaparsec.Char][tm-char].

  • Megaparsec has more powerful combinators and can parse languages where indentation matters.

  • Better documentation.

  • Megaparsec can recover from parse errors “on the fly” and continue parsing.

  • Megaparsec allows us to conditionally process parse errors inside a running parser. In particular, it's possible to define regions in which parse errors, should they happen, will get a “context tag”, e.g. we could build a context stack like “in function definition foo”, “in expression x”, etc.

  • Megaparsec is faster and supports efficient operations (tokens, takeWhileP, takeWhile1P, takeP) like Attoparsec.

If you want to see a detailed change log, CHANGELOG.md may be helpful. Also see [this original announcement][original-announcement] for another comparison.

Megaparsec vs Trifecta

[Trifecta][trifecta] is another Haskell library featuring good error messages. These are the common reasons why Trifecta may be problematic to use:

  • It is complicated, has no tutorials available, and its documentation doesn't help much.

  • Trifecta can parse String and ByteString natively, but not Text.

  • Depends on lens, which is a very heavy dependency. If you're not into `lens`, this may be a problem.
