Lark is a parsing library for Python, built with a focus on ergonomics, performance and resilience.
Lark can parse all context-free languages. That means it is capable of parsing almost any programming language out there, and to some degree most natural languages too.
Who is it for?

- Beginners: Lark is very friendly for experimentation. It can parse any grammar you throw at it, no matter how complicated or ambiguous, and do so efficiently. It also constructs an annotated parse-tree for you, using only the grammar, and it gives you convenient and flexible tools to process that parse-tree.
- Experts: Lark implements both Earley (SPPF) and LALR(1), and several different lexers, so you can trade off power and speed according to your requirements. It also provides a variety of sophisticated features and utilities.
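For example, here is a minimal sketch of that trade-off, running the same grammar with the default Earley parser and with LALR(1) (the grammar itself is just an illustration, not one of Lark's examples):

```python
from lark import Lark

# An illustrative grammar: numbers separated by "+".
grammar = '''
    start: NUMBER ("+" NUMBER)*

    %import common.NUMBER
    %ignore " "
'''

earley_parser = Lark(grammar)                # default: Earley, handles any CFG
lalr_parser = Lark(grammar, parser='lalr')   # LALR(1): less general, but faster

# Both produce the same parse-tree for this input; LALR(1) just gets there faster.
print(earley_parser.parse("1 + 2 + 3").pretty())
print(lalr_parser.parse("1 + 2 + 3").pretty())
```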
What can it do?
- Parse all context-free grammars, and handle any ambiguity
- Build an annotated parse-tree automagically, no construction code required.
- Provide first-rate performance in terms of both Big-O complexity and measured run-time (considering that this is Python ;)
- Run on every Python interpreter (it's pure-python)
- Generate a stand-alone parser (for LALR(1) grammars)
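As a rough sketch of the stand-alone feature (the json.lark grammar file is an assumption, and the generated module's Lark_StandAlone interface may differ between versions):

```python
# Generate a self-contained parser module from an LALR(1) grammar
# (run on the command line; json.lark is an assumed example grammar):
#
#     python -m lark.tools.standalone json.lark > json_parser.py
#
# The generated file has no runtime dependency on Lark and can be used directly:
from json_parser import Lark_StandAlone

parser = Lark_StandAlone()
print(parser.parse('{"answer": 42}').pretty())
```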
And many more features. Read ahead and find out!
Most importantly, Lark will save you time and prevent you from getting parsing headaches.
- Documentation @readthedocs
- Cheatsheet (PDF)
- Tutorial for writing a JSON parser.
- Blog post: How to write a DSL with Lark
- Gitter chat
```bash
$ pip install lark-parser
```
Lark has no dependencies.
Lark provides syntax highlighting for its grammar files (*.lark) in several popular editors.

Clones of Lark in other languages:

- Lerche (Julia) - an unofficial clone, written entirely in Julia.
Here is a little program to parse "Hello, World!" (Or any other similar phrase):
```python
from lark import Lark

l = Lark('''start: WORD "," WORD "!"

            %import common.WORD   // imports from terminal library
            %ignore " "           // Disregard spaces in text
         ''')

print( l.parse("Hello, World!") )
```
And the output is:
```
Tree(start, [Token(WORD, 'Hello'), Token(WORD, 'World')])
```
Notice punctuation doesn't appear in the resulting tree. It's automatically filtered away by Lark.
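To go one step further, the tree can be processed with Lark's tree utilities; here is a minimal sketch using a Transformer (the ToTuple class is an illustration, not part of the example above):

```python
from lark import Lark, Transformer

greeting_parser = Lark('''start: WORD "," WORD "!"

                          %import common.WORD
                          %ignore " "
                       ''')

class ToTuple(Transformer):
    # Called for every matched "start" rule; children arrive as a list of tokens.
    def start(self, words):
        return tuple(str(w) for w in words)

tree = greeting_parser.parse("Hello, World!")
print(ToTuple().transform(tree))   # -> ('Hello', 'World')
```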
Lark is great at handling ambiguity. Let's parse the phrase "fruit flies like bananas":
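Here is a sketch along the lines of the fruit-flies example that ships with Lark (rule names and grammar details are illustrative):

```python
from lark import Lark

parser = Lark('''
    sentence: noun verb noun              -> simple
            | noun verb "like" noun       -> comparative

    noun: adj? NOUN
    verb: VERB
    adj: ADJ

    NOUN: "flies" | "bananas" | "fruit"
    VERB: "like" | "flies"
    ADJ: "fruit"

    %import common.WS
    %ignore WS
''', start='sentence', ambiguity='explicit')

# With ambiguity='explicit', the Earley parser keeps every valid derivation
# (wrapped in an "_ambig" node) instead of picking just one.
print(parser.parse('fruit flies like bananas').pretty())
```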
See more examples here
- Builds a parse-tree (AST) automagically, based on the structure of the grammar
- Earley parser
- Can parse all context-free grammars
- Full support for ambiguous grammars
- LALR(1) parser
- Fast and light, competitive with PLY
- Can generate a stand-alone parser
- CYK parser, for highly ambiguous grammars
- EBNF grammar
- Unicode fully supported
- Python 2 & 3 compatible
- Automatic line & column tracking (see the short sketch after this list)
- Standard library of terminals (strings, numbers, names, etc.)
- Import grammars from Nearley.js
- Extensive test suite
- MyPy support using type stubs
- And much more!
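As a small illustration of the automatic line & column tracking listed above (the grammar and input are illustrative):

```python
from lark import Lark

parser = Lark('''start: WORD+

                 %import common.WORD
                 %ignore " "
              ''', parser='lalr')

# Tokens in the resulting tree carry their position in the source text.
for token in parser.parse("hello world").children:
    print(token, token.line, token.column)
```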
See the full list of features here
In a JSON-parsing benchmark, Lark is the fastest and lightest of the libraries compared (lower is better).
Check out the JSON tutorial for more details on how the comparison was made.
Note: I really wanted to add PLY to the benchmark, but I couldn't find a working JSON parser anywhere written in PLY. If anyone can point me to one that actually works, I would be happy to add it!
Note 2: The parsimonious code has been optimized for this specific test, unlike the other benchmarks (Lark included). Its "real-world" performance may not be as good.
Library | Algorithm | Grammar | Builds tree? | Supports ambiguity? | Can handle every CFG? | Line/Column tracking | Generates Stand-alone |
---|---|---|---|---|---|---|---|
Lark | Earley/LALR(1) | EBNF | Yes! | Yes! | Yes! | Yes! | Yes! (LALR only) |
PLY | LALR(1) | BNF | No | No | No | No | No |
PyParsing | PEG | Combinators | No | No | No* | No | No |
Parsley | PEG | EBNF | No | No | No* | No | No |
Parsimonious | PEG | EBNF | Yes | No | No* | No | No |
ANTLR | LL(*) | EBNF | Yes | No | Yes? | Yes | No |
(* PEGs cannot handle non-deterministic grammars. Also, according to Wikipedia, it remains unanswered whether PEGs can really parse all deterministic CFGs)
- storyscript - The programming language for Application Storytelling
- tartiflette - a GraphQL engine by Dailymotion. Lark is used to parse the GraphQL schema definitions.
- Hypothesis - Library for property-based testing
- mappyfile - a MapFile parser for working with MapServer configuration
- synapse - an intelligence analysis platform
- Datacube-core - Open Data Cube analyses continental scale Earth Observation data through time
- SPFlow - Library for Sum-Product Networks
- Torchani - Accurate Neural Network Potential on PyTorch
- Command-Block-Assembly - An assembly language, and C compiler, for Minecraft commands
- Fabric-SDK-Py - Hyperledger fabric SDK with Python 3.x
- required - multi-field validation using docstrings
- miniwdl - A static analysis toolkit for the Workflow Description Language
- pytreeview - a lightweight tree-based grammar explorer
- harmalysis - A language for harmonic analysis and music theory
Using Lark? Send me a message and I'll add your project!
Lark comes with a tool to convert grammars from Nearley, a popular Earley library for JavaScript. It uses Js2Py to convert and run the JavaScript postprocessing code segments.
Here's an example:
```bash
git clone https://github.com/Hardmath123/nearley
python -m lark.tools.nearley nearley/examples/calculator/arithmetic.ne main nearley > ncalc.py
```
You can use the output as a regular python module:
```python
>>> import ncalc
>>> ncalc.parse('sin(pi/4) ^ e')
0.38981434460254655
```
Lark uses the MIT license.
(The standalone tool is under MPL2)
Lark is currently accepting pull-requests. See How to develop Lark
If you like Lark and feel like donating, you can do so at my patreon page.
If you wish for a specific feature to get a higher priority, you can request it in a follow-up email, and I'll consider it favorably.
If you have any questions or want my assistance, you can email me at erezshin at gmail com.
I'm also available for contract work.
-- Erez