API Reference

Lark
class lark.Lark(grammar, **options)

Main interface for the library.

It's mostly a thin wrapper for the many different parsers, and for the tree constructor.

Parameters:
- grammar – a string or file object containing the grammar spec (using Lark's EBNF syntax)
- options – a dictionary controlling various aspects of Lark.

Example:

>>> Lark(r'''start: "foo" ''')
Lark(...)
=== General Options ===

- start
  The start symbol. Either a string, or a list of strings for multiple possible starts (default: "start")
- debug
  Display debug information, such as warnings (default: False)
- transformer
  Applies the transformer to every parse tree (equivalent to applying it after the parse, but faster)
- propagate_positions
  Propagates (line, column, end_line, end_column) attributes into all tree branches.
- maybe_placeholders
  When True, the [] operator returns None when not matched. When False, [] behaves like the ? operator and returns no value at all. (default: False. Recommended to set to True)
- cache
  Cache the results of the Lark grammar analysis, for x2 to x3 faster loading. LALR only for now.
  - When False, does nothing (default)
  - When True, caches to a temporary file in the local directory
  - When given a string, caches to the path pointed to by the string
- regex
  When True, uses the regex module instead of the stdlib re.
- g_regex_flags
  Flags that are applied to all terminals (both regex and strings)
- keep_all_tokens
  Prevent the tree builder from automagically removing "punctuation" tokens (default: False)
- tree_class
  Lark will produce trees comprised of instances of this class instead of the default lark.Tree.
=== Algorithm Options ===
- parser
  Decides which parser engine to use. Accepts "earley" or "lalr". (default: "earley") (there is also a "cyk" option, for legacy)
- lexer
  Decides whether or not to use a lexer stage
  - "auto" (default): Choose for me based on the parser
  - "standard": Use a standard lexer
  - "contextual": Stronger lexer (only works with parser="lalr")
  - "dynamic": Flexible and powerful (only with parser="earley")
  - "dynamic_complete": Same as dynamic, but tries every variation of tokenizing possible.
- ambiguity
  Decides how to handle ambiguity in the parse. Only relevant if parser="earley"
  - "resolve": The parser will automatically choose the simplest derivation (it chooses consistently: greedy for tokens, non-greedy for rules)
  - "explicit": The parser will return all derivations wrapped in "_ambig" tree nodes (i.e. a forest).
  - "forest": The parser will return the root of the shared packed parse forest.
=== Misc. / Domain Specific Options ===
- postlex
  Lexer post-processing (default: None). Only works with the standard and contextual lexers.
- priority
  How priorities should be evaluated - auto, none, normal, invert (default: auto)
- lexer_callbacks
  Dictionary of callbacks for the lexer. May alter tokens during lexing. Use with caution.
- use_bytes
  Accept an input of type bytes instead of str (Python 3 only).
- edit_terminals
  A callback for editing the terminals before parse.
=== End Options ===
save(f)

Saves the instance into the given file object.

Useful for caching and multiprocessing.
classmethod load(f)

Loads an instance from the given file object.

Useful for caching and multiprocessing.
classmethod open(grammar_filename, rel_to=None, **options)

Create an instance of Lark with the grammar given by its filename.

If rel_to is provided, the function will find the grammar filename in relation to it.

Example:

>>> Lark.open("grammar_file.lark", rel_to=__file__, parser="lalr")
Lark(...)
parse(text, start=None, on_error=None)

Parse the given text, according to the options provided.

Parameters:
- text (str) – Text to be parsed.
- start (str, optional) – Required if Lark was given multiple possible start symbols (using the start option).
- on_error (function, optional) – If provided, will be called on UnexpectedToken error. Return True to resume parsing. LALR only. See examples/error_puppet.py for an example of how to use on_error.

Returns: If a transformer is supplied to __init__, returns whatever is the result of the transformation. Otherwise, returns a Tree instance.
Using Unicode character classes with regex

Python's builtin re module has a few persistent known bugs and also won't parse advanced regex features such as character classes. With pip install lark-parser[regex], the regex module will be installed alongside lark and can act as a drop-in replacement to re.

Any instance of Lark instantiated with regex=True will use the regex module instead of re.
For example, we can use character classes to match PEP-3131 compliant Python identifiers:

>>> from lark import Lark
>>> g = Lark(r"""
...     ?start: NAME
...     NAME: ID_START ID_CONTINUE*
...     ID_START: /[\p{Lu}\p{Ll}\p{Lt}\p{Lm}\p{Lo}\p{Nl}_]+/
...     ID_CONTINUE: ID_START | /[\p{Mn}\p{Mc}\p{Nd}\p{Pc}·]+/
... """, regex=True)
>>> g.parse('வணக்கம்')
'வணக்கம்'
Tree

class lark.Tree(data, children, meta=None)

The main tree class.

Creates a new tree, and stores "data" and "children" in attributes of the same name. Trees can be hashed and compared.

Parameters:
- data – The name of the rule or alias
- children – List of matched sub-rules and terminals
- meta – Line & column numbers (if propagate_positions is enabled). meta attributes: line, column, start_pos, end_line, end_column, end_pos
pretty(indent_str='  ')

Returns an indented string representation of the tree.

Great for debugging.
iter_subtrees()

Depth-first iteration.

Iterates over all the subtrees, never returning to the same node twice (Lark's parse-tree is actually a DAG).
find_pred(pred)

Returns all nodes of the tree that evaluate pred(node) as true.

find_data(data)

Returns all nodes of the tree whose data equals the given data.
iter_subtrees_topdown()

Breadth-first iteration.

Iterates over all the subtrees, returning nodes in order like pretty() does.
Token

class lark.Token(type_, value, pos_in_stream=None, line=None, column=None, end_line=None, end_column=None, end_pos=None)

A string with meta-information, that is produced by the lexer.

When parsing text, the resulting chunks of the input that haven't been discarded will end up in the tree as Token instances. The Token class inherits from Python's str, so normal string comparisons and operations will work as expected.

Attributes:
- type – Name of the token (as specified in grammar)
- value – Value of the token (redundant, as token.value == token will always be true)
- pos_in_stream – The index of the token in the text
- line – The line of the token in the text (starting with 1)
- column – The column of the token in the text (starting with 1)
- end_line – The line where the token ends
- end_column – The next column after the end of the token. For example, if the token is a single character with a column value of 4, end_column will be 5.
- end_pos – The index where the token ends (basically pos_in_stream + len(token))
Transformer, Visitor & Interpreter

ForestVisitor, ForestTransformer, & TreeForestTransformer
UnexpectedInput

class lark.exceptions.UnexpectedInput

UnexpectedInput Error.

Used as a base class for the following exceptions:
- UnexpectedToken: The parser received an unexpected token
- UnexpectedCharacters: The lexer encountered an unexpected string

After catching one of these exceptions, you may call the following helper methods to create a nicer error message.
get_context(text, span=40)

Returns a pretty string pinpointing the error in the text, with span amount of context characters around it.

Note: The parser doesn't hold a copy of the text it has to parse, so you have to provide it again.
match_examples(parse_fn, examples, token_type_match_fallback=False, use_accepts=False)

Allows you to detect what's wrong in the input text by matching against example errors.

Given a parser instance and a dictionary mapping some label with some malformed syntax examples, it'll return the label for the example that best matches the current error. The function will iterate the dictionary until it finds a matching error, and return the corresponding value.

For an example usage, see examples/error_reporting_lalr.py

Parameters:
- parse_fn – parse function (usually lark_instance.parse)
- examples – dictionary of {'example_string': value}.
- use_accepts – Recommended to call this with use_accepts=True. The default is False for backwards compatibility.
class lark.exceptions.UnexpectedToken(token, expected, considered_rules=None, state=None, puppet=None)

When the parser throws UnexpectedToken, it instantiates a puppet with its internal state. Users can then interactively set the puppet to the desired puppet state, and resume regular parsing.

See: ParserPuppet.
class lark.exceptions.UnexpectedCharacters(seq, lex_pos, line, column, allowed=None, considered_tokens=None, state=None, token_history=None)
ParserPuppet

class lark.parsers.lalr_puppet.ParserPuppet(parser, state_stack, value_stack, start, stream, set_state)

ParserPuppet gives you advanced control over error handling when parsing with LALR.

For a simpler, more streamlined interface, see the on_error argument to Lark.parse().

feed_token(token)

Feed the parser with a token, and advance it to the next state, as if it received it from the lexer.

Note that token has to be an instance of Token.
copy()

Create a new puppet with a separate state.

Calls to feed_token() won't affect the old puppet, and vice versa.

pretty()

Print the output of choices() in a way that's easier to read.

choices()

Returns a dictionary of token types, matched to their action in the parser.

Only returns token types that are accepted by the current state.

Updated by feed_token().
resume_parse()

Resume parsing from the current puppet state.