Fix typos #1242

Merged · 1 commit · Jan 29, 2023
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@ Lark can parse all context-free languages. To put it simply, it means that it is

 **Who is it for?**

-- **Beginners**: Lark is very friendly for experimentation. It can parse any grammar you throw at it, no matter how complicated or ambiguous, and do so efficiently. It also constructs an annotated parse-tree for you, using only the grammar and an input, and it gives you convienient and flexible tools to process that parse-tree.
+- **Beginners**: Lark is very friendly for experimentation. It can parse any grammar you throw at it, no matter how complicated or ambiguous, and do so efficiently. It also constructs an annotated parse-tree for you, using only the grammar and an input, and it gives you convenient and flexible tools to process that parse-tree.

 - **Experts**: Lark implements both Earley(SPPF) and LALR(1), and several different lexers, so you can trade-off power and speed, according to your requirements. It also provides a variety of sophisticated features and utilities.
2 changes: 1 addition & 1 deletion docs/ide/app/core.py
@@ -1001,7 +1001,7 @@ def children(self, n=None):
     """
     Access children of widget.

-    If ``n`` is ommitted, it returns a list of all child-widgets;
+    If ``n`` is omitted, it returns a list of all child-widgets;
     Else, it returns the N'th child, or None if its out of bounds.

     :param n: Optional offset of child widget to return.
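The docstring fixed above describes an accessor with an optional index. As a hedged illustration of that contract (the `Widget` class and `_children` attribute here are hypothetical, not the actual `core.py` implementation):

```python
class Widget:
    def __init__(self, children=None):
        self._children = list(children or [])

    def children(self, n=None):
        # If n is omitted, return a list of all child widgets.
        if n is None:
            return list(self._children)
        # Else return the n'th child, or None if n is out of bounds.
        if 0 <= n < len(self._children):
            return self._children[n]
        return None
```

For example, `Widget(["a", "b"]).children()` returns both children, while an out-of-range index such as `children(5)` yields `None` rather than raising.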
2 changes: 1 addition & 1 deletion examples/advanced/dynamic_complete.py
@@ -56,7 +56,7 @@ def score(tree: Tree):
     """
     Scores an option by how many children (and grand-children, and
     grand-grand-children, ...) it has.
-    This means that the option with fewer large terminals get's selected
+    This means that the option with fewer large terminals gets selected

     Between
         object
2 changes: 1 addition & 1 deletion lark/exceptions.py
@@ -217,7 +217,7 @@ class UnexpectedToken(ParseError, UnexpectedInput):
         expected: The set of expected tokens
         considered_rules: Which rules were considered, to deduce the expected tokens
         state: A value representing the parser state. Do not rely on its value or type.
-        interactive_parser: An instance of ``InteractiveParser``, that is initialized to the point of failture,
+        interactive_parser: An instance of ``InteractiveParser``, that is initialized to the point of failure,
             and can be used for debugging and error handling.

     Note: These parameters are available as attributes of the instance.
2 changes: 1 addition & 1 deletion lark/lexer.py
@@ -390,7 +390,7 @@ def _regexp_has_newline(r: str):

 class LexerState:
     """Represents the current state of the lexer as it scans the text
-    (Lexer objects are only instanciated per grammar, not per text)
+    (Lexer objects are only instantiated per grammar, not per text)
     """

     __slots__ = 'text', 'line_ctr', 'last_token'
2 changes: 1 addition & 1 deletion lark/parsers/cyk.py
@@ -250,7 +250,7 @@ def get_any_nt_unit_rule(g):


 def _remove_unit_rule(g, rule):
-    """Removes 'rule' from 'g' without changing the langugage produced by 'g'."""
+    """Removes 'rule' from 'g' without changing the language produced by 'g'."""
     new_rules = [x for x in g.rules if x != rule]
     refs = [x for x in g.rules if x.lhs == rule.rhs[0]]
     new_rules += [build_unit_skiprule(rule, ref) for ref in refs]
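The hunk above touches the CYK preprocessing step that eliminates unit rules. A simplified sketch of the idea (using a bare `Rule` namedtuple; the real `build_unit_skiprule` also records history for tree reconstruction, which this illustration omits):

```python
from collections import namedtuple

Rule = namedtuple("Rule", ["lhs", "rhs"])

def remove_unit_rule(rules, unit):
    # Drop the unit rule A -> B, then for every rule B -> gamma add A -> gamma,
    # so the language generated by the grammar is unchanged.
    new_rules = [r for r in rules if r != unit]
    refs = [r for r in rules if r.lhs == unit.rhs[0]]
    new_rules += [Rule(unit.lhs, r.rhs) for r in refs]
    return new_rules
```

For instance, removing `A -> B` from `{A -> B, B -> b, B -> c}` yields `{B -> b, B -> c, A -> b, A -> c}`, which derives the same strings.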
2 changes: 1 addition & 1 deletion lark/reconstruct.py
@@ -69,7 +69,7 @@ class Reconstructor(TreeMatcher):
     The reconstructor cannot generate values from regexps. If you need to produce discarded
     regexes, such as newlines, use `term_subs` and provide default values for them.

-    Paramters:
+    Parameters:
         parser: a Lark instance
         term_subs: a dictionary of [Terminal name as str] to [output text as str]
     """
2 changes: 1 addition & 1 deletion lark/tree_matcher.py
@@ -83,7 +83,7 @@ class TreeMatcher:

     Supports templates and inlined rules (`rule{a, b,..}` and `_rule`)

-    Initiialize with an instance of Lark.
+    Initialize with an instance of Lark.
     """

     def __init__(self, parser):
4 changes: 2 additions & 2 deletions lark/tree_templates.py
@@ -138,7 +138,7 @@ def match(self, tree: TreeOrCode) -> Optional[MatchResult]:
         return self.conf._match_tree_template(self.tree, tree)

     def search(self, tree: TreeOrCode) -> Iterator[Tuple[Tree[str], MatchResult]]:
-        """Search for all occurances of the tree template inside ``tree``.
+        """Search for all occurrences of the tree template inside ``tree``.
         """
         tree = self.conf._get_tree(tree)
         for subtree in tree.iter_subtrees():
@@ -153,7 +153,7 @@ def apply_vars(self, vars: Mapping[str, Tree[str]]) -> Tree[str]:


 def translate(t1: Template, t2: Template, tree: TreeOrCode):
-    """Search tree and translate each occurrance of t1 into t2.
+    """Search tree and translate each occurrence of t1 into t2.
     """
     tree = t1.conf._get_tree(tree)  # ensure it's a tree, parse if necessary and possible
     for subtree, vars in t1.search(tree):
4 changes: 2 additions & 2 deletions lark/utils.py
@@ -189,7 +189,7 @@ def dedup_list(l: List[T]) -> List[T]:
     dedup = set()
     # This returns None, but that's expected
     return [x for x in l if not (x in dedup or dedup.add(x))]  # type: ignore[func-returns-value]
-    # 2x faster (ordered in PyPy and CPython 3.6+, gaurenteed to be ordered in Python 3.7+)
+    # 2x faster (ordered in PyPy and CPython 3.6+, guaranteed to be ordered in Python 3.7+)
     # return list(dict.fromkeys(l))


@@ -214,7 +214,7 @@ def reversed(self) -> Dict[int, Any]:

 def combine_alternatives(lists):
     """
-    Accepts a list of alternatives, and enumerates all their possible concatinations.
+    Accepts a list of alternatives, and enumerates all their possible concatenations.

     Examples:
     >>> combine_alternatives([range(2), [4,5]])
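The two utilities touched above are easy to re-create in plain Python for illustration (this is a sketch, not Lark's exact code): an order-preserving `dedup_list` using the `dict.fromkeys` variant mentioned in the comment, and a `combine_alternatives` built on `itertools.product`:

```python
from itertools import product
from typing import Any, List

def dedup_list(l: List[Any]) -> List[Any]:
    # dict preserves insertion order (guaranteed since Python 3.7), so this
    # removes duplicates while keeping the first occurrence of each item.
    return list(dict.fromkeys(l))

def combine_alternatives(lists):
    # Enumerate every concatenation: one element drawn from each alternative list.
    return [list(combo) for combo in product(*lists)]
```

So `dedup_list([1, 2, 1, 3, 2])` gives `[1, 2, 3]`, and `combine_alternatives([range(2), [4, 5]])` gives `[[0, 4], [0, 5], [1, 4], [1, 5]]`, matching the doctest shown in the hunk.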
4 changes: 2 additions & 2 deletions tests/test_logger.py
@@ -45,7 +45,7 @@ def test_non_debug(self):
         with capture_log() as log:
             Lark(collision_grammar, parser='lalr', debug=False)
         log = log.getvalue()
-        # no log messge
+        # no log message
         self.assertEqual(len(log), 0)

     def test_loglevel_higher(self):
@@ -58,7 +58,7 @@ def test_loglevel_higher(self):
         with capture_log() as log:
             Lark(collision_grammar, parser='lalr', debug=True)
         log = log.getvalue()
-        # no log messge
+        # no log message
         self.assertEqual(len(log), 0)

 if __name__ == '__main__':
6 changes: 3 additions & 3 deletions tests/test_parser.py
@@ -430,7 +430,7 @@ def test_earley2(self):
     def test_earley3(self):
         """Tests prioritization and disambiguation for pseudo-terminals (there should be only one result)

-        By default, `+` should immitate regexp greedy-matching
+        By default, `+` should imitate regexp greedy-matching
         """
         grammar = """
         start: A A
@@ -1472,7 +1472,7 @@ def test_g_regex_flags(self):
     # # This parse raises an exception because the lexer will always try to consume
     # # "a" first and will never match the regular expression
     # # This behavior is subject to change!!
-    # # Thie won't happen with ambiguity handling.
+    # # This won't happen with ambiguity handling.
     # g = _Lark("""start: (A | /a?ab/)+
     #             A: "a" """)
     # self.assertRaises(LexError, g.parse, 'aab')
@@ -1743,7 +1743,7 @@ def test_line_and_column(self):

     def test_reduce_cycle(self):
         """Tests an edge-condition in the LALR parser, in which a transition state looks exactly like the end state.
-        It seems that the correct solution is to explicitely distinguish finalization in the reduce() function.
+        It seems that the correct solution is to explicitly distinguish finalization in the reduce() function.
         """

         l = _Lark("""
2 changes: 1 addition & 1 deletion tests/test_reconstructor.py
@@ -160,7 +160,7 @@ def test_switch_grammar_unicode_terminal(self):
         This test checks that a parse tree built with a grammar containing only ascii characters can be reconstructed
         with a grammar that has unicode rules (or vice versa). The original bug assigned ANON terminals to unicode
         keywords, which offsets the ANON terminal count in the unicode grammar and causes subsequent identical ANON
-        tokens (e.g., `+=`) to mis-match between the two grammars.
+        tokens (e.g., `+=`) to mismatch between the two grammars.
         """

         g1 = """