
[WIP] Store Syntax.node children in array #12

Closed · wants to merge 2 commits

@Kha (Member) commented Mar 28, 2019

I've done a first pass through the frontend, which could certainly be improved in some places. Unfortunately, it currently fails with an assertion violation from Array.iterateAux. I haven't taken a close look yet, but it looks like partial leads to some confusion of arguments.

end Array

def List.toArrayAux {α : Type u} : List α → Array α → Array α
@[inlineIfReduce] def List.toArrayAux {α : Type u} : List α → Array α → Array α
Member Author:

Makes for some nice array literal syntax, [a, b, c, ...].toArray. I've verified that it compiles down to n applications of Array.push.
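For reference, a plausible shape for the accumulator-based definition behind this (a sketch only: the body isn't shown in the diff, and Array.empty is assumed):

```lean
@[inlineIfReduce] def List.toArrayAux {α : Type u} : List α → Array α → Array α
| [],      r := r
| (a::as), r := List.toArrayAux as (r.push a)

-- with inlineIfReduce, a literal list unfolds at compile time,
-- so [a, b, c].toArray becomes three Array.push applications
def List.toArray {α : Type u} (l : List α) : Array α :=
List.toArrayAux l Array.empty
```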


instance many1.tokens (r : Parser) [Parser.HasTokens r] : Parser.HasTokens (many1 r) :=
⟨tokens r⟩

--TODO(Sebastian): should this be an `Array` as well?
Member Author:

We'd need a heterogeneous Array.map here, which I assume would need a builtin to avoid allocations. On the other hand, I don't think we will keep this code anyway.

Member Author:

This is a cool builtin :)
However, we would need to implement it in Lean using unsafe features. If we do it in C++, we will not be able to specialize the primitive for a given f : a -> b. We don't yet have support for implementing new "safe" primitives in Lean on top of unsafe ones. This is not the first time I have missed this feature.

Ah yes, it would need to be in Lean. Being able to write primitives in Lean would indeed be very cool. Of course, I can't help thinking about an insane safe variant, something like

def PointedArray := Array PointedType
-- identity at runtime
@[extern] def PointedArray.ofArray : Array a -> PointedArray := ...
@[extern] def PointedArray.toArray (arr : PointedArray) : arr.allType a -> Array a := ...
...

and then having a loop invariant in map that says that a prefix of the elements is of the new type and the remainder is of the old type, or something...

Member:

This might work.

Member Author:

After writing this, I remembered that this is just a special case of GHC's Coercible type class. It should correspond to the instances (except for universes, maybe?)

Coercible a {t : PointedType // t.type = a}
Coercible a b -> Coercible (Array a) (Array b)
Coercible {t : PointedType // t.type = a} a

Member:

Is Coercible an opaque type class to which only developers can add instances?
If it is opaque, we need a congruence "axiom/lemma" to make it extensible.

lemma Coercible.congr (a b : Type u) (F : Type u -> Type v) (c : Coercible a b) : Coercible (F a) (F b)

Then, we can use Coercible.congr to define new instances such as

Coercible a b -> Coercible (List a) (List b)

Member:

The reference implementation is just Eq, and the reference implementation for primitives Coercible a {t : PointedType // t.type = a} uses Eq.recOn.
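A minimal sketch of that Eq-based reference implementation (the class shape and the names Coercible/coerce here are assumptions, not actual code from the branch):

```lean
-- Coercible witnesses that two types are interchangeable;
-- the reference implementation is just propositional equality
class Coercible (α β : Type u) :=
(eq : α = β)

-- coercion via Eq.recOn: a 0-cost cast once the proof is erased
@[inline] def coerce {α β : Type u} [c : Coercible α β] (a : α) : β :=
Eq.recOn c.eq a
```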

Member:

BTW, should we use a different name? I can see users getting confused when they see Coercible and HasCoe :)

Member Author (@Kha), Apr 1, 2019:

Yes, it's not clear to me who should provide the instances in the case of Lean (and how hard we should try to prevent others from doing that). If the inductive command does it anyway for "ground coercions" like Coercible a {x : a // p}, we may not need an explicit congr axiom but let inductive handle everything.

Actually, I don't think the Subtype coercions I sketched above would help us with map, since we still couldn't express that a prefix of the Array has a different type from the remainder. It looks like we would need a dependent variant of the class instead.

class HasReinterpretDep (α : Type u) (β : α → Type v) :=
(reinterpret {} (a : α) : β a)
export HasReinterpretDep (reinterpret)
abbreviation HasReinterpret (α : Type u) (β : Type v) := HasReinterpretDep α (λ _, β)

instance PointedType.ReinterpretTo (α : Type u) : HasReinterpret α PointedType :=
⟨λ a, ⟨α, a⟩⟩

instance PointedType.ReinterpretFrom : HasReinterpretDep PointedType PointedType.type :=
⟨PointedType.val⟩

Then we could reinterpret into Array PointedType, and out of it when we can prove that all types are the same.

def HasReinterpret.FromHasReinterpretDep {α : Type u} {β : α → Type v} {β' : Type v} (h : ∀ a : α, β a = β')
  [HasReinterpretDep α β] : HasReinterpret α β' :=
⟨λ a, cast (h a) (reinterpret a)⟩

noncomputable instance (α β) [HasReinterpret α β] : HasReinterpret (Array α) (Array β) :=
⟨λ arr, ⟨arr.sz, reinterpret ∘ arr.data⟩⟩

Though to be fair, now that I've written this down... we would not need any of this for map if we had a dependently-typed variant of Array (but it might still be useful for other use cases).

Member:

we would not need any of this for map if we had a dependently-typed variant of Array (but it might still be useful for other use cases).

The dependently-typed variant would be quite messy. In this kind of variant, it is natural to include the array size in the type. From Lean 3:
https://github.com/leanprover/lean/blob/master/library/init/data/array/basic.lean#L11
This may look good for performance, but it is not. We would still have to track the array size in the runtime object (for implementing del), and we would also be implicitly carrying the size in all functions manipulating arrays.
BTW, I recall we discussed a language that treats Type arguments as the value cell size at runtime, but I am not considering this kind of solution. It would be a major refactoring, and it is not clear whether we would be able to produce better code. The only thing I know for sure is that we would produce more code and have more arguments.

Member:

The Coercion type class would also be useful to implement this idiom:
https://github.com/leanprover/lean/blob/master/tests/lean/run/term_app2.lean#L17
https://github.com/leanprover/lean/blob/master/tests/lean/run/term_app2.lean#L50
It would allow us to go from List α to List {a : α // a ∈ l} with 0 cost.
BTW, I did not think about how to define the coercion

instance {p : a -> Prop} : Coercion a {x : a // p x}

The instance above is nonsensical since the type {x : a // p x} may be uninhabited. We had a similar issue with HasCoe and resolved it by hard-coding support in the elaborator. BTW, we (Daniel, you and I) have discussed a dependent HasCoe, but it is quite messy and generates many problems for type class resolution.
Finally, recall that we have experienced many problems with HasCoe+type class resolution, and are considering using a different solution for HasCoe. We will have similar problems with Coercion.

@leodemoura (Member) commented Mar 28, 2019

I haven't taken a close look yet, but it looks like partial leads to some confusion of arguments.

I fixed a similar bug yesterday. If you send me a small repro, I will investigate.

@leodemoura (Member) left a comment:

We'd need a heterogeneous Array.map here, which I assume would need a builtin to avoid allocations. On the other hand, I don't think we will keep this code anyway.

This is a cool builtin :)
However, we would need to implement it in Lean using unsafe features. If we do it in C++, we will not be able to specialize the primitive for a given f : a -> b. We don't yet have support for implementing new "safe" primitives in Lean on top of unsafe ones. This is not the first time I have missed this feature.

@@ -65,6 +71,9 @@ a.index ⟨i.toNat, h⟩
def get [Inhabited α] (a : @& Array α) (i : @& Nat) : α :=
if h : i < a.sz then a.index ⟨i, h⟩ else default α

def oget (a : @& Array α) (i : @& Nat) : Option α :=
if h : i < a.size then some (a.index ⟨i, h⟩) else none
Member:

Borrowed annotations are only taken into account for primitive operations.

Member:

BTW, I keep struggling when trying to name functions such as oget. We have a similar problem in the List library. In a system like Lean, we always have the following family of related functions:
1- A version that assumes the result type is inhabited and uses default a.
2- A version that returns Option a or Except e a.
3- A version that takes a "proof" that guarantees there is no exceptional case. Actually, for the array library, we have two instances of this one: index (using Fin) and a low level idx using USize.
It would be great to have a convention for naming these variants and use it consistently everywhere.

Member Author:

I thought there was some precedent, but the only one I found was foldr1Opt (which I also named, oh well). So I went with getOpt now, which I definitely prefer to some single-letter prefix.

Member:

I thought there was some precedent, but the only one I found was foldr1Opt (which I also named, oh well). So I went with getOpt now, which I definitely prefer to some single-letter prefix.

The suffix Opt is better. OCaml uses nth : list a -> int -> a and nth_opt : list a -> int -> option a.
There have been other instances of the problem above in the Lean library. For example, the list library in Lean 2 had https://github.com/leanprover/lean/blob/CADE25/library/data/list/basic.lean:

definition nth : list T → nat → option T
definition inth [h : inhabited T] (l : list T) (n : nat) : T :=
definition ith : Π (l : list T) (i : nat), i < length l → T

@@ -108,9 +117,28 @@ def pop (a : Array α) : Array α :=
@[inline] def iterate (a : Array α) (b : β) (f : Π i : Fin a.sz, α → β → β) : β :=
iterateAux a f 0 b

@[inline] def foldl (a : Array α) (b : β) (f : α → β → β) : β :=
@[inline] def foldl (a : Array α) (f : α → β → β) (b : β) : β :=
iterate a b (λ _, f)
Member:

This change is consistent with the order used at List.foldl. This is good, but the order at List.foldl is awful for the $ notation. With foldl (a : Array α) (b : β) (f : α → β → β), we can write

array.foldl b $ fun a b, <big-chunk-of-code>

instead of

array.foldl (fun a b, <big-chunk-of-code>) b

Any ideas on how to improve this?

Member Author:

Yeah, parameter ordering is hard. I assume Haskell etc. chose this order for partial applications. One approach is to just hope that everything will be better eventually and we can write

let f (a b) := <big-chunk-of-code> in -- order-independent elaboration to the rescue
array.foldl f b

or maybe even

array.foldl(init := b) $ <big-chunk-of-code> -- more descriptive anyway

Member:

array.foldl(init := b) $ -- more descriptive anyway

I really want this :)

@Kha (Member Author) commented Mar 28, 2019

I'm not sure it's even the same issue, but the following lines crash the array_test on this branch:

modified   tests/compiler/array_test.lean
@@ -21,4 +21,6 @@ do
  IO.println (toString a2),
  let a2 := a.pop,
  IO.println a2,
+ a3 ← a.mmap (λ x, pure x),
+ IO.println a3,
  pure 0

@leodemoura (Member):

I'm not sure it's even the same issue, but the following lines crash the array_test on this branch:

Are you sure you have the latest version? array_test.lean is working for me. I fixed a very similar bug yesterday.

@Kha (Member Author) commented Mar 28, 2019

It doesn't work for me :(. I've rebased this branch and added the changed test.

@leodemoura (Member):

It doesn't work for me :(. I've rebased this branch and added the changed test.

My mistake, I missed your changes to array_test.lean. I managed to reproduce the crash. I will investigate.

@leodemoura (Member):

I found the issue. There is a mismatch between @miterateAux._main (the definition sent to the kernel) and @miterateAux._main._unsafe_rec (the definition sent to the compiler).

#check @miterateAux._main
#check @miterateAux._main._unsafe_rec

We use a function mk_aux_definition to add them to the environment. This function captures all arguments defined in the surrounding context. Before I added partial def, the compilation strategy would never use extra arguments from the surrounding context. This is not true anymore. In the example above, partial_rec will add [Inhabited β] to justify the base case :(

@leodemoura (Member):

I pushed a fix for the _unsafe_rec issue. It should fix the problem. I will test more later. I have to go to a meeting now.

@@ -173,7 +179,7 @@ def updateLeading (source : String) : Syntax → Syntax :=
partial def getHeadInfo : Syntax → Option SourceInfo
| (atom a) := a.info
| (ident id) := id.info
| (rawNode n) := n.args.foldr (λ s r, getHeadInfo s <|> r) none
| (rawNode n) := n.args.foldl (λ s r, getHeadInfo s <|> r) none
| _ := none
Member:

foldr is the faithful translation for the code before partial def. For example, suppose n.args is [c_1, c_2, c_3].
With foldr we get

getHeadInfo c_1 <|> (getHeadInfo c_2 <|> (getHeadInfo c_3 <|> none))

So, we stop the search as soon as a getHeadInfo c_i returns some v.
On the other hand, with foldl we get:

getHeadInfo c_3 <|> (getHeadInfo c_2 <|> (getHeadInfo c_1 <|> none))

We will traverse the whole list even if getHeadInfo c_1 returns some v.

Member:

Ignore the comment above, I got confused. Lean is strict, so both foldr and foldl will traverse the whole list. I was also incorrectly assuming that Option.orelse was marked as [inline]. After marking it with [inline], the compiler generated optimal code in both cases. Moreover, both specializations become tail recursive :)
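For concreteness, the annotation discussed is just this (a sketch in the Lean-3-style equation syntax used elsewhere in this thread):

```lean
-- Lean is strict, so <|> evaluates both arguments either way;
-- [inline] merely lets the specializer turn folds over <|> into tight loops
@[inline] def Option.orelse {α : Type u} : Option α → Option α → Option α
| (some a) _ := some a
| none     b := b
```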

@Kha (Member Author) commented Mar 29, 2019

The branch is working now. The timings are indistinguishable from master.

@leodemoura (Member):

The branch is working now. The timings are indistinguishable from master.

I think core.lean is not good for benchmarking. First, there are too many components being executed, and many of them are implemented in C++. We already have many components written in Lean, and non-trivial interactions between them. Another problem is that the compiler is also quite new, and often generates bad code and/or something that we did not expect. My fear is that as soon as we have the whole frontend in Lean, the performance will be bad, and it will be very hard to spot where the problem is and how to fix it without major refactoring.
Note that we have already found and fixed a few performance and memory consumption issues, but they were not easy to find.
Artificial benchmark generators (e.g., https://github.com/leanprover/lean4/blob/master/tests/playground/gen.lean) are great for spotting performance problems. Recall that we found memory leaks and exposed issues with the coroutine abstractions using one. We should create new generators that exercise different components of the new frontend (e.g., the expander).
In the meantime, could you please push the new array primitives you added to master? I want to profile array-manipulating programs written in Lean, and make sure the compiler is producing decent code.

else pure b

@[inline] def miterate (a : Array α) (b : β) (f : Π i : Fin a.sz, α → β → m β) : m β :=
miterateAux a f 0 b
Member Author:

Should we define iterate in terms of this? I assume that specialization works very well on the id monad :) .

Member:

Should we define iterate in terms of this? I assume that specialization works very well on the id monad :) .

It should if we assume the compiler is perfect. We should try and check the generated code.

BTW, I think we should define the id monad using

def Id (type : Type u) : Type u := type

Type expressions will look more uniform and make the user's intent clear. Example:

ExceptT e (StateT s Id) a
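A minimal sketch of the accompanying Monad instance for this definitional Id (binder details may differ in this-era Lean):

```lean
def Id (type : Type u) : Type u := type

-- Id α is literally α, so pure and bind are just application
instance : Monad Id :=
{ pure := λ α a, a,
  bind := λ α β a f, f a }
```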

Member Author:

Great, I'll do the change and check the compiler output. I agree with Id.

Member Author:

Okay, the specializations look good. Pushed to master.

@leodemoura (Member):

The views are still being created in this branch, right?
What about the 0-cost views we have discussed at Zulip?
Here is the relevant fragment from Zulip

It would be great if we could use this thread to describe alternative designs and motivate the current one.
I will start by describing an idiom we should avoid. Suppose we have a constructor

node : Kind -> List Syntax -> Syntax
and we write patterns such as

match n with
| node kind1 [c1, c2, c3, c4] := ...
| node kind2 [c1, c2] := ...
| node kind3 [c1, c2, c3, c4, c5] := ...
| ...

The generated code will be bloated with many nested List.casesOn.
Another problem is that we have many levels of indirection.
We can avoid the problem by using Array Syntax instead of List Syntax. If we use Arrays we can't easily extract the children, but we can write 0-cost views such as

@[inline] def withKind1Args {a : Type} (args : Array Syntax) (fn : Syntax -> Syntax -> Syntax -> Syntax -> a) : a :=
fn (args.get 0) (args.get 1) (args.get 2) (args.get 3)

If the default value for Syntax is Syntax.missing, then this approach should work well, and we can write the match above as:

match n with
| node kind1 as := withKind1Args as $ fun c1 c2 c3 c4, ...
| node kind2 as := withKind2Args as $ fun c1 c2, ...
| node kind3 as := withKind3Args as $ fun c1 c2 c3 c4 c5, ...
| ...

This is way faster than the previous approach using Lists. We can make it even faster by using a new primitive Tuple type. Tuples would be like arrays but without support for increasing their size dynamically. That is, we don't have the m_capacity field and save 8 bytes in the object header.

@leodemoura (Member):

BTW, I have modified List.toArray in the master branch. Now, the resultant array size and capacity will be equal to the List length. For example, List.toArray [1, ..., 7] produces an array with capacity 7 instead of 14.

@Kha (Member Author) commented Apr 1, 2019

I've pushed some very simple expander tests with and without views to master and this branch. This branch is just slightly slower without views, and about 10% slower with views, most probably because the many combinator still uses lists as tagged above. Depending on the test, the views are ~10%-50% slower than no views.

@Kha (Member Author) commented Apr 1, 2019

What about the 0-cost views we have discussed at Zulip?

I agree that 0-cost views are probably the only practical solution, but I'd really like to keep the named structure fields for my sanity. I believe we should be able to generate the same 0-cost code with view structures if we make the following adjustments:

  • views are shallow: all fields are of type Syntax
  • HasView instances should be inlined; this should not be an issue anymore with shallow views
  • viewed data should not escape the current function, thus compiling down to zero-allocation Array.gets (though join points may (currently) prevent this)
  • review should only be called on constructor terms, thus removing the allocation of the view
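A hypothetical shallow view along these lines, for an if-then-else node (the structure name, field names, and argument positions are assumptions, chosen to match the token layout discussed below):

```lean
-- every field is plain Syntax, so viewing is just a handful of Array.gets
structure IfView :=
(cond : Syntax) (thenBranch : Syntax) (elseBranch : Syntax)

-- positions 1, 3, 5 skip the `if`/`then`/`else` tokens
@[inline] def IfView.of (args : Array Syntax) : IfView :=
⟨args.get 1, args.get 3, args.get 5⟩
```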

@Kha (Member Author) commented Apr 1, 2019

The shallow view would still lose quite a bit of type information compared to the current views. We've previously discussed a more lightweight model where fields could be typed by a type like

-- "syntax with a view"
-- NOTE: does not assert the actual kind of its content, which may be `Syntax.missing`
def VSyntax (k : NodeKind) := Syntax
@[inline] def VSyntax.view {k} [HasView k v] : VSyntax k -> v := HasView.view
...

Now we have the same amount of typedness as before, though certainly many more view and review calls (textually), and we sadly lose the ability to do nested matches on views.

@leodemoura (Member):

I've pushed some very simple expander tests with and without views to master and this branch. This branch is just slightly slower without views, and about 10% slower with views, most probably because the many combinator still uses lists as tagged above. Depending on the test, the views are ~10%-50% slower than no views.

I am playing with the tests. Here are some preliminary comments.

  • A hand-written "to node2" transformation is more than 10x faster:
partial def toNode2 : Syntax → Syntax
| (Syntax.rawNode ⟨kind, as, scopes⟩) := Syntax.rawNode ⟨⟨`node2⟩, as.map toNode2, scopes⟩
| other                               := other
  • The view-based version doesn't follow the view-review idiom used in expander.lean.
def testSimple := test [(`node, λ stx, pure $ Syntax.mkNode ⟨`node2⟩ $ let v := view node stx in v.children)]
  • I found the generated code quite confusing too. It is not what I expected when I saw the file.
    Here is the generated code for node.HasView.
>> node.HasView'._lambda_1
λ (_x_1 : _obj),
  let _x_2 : _obj := Syntax.asNode._main _x_1
  in @Option.casesOn _x_2 (let _x_3 : _obj := node.HasView'._lambda_1._closed_1 in _x_3)
       (let _x_4 : _obj := _proj.0 _x_2,
            _ : _void := _inc _x_4,
            _ : _void := _dec _x_2,
            _x_5 : _obj := _proj.1 _x_4,
            _ : _void := _inc _x_5,
            _ : _void := _dec _x_4
        in @List.casesOn _x_5 (let _x_6 : _obj := node.HasView'._lambda_1._closed_1 in _x_6)
             (let _x_7 : _obj := _proj.0 _x_5,
                  _ : _void := _inc _x_7,
                  _ : _void := _dec _x_5,
                  _x_8 : _obj := Syntax.asNode._main _x_7
              in @Option.casesOn _x_8
                   (let _x_9 : _obj := Lean.Parser.detailIdentPartEscaped.HasView'._lambda_2._closed_2 in _x_9)
                   (let _x_10 : _obj := _proj.0 _x_8,
                        _ : _void := _inc _x_10,
                        _ : _void := _dec _x_8,
                        _x_11 : _obj := _proj.1 _x_10,
                        _ : _void := _inc _x_11,
                        _ : _void := _dec _x_10,
                        _x_12 : _obj := List.map._main._at.Lean.Parser.identUnivSpec.HasView'._spec_1 _x_11
                    in _x_12)))
  • The view version creates bigger trees since review leaf {} creates an extra node containing foo. Not sure whether it impacts the performance numbers or not.

@leodemoura (Member) commented Apr 1, 2019

I agree that 0-cost views are probably the only practical solution, but I'd really like to keep the named structure fields for my sanity.

People have been writing Lisp macros for a long time without using views. I agree their syntax objects are simpler since they do not have tokens, but we can easily create a 0-cost view that ignores tokens.
For example, for an if-then-else node we create

@[inline] def withIfArgs {a : Type} (args : Array Syntax) (fn : Syntax -> Syntax -> Syntax  -> a) : a :=
fn (args.get 1) (args.get 3) (args.get 5)

We may also create the "update" function that "reuses" the tokens from a given source.

@[inline] def updateIf {a : Type} (src : Syntax) (c t e : Syntax) : Syntax :=
Syntax.rawNode src.Kind [src.get 0, c, src.get 2, t, src.get 4, e]

We can automatically generate these simple functions.

@leodemoura (Member):

I believe we should be able to generate the same 0-cost code with view structures if we make the following adjustments:

I like the low tech solution I described in the previous message.

  • It doesn't rely on any fancy optimization in the compiler.
  • It doesn't rely on type classes, and we don't have to fix the type class resolution procedure to get decent compilation times.

@leodemoura (Member):

The shallow view would still lose quite a bit of type information compared to the current views.

I am fine with this. For me, it is much more important to make sure I can predict the performance/shape of the generated code. I feel that I can only achieve this with the current approach by using trace.compiler, and I keep getting surprised by unexpected hidden costs.

@leodemoura (Member):

I pushed a proof of concept for the low tech views.
https://github.com/leanprover/lean4/blob/master/tests/playground/lowtech_expander.lean
Here is the transformer for if (not c) then t else e ==> if c then e else t.

def flipIf : Transformer :=
λ n, withIf n $ λ c t e,
 isNot c (pure n.val) $ λ c',
 pure (updateIf n c' e t)

and here is the generated code:

λ (_x_1 : _obj) (_x_2 : UInt8),
  let _x_3 : _obj := _proj.1 _x_1,
      _ : _void := _inc _x_3,
      _x_4 : _obj := Syntax.Inhabited,
      _x_5 : _obj := 1,
      _x_6 : _obj := Array.get ◾ _x_4 _x_3 _x_5,
      _ : _void := _inc _x_1,
      _x_7 : _obj := Option._cnstr.1.0.0 _x_1
  in @Syntax.casesOn _x_6
       (let _ : _void := _dec _x_1, _ : _void := _dec _x_3, _x_8 : _obj := Except._cnstr.1.0.0 _x_7 in _x_8)
       (let _x_9 : _obj := _proj.0 _x_6,
            _ : _void := _inc _x_9,
            _x_10 : _obj := _proj.1 _x_6,
            _ : _void := _inc _x_10,
            _ : _void := _dec _x_6,
            _x_11 : _obj := notKind,
            _x_12 : UInt8 := Lean.Name.decEq _x_9 _x_11,
            _ : _void := _dec _x_9
        in @Bool.casesOn _x_12
             (let _ : _void := _dec _x_1,
                  _ : _void := _dec _x_10,
                  _ : _void := _dec _x_3,
                  _x_13 : _obj := Except._cnstr.1.0.0 _x_7
              in _x_13)
             (let _ : _void := _dec _x_7,
                  _x_14 : _obj := _proj.0 _x_1,
                  _ : _void := _inc _x_14,
                  _x_15 : _obj := _proj.2 _x_1,
                  _ : _void := _inc _x_15,
                  _x_16 : _obj := _reset.3 _x_1,
                  _x_17 : _obj := 3,
                  _x_18 : _obj := Array.get ◾ _x_4 _x_3 _x_17,
                  _x_19 : _obj := 5,
                  _x_20 : _obj := Array.get ◾ _x_4 _x_3 _x_19,
                  _x_21 : _obj := Array.get ◾ _x_4 _x_10 _x_5,
                  _ : _void := _dec _x_10,
                  _x_22 : _obj := Array.set ◾ _x_3 _x_5 _x_21,
                  _x_23 : _obj := Array.set ◾ _x_22 _x_17 _x_20,
                  _x_24 : _obj := Array.set ◾ _x_23 _x_19 _x_18,
                  _x_25 : _obj := _reuse.1.0.0.0 _x_16 _x_14 _x_24 _x_15,
                  _x_26 : _obj := Option._cnstr.1.0.0 _x_25,
                  _x_27 : _obj := Except._cnstr.1.0.0 _x_26
              in _x_27))
       (let _ : _void := _dec _x_6,
            _ : _void := _dec _x_1,
            _ : _void := _dec _x_3,
            _x_28 : _obj := Except._cnstr.1.0.0 _x_7
        in _x_28)
       (let _ : _void := _dec _x_6,
            _ : _void := _dec _x_1,
            _ : _void := _dec _x_3,
            _x_29 : _obj := Except._cnstr.1.0.0 _x_7
        in _x_29)

@leodemoura (Member):

I pushed a small experiment where I use the new [init] attribute to generate unique Nat values for identifying SyntaxNodeKinds: 8004fcf
There are two benefits:

  • We can use an array to map SyntaxNodeKind to transformers and elaborators.
  • Testing whether two kinds are equal or not is fast.

There is one disadvantage: we cannot store Syntax objects directly into .olean files. The issue is that the SyntaxNodeKinds may be different. However, we can fix this by having a fixKinds procedure that uses the name field to fix the id field. This trick is equivalent to the work performed by the linker in a compiler. It is also similar to the approach we wanted to use for storing low-level bytecode. Here is the proof of concept
b75706f

@Kha (Member Author) commented Apr 1, 2019

Thanks, this is very helpful. I'm still a little uneasy when I think of how more complex code like this would be written in this style, and in particular how many "type" confusions one would inevitably encounter. How would one even debug which with combinator out of 5 nested ones failed to match?

@leodemoura (Member):

I'm still a little uneasy when I think of how more complex code like this would be written in this style, and in particular how many "type" confusions one would inevitably encounter. How would one even debug which with combinator out of 5 nested ones failed to match?

I wrote the skeleton for letTransformer: d73d262

I have two versions:
1- Using a <?> combinator and a Syntax.case "kind-matching" function. The idiom would be: given an n : Syntax,

(n.case kind_1 $ fun (n : SyntaxNode), withKind1 n $ fun <args>,  <code for this case>)
<?>
...
<?>
(n.case kind_m $ fun (n : SyntaxNode), withKindM n $ fun <args>, <code for this case>)

The generated code is reasonable, but it relies on many compiler transformations to get efficient code. For example, I am currently investigating a missing optimization exposed by this idiom.

2- Using n.isNode and if-then-else chains. Given an n : Syntax, we would write:

n.isNode $ fun (k : SyntaxNodeKind) (n : SyntaxNode), 
if k == kind_1 then withKind1 n $ fun <args>, <code for this case>
else if k == kind_2 then withKind2 n $ fun <args>, <code for this case>
...
else withKindM n $ fun <args>, <code for this case>

@leodemoura (Member):

How would one even debug which with combinator out of 5 nested ones failed to match?

I am not sure whether this is a problem or not. The code above looks reasonable to me, and I would debug it like any other piece of code. Are you concerned with mistakes such as: the type in a letLhsId node is optional, but a macro writer may forget this fact and treat it as a regular term node?
I am not sure this is a big problem. Racket is untyped, and that hasn't prevented anybody from writing very complicated macros.

If debugging the transformer becomes a big problem, we can add a "trace" to TransformerM. If we use EState instead of ExceptT, we will probably produce more efficient code anyway.

@leodemoura
Copy link
Member

I inspected the generated code, and the approach using if-then-else is much more compiler-friendly than the one using the `<?>` combinator. The code pattern

```
(n.case kind1 ...)
<?>
(n.case kind2 ...)
```

generates something like the following

```
let result_1 := match n with <code for first case>
in match result_1 with
    | Except.ok none := match n with <code for second case>
    | other := ...
```

So, each `<?>` combinator application forces the compiler to perform a float-match transformation. Since each case is probably not a small piece of code, the compiler generates a new join point for each `<?>` application.
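
Schematically, the float-match transformation pushes the outer match into the branches of the inner one; the join point exists to avoid duplicating the outer branches (this is a sketch of the technique, not actual compiler output):

```
-- before: the outer match scrutinizes the result of the inner match
match (match n with | c₁ := e₁ | c₂ := e₂) with
| Except.ok none := h
| other          := g other

-- after floating, with `j` as the join point shared by both branches,
-- so the code for `h` and `g` is emitted only once:
let j := fun r, match r with | Except.ok none := h | other := g other
in match n with
   | c₁ := j e₁
   | c₂ := j e₂
```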

@Kha
Copy link
Member Author

Kha commented Apr 2, 2019

One aspect we should not forget is how we want such code to look eventually, e.g. using syntax quotations. Should we use them everywhere possible? What do they need to look like to achieve that? Should we implement them using the current views and then use them to switch to low-tech views, or should we switch before that?

@Kha
Copy link
Member Author

Kha commented Apr 2, 2019

It's not completely thought out yet, but I could imagine the code to look something like this:

def letTransformer' : Transformer :=
λ stx, syntax_match stx with
  | `(let $n:ident : $t := $_ in $_) := noExpansion
  | `(let $n:ident := $v in $b) := pure `(let $n:ident : _ := $v in $b)
  | `(let $n:ident $($b:binder)+ $t:optType := $v in $b) := noExpansion -- TODO
  | `(let $p:pattern := $v in $b) := noExpansion --TODO

which may compile down to something very close to your code, given a sufficiently smart syntax_match transformer (except for SyntaxNode, which I don't feel is the right abstraction level for syntax quotations)?

For reference, this is what I could imagine the full code to look like:

def getOptType (stx : Syntax) : Syntax :=
syntax_match stx with
| `(optType|) := `(_)
| `(optType|: $t) := t

def letTransformer' : Transformer :=
λ stx, syntax_match stx with
  | `(let $n:ident : $t := $_) := noExpansion
  | `(let $n:ident := $v) := pure `(let $n:ident : _ := $v)
  | `(let $n:ident $($b:binder)+ $t:optType := $v) :=
    pure `(let $n:ident : Π $($b:binder)+, $(getOptType t) := λ $($b:binder)+, $v)
  | `(let $p:pattern := $v in $b) := pure `(match $v with $p:pattern := $b)

@leodemoura
Copy link
Member

I agree that a syntax_match macro would be awesome. That being said, it must have a clear runtime cost model and should be compatible with memory reuse. Note that the two idioms above will reuse the memory cells for the Syntax and the array (if not shared).

@leodemoura
Copy link
Member

BTW, we will need something similar for Expr. The Lean 3 approach using match is too expensive; recall the open issue we have there.
So, I am wondering whether the syntax_match macro is an idiom we can reuse for different types such as Expr and Syntax.

@Kha
Copy link
Member Author

Kha commented Apr 2, 2019

Yes, we should be able to reuse the model for Expr. The biggest difference is that we don't need to propagate macro scopes for Expr, i.e. it should be even simpler.

@leodemoura leodemoura force-pushed the master branch 2 times, most recently from c227d0e to 865bb8d Compare April 24, 2019 18:40
@Kha
Copy link
Member Author

Kha commented Jul 1, 2019

Implemented by the new new parser

@Kha Kha closed this Jul 1, 2019
@leodemoura leodemoura deleted the syntax-array branch April 2, 2022 21:46
ChrisHughes24 pushed a commit to ChrisHughes24/lean4 that referenced this pull request Dec 2, 2022
* add Mem notation for lists

* delete comment
leodemoura added a commit that referenced this pull request Apr 29, 2024
Mathlib is crashing with #4006. Here is the stacktrace produced by Kim:
```
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x100000000000a)
  * frame #0: 0x00000001066db21c libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 2816
    frame #1: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360
    frame #2: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192
    frame #3: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360
    frame #4: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192
    frame #5: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360
    frame #6: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192
    frame #7: 0x00000001066df288 libleanshared.dylib`lean::ir::interpreter::stub_m(lean_object**) + 556
    frame #8: 0x00000001066d6ee0 libleanshared.dylib`lean_object* lean::ir::interpreter::with_interpreter<lean_object*>(lean::environment const&, lean::options const&, lean::name const&, std::__1::function<lean_object* (lean::ir::interpreter&)> const&) + 320
    frame #9: 0x00000001066dee84 libleanshared.dylib`lean::ir::interpreter::stub_m_aux(lean_object**) + 92
    frame #10: 0x00000001066deafc libleanshared.dylib`lean::ir::interpreter::stub_9_aux(lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*) + 60
    frame #11: 0x00000001066f52a0 libleanshared.dylib`lean_apply_6 + 1748
    frame #12: 0x00000001055d1ac8 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10___lambda__2 + 156
    frame #13: 0x00000001055d47e8 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10___lambda__2___boxed + 144
    frame #14: 0x00000001066f5bcc libleanshared.dylib`lean_apply_7 + 1348
    frame #15: 0x00000001055ccccc libleanshared.dylib`l_Lean_Elab_withInfoTreeContext___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__4 + 528
    frame #16: 0x00000001066f5b6c libleanshared.dylib`lean_apply_7 + 1252
    frame #17: 0x00000001055d1550 libleanshared.dylib`l_Lean_Elab_withLogging___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__6 + 240
    frame #18: 0x00000001055d4cb4 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10 + 940
    frame #19: 0x00000001055d5394 libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___lambda__1 + 60
    frame #20: 0x00000001055d5740 libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___lambda__1___boxed + 148
    frame #21: 0x00000001066f11f4 libleanshared.dylib`lean_apply_1 + 840
    frame #22: 0x0000000103bce27c libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1 + 24
    frame #23: 0x0000000103bce4ec libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1___boxed + 20
    frame #24: 0x00000001066f10bc libleanshared.dylib`lean_apply_1 + 528
    frame #25: 0x0000000106644260 libleanshared.dylib`lean_profileit + 128
    frame #26: 0x0000000103bce3e0 libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg + 112
    frame #27: 0x00000001055d564c libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore + 268
    frame #28: 0x00000001055d6264 libleanshared.dylib`l_Lean_Elab_Term_applyAttributes + 52
    frame #29: 0x000000010597b840 libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6___lambda__1 + 740
    frame #30: 0x000000010597daf4 libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6___lambda__1___boxed + 124
    frame #31: 0x00000001066f65d8 libleanshared.dylib`lean_apply_8 + 1252
    frame #32: 0x00000001066f5b6c libleanshared.dylib`lean_apply_7 + 1252
    frame #33: 0x0000000104f587b0 libleanshared.dylib`l___private_Lean_Elab_InfoTree_Main_0__Lean_Elab_withSavedPartialInfoContext___at_Lean_Elab_Command_liftTermElabM___spec__2___rarg___lambda__1 + 344
    frame #34: 0x0000000104f59ec4 libleanshared.dylib`l___private_Lean_Elab_InfoTree_Main_0__Lean_Elab_withSavedPartialInfoContext___at_Lean_Elab_Command_liftTermElabM___spec__2___rarg + 280
    frame #35: 0x0000000104f5af20 libleanshared.dylib`l_Lean_Elab_Command_liftTermElabM___rarg___lambda__1 + 144
    frame #36: 0x00000001066f5ab8 libleanshared.dylib`lean_apply_7 + 1072
    frame #37: 0x0000000105636090 libleanshared.dylib`l_Lean_Elab_Term_TermElabM_run___rarg + 844
    frame #38: 0x0000000104f5b8fc libleanshared.dylib`l_Lean_Elab_Command_liftTermElabM___rarg + 1696
    frame #39: 0x000000010597d67c libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6 + 928
    frame #40: 0x000000010597de60 libleanshared.dylib`l_Lean_Elab_Command_elabAttr + 772
    frame #41: 0x000000010597e838 libleanshared.dylib`l_Lean_Elab_Command_elabAttr___boxed + 20
    frame #42: 0x00000001066f2cd4 libleanshared.dylib`lean_apply_3 + 868
    frame #43: 0x0000000104f385f8 libleanshared.dylib`l_Lean_Elab_withInfoTreeContext___at___private_Lean_Elab_Command_0__Lean_Elab_Command_elabCommandUsing___spec__2 + 396
    frame #44: 0x0000000104f39e48 libleanshared.dylib`l___private_Lean_Elab_Command_0__Lean_Elab_Command_elabCommandUsing + 484
    frame #45: 0x00000001066f2cf0 libleanshared.dylib`lean_apply_3 + 896
    frame #46: 0x0000000104f341d4 libleanshared.dylib`l_Lean_withTraceNode___at_Lean_Elab_Command_runLinters___spec__11 + 788
    frame #47: 0x00000001066f2d54 libleanshared.dylib`lean_apply_3 + 996
    frame #48: 0x00000001066f2cf0 libleanshared.dylib`lean_apply_3 + 896
    frame #49: 0x0000000104f40e30 libleanshared.dylib`l_Lean_Elab_withLogging___at_Lean_Elab_Command_elabCommand___spec__2 + 104
    frame #50: 0x0000000104f4c51c libleanshared.dylib`l_Lean_Elab_Command_elabCommandTopLevel___lambda__1 + 432
    frame #51: 0x00000001066f10e8 libleanshared.dylib`lean_apply_1 + 572
    frame #52: 0x0000000103bce27c libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1 + 24
    frame #53: 0x0000000103bce4ec libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1___boxed + 20
    frame #54: 0x00000001066f10bc libleanshared.dylib`lean_apply_1 + 528
    frame #55: 0x0000000106644260 libleanshared.dylib`lean_profileit + 128
    frame #56: 0x0000000103bce3e0 libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg + 112
    frame #57: 0x0000000104f4fce0 libleanshared.dylib`l_Lean_Elab_Command_elabCommandTopLevel + 1284
    frame #58: 0x00000001057d2f30 libleanshared.dylib`l_Lean_Elab_Frontend_elabCommandAtFrontend + 1384
    frame #59: 0x00000001057d63b8 libleanshared.dylib`l_Lean_Elab_Frontend_processCommand + 1332
    frame #60: 0x00000001057d6e48 libleanshared.dylib`l_Lean_Elab_Frontend_processCommands + 72
    frame #61: 0x00000001057d7248 libleanshared.dylib`l_Lean_Elab_IO_processCommands + 212
    frame #62: 0x00000001057d83d0 libleanshared.dylib`l_Lean_Elab_runFrontend___lambda__3 + 76
    frame #63: 0x00000001057d96d0 libleanshared.dylib`lean_run_frontend + 2436
    frame #64: 0x00000001065e72b4 libleanshared.dylib`lean::run_new_frontend(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, lean::options const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, lean::name const&, unsigned int, lean::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>> const&, unsigned char) + 244
    frame #65: 0x00000001065e9c8c libleanshared.dylib`lean_main + 8348
    frame #66: 0x0000000184f93f28 dyld`start + 2236
```

cc @Kha
github-merge-queue bot pushed a commit that referenced this pull request Apr 29, 2024