[WIP] Store Syntax.node children in array #12
Conversation
library/init/data/array/basic.lean
Outdated
end Array

-def List.toArrayAux {α : Type u} : List α → Array α → Array α
+@[inlineIfReduce] def List.toArrayAux {α : Type u} : List α → Array α → Array α
Makes for some nice array literal syntax [a, b, c, ...].toArray. I've verified that it compiles down to n Array.push calls.
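For context, a minimal sketch of the wrapper this enables (hypothetical: the actual definition in the PR is not shown here, and Array.mkEmpty is an assumed name for an empty-array constructor with a capacity hint):

-- sketch: toArrayAux pushes each list element onto the accumulator,
-- so [a, b, c].toArray performs exactly three Array.push calls
def List.toArray {α : Type u} (xs : List α) : Array α :=
List.toArrayAux xs (Array.mkEmpty xs.length)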
instance many1.tokens (r : Parser) [Parser.HasTokens r] : Parser.HasTokens (many1 r) :=
⟨tokens r⟩

--TODO(Sebastian): should this be an `Array` as well?
We'd need a heterogeneous Array.map here, which I assume would need a builtin to avoid allocations. On the other hand, I don't think we will keep this code anyway.
This is a cool builtin :)
However, we need to implement it in Lean using unsafe features. If we do it in C++, we will not be able to specialize the primitive for a given f : a -> b. We don't yet have support for implementing new "safe" primitives in Lean using unsafe primitives. This is not the first time I have missed this feature.
Ah yes, it would need to be in Lean. Being able to write primitives in Lean would indeed be very cool. Of course, I can't help thinking about an insane safe variant, something like
def PointedArray := Array PointedType
-- identity at runtime
@[extern] def PointedArray.ofArray : Array a -> PointedArray := ...
@[extern] def PointedArray.toArray (arr : PointedArray) : arr.allType a -> Array a := ...
...
and then having a loop invariant in map that says that a prefix of the elements is of the new type and the remainder is of the old type, or something...
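For reference, a minimal sketch of the PointedType assumed above (hypothetical; it is never defined in this thread):

universe u

-- a type bundled together with a value of that type
structure PointedType : Type (u+1) :=
(type : Type u)
(val  : type)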
This might work.
After writing this, I remembered that this is just a special case of GHC's Coercible type class. It should correspond to the instances (except for universes, maybe?)
Coercible a {t : PointedType // t.type = a}
Coercible a b -> Coercible (Array a) (Array b)
Coercible {t : PointedType // t.type = a} a
Is Coercible an opaque type class to which only developers can add instances?
If it is opaque, we need a congruence "axiom/lemma" to make it extensible.
lemma Coercible.congr (a b : Type u) (F : Type u -> Type v) (c : Coercible a b) : Coercible (F a) (F b)
Then, we can use Coercible.congr to define new instances such as
Coercible a b -> Coercible (List a) (List b)
The reference implementation is just Eq, and the reference implementation for primitives Coercible a {t : PointedType // t.type = a} uses Eq.recOn.
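A minimal sketch of that reference implementation (hypothetical; the class shape, field, and function names here are assumptions, not a settled design):

universe u

-- reference implementation: the witness is just propositional equality of the two types
structure Coercible (α β : Type u) :=
(eq : α = β)

-- coercion is then a cast via Eq.recOn, which a compiler could erase at runtime
def Coercible.coerce {α β : Type u} (c : Coercible α β) (a : α) : β :=
Eq.recOn c.eq a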
BTW, should we use a different name? I can see users getting confused when they see Coercible and HasCoe :)
Yes, it's not clear to me who should provide the instances in the case of Lean (and how hard we should try to prevent others from doing that). If the inductive command does it anyway for "ground coercions" like Coercible a {x : a // p}, we may not need an explicit congr axiom but let inductive handle everything.
Actually, I don't think the Subtype coercions I sketched above would help us with map, since we still couldn't express that a prefix of the Array has a different type from the remainder. It looks like we would need a dependent variant of the class instead.
class HasReinterpretDep (α : Type u) (β : α → Type v) :=
(reinterpret {} (a : α) : β a)
export HasReinterpretDep (reinterpret)
abbreviation HasReinterpret (α : Type u) (β : Type v) := HasReinterpretDep α (λ _, β)
instance PointedType.ReinterpretTo (α : Type u) : HasReinterpret α PointedType :=
⟨λ a, ⟨α, a⟩⟩
instance PointedType.ReinterpretFrom : HasReinterpretDep PointedType PointedType.type :=
⟨PointedType.val⟩
Then we could reinterpret into Array PointedType, and out of it when we can prove that all types are the same.
def HasReinterpret.FromHasReinterpretDep {α : Type u} {β : α → Type v} {β' : Type v} (h : ∀ a : α, β a = β')
[HasReinterpretDep α β] : HasReinterpret α β' :=
⟨λ a, cast (h a) (reinterpret a)⟩
noncomputable instance (α β) [HasReinterpret α β] : HasReinterpret (Array α) (Array β) :=
⟨λ arr, ⟨arr.sz, reinterpret ∘ arr.data⟩⟩
Though to be fair, now that I've written this down... we would not need any of this for map if we had a dependently-typed variant of Array (but it might still be useful for other use cases).
we would not need any of this for map if we had a dependently-typed variant of Array (but it might still be useful for other use cases).
The dependently-typed variant will be quite messy. In this kind of variant, it is natural to include the array size in the type. From Lean 3:
https://github.com/leanprover/lean/blob/master/library/init/data/array/basic.lean#L11
This may look good for performance, but it is not. We would still have to track the array size in the runtime object (for implementing del), and we would also be implicitly carrying the size in all functions manipulating arrays.
BTW, I recall we discussed a language that treats Type arguments as the value cell size at runtime, but I am not considering this kind of solution. It would be a major refactoring, and it is not clear whether we would be able to produce better code. The only thing I know for sure is that we would produce more code and have more arguments.
The Coercion type class would also be useful to implement this idiom:
https://github.com/leanprover/lean/blob/master/tests/lean/run/term_app2.lean#L17
https://github.com/leanprover/lean/blob/master/tests/lean/run/term_app2.lean#L50
It would allow us to go from List α to List {a : α // a ∈ l} with 0 cost.
BTW, I did not think about how to define the coercion
instance {p : a -> Prop} : Coercion a {x : a // p x}
The instance above is nonsensical since the type {x : a // p x} may be uninhabited. We had a similar issue with HasCoe and resolved the issue by hard-coding support in the elaborator. BTW, we (Daniel, you and I) have discussed a dependent HasCoe, but it is quite messy and generates many problems for type class resolution.
Finally, recall that we have experienced many problems with HasCoe + type class resolution, and are considering using a different solution for HasCoe. We will have similar problems with Coercion.
I fixed a similar bug yesterday. If you send me a small repro, I will investigate.
library/init/data/array/basic.lean
Outdated
@@ -65,6 +71,9 @@ a.index ⟨i.toNat, h⟩
def get [Inhabited α] (a : @& Array α) (i : @& Nat) : α :=
if h : i < a.sz then a.index ⟨i, h⟩ else default α

def oget (a : @& Array α) (i : @& Nat) : Option α :=
if h : i < a.size then some (a.index ⟨i, h⟩) else none
Borrowed annotations are only taken into account for primitive operations.
BTW, I keep struggling when trying to name functions such as oget. We have a similar problem in the List library. In a system like Lean, we always have the following family of related functions:
1- A version that assumes the result type is inhabited and uses default a.
2- A version that returns Option a or Except e a.
3- A version that takes a "proof" that guarantees there is no exceptional case. Actually, for the array library, we have two instances of this one: index (using Fin) and a low level idx using USize.
It would be great to have a convention for naming these variants and use it consistently everywhere.
I thought there was some precedent, but the only one I found was foldr1Opt (which I also named, oh well). So I went with getOpt now, which I definitely prefer to some single-letter prefix.
I thought there was some precedent, but the only one I found was foldr1Opt (which I also named, oh well). So I went with getOpt now, which I definitely prefer to some single-letter prefix.
The suffix Opt is better. OCaml uses nth : list a -> int -> a and nth_opt : list a -> int -> option a.
There have been other instances of the problem above in the Lean library. For example, the list library in Lean 2 had https://github.com/leanprover/lean/blob/CADE25/library/data/list/basic.lean:
definition nth : list T → nat → option T
definition inth [h : inhabited T] (l : list T) (n : nat) : T :=
definition ith : Π (l : list T) (i : nat), i < length l → T
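Applying the Opt suffix convention consistently, the List counterparts might look like this (a sketch; the definitions and the name of the Inhabited-based variant are illustrative, only the naming scheme is the point):

universe u

namespace List
variable {α : Type u}

-- 2- the Option-returning version
def getOpt : List α → Nat → Option α
| []       _     := none
| (a :: _) 0     := some a
| (_ :: l) (n+1) := getOpt l n

-- 1- the version that assumes the result type is inhabited
def get [Inhabited α] : List α → Nat → α
| []       _     := default α
| (a :: _) 0     := a
| (_ :: l) (n+1) := get l n

end List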
library/init/data/array/basic.lean
Outdated
@@ -108,9 +117,28 @@ def pop (a : Array α) : Array α :=
@[inline] def iterate (a : Array α) (b : β) (f : Π i : Fin a.sz, α → β → β) : β :=
iterateAux a f 0 b

-@[inline] def foldl (a : Array α) (b : β) (f : α → β → β) : β :=
+@[inline] def foldl (a : Array α) (f : α → β → β) (b : β) : β :=
iterate a b (λ _, f)
This change is consistent with the order used in List.foldl. This is good, but the order in List.foldl is awful for the $ notation. With foldl (a : Array α) (b : β) (f : α → β → β), we can write
array.foldl b $ fun a b, <big-chunk-of-code>
instead of
array.foldl (fun a b, <big-chunk-of-code>) b
Any ideas on how to improve this?
Yeah, parameter ordering is hard. I assume Haskell etc. chose this order for partial applications. One approach is to just hope that everything will be better eventually and we can write
let f (a b) := <big-chunk-of-code> in -- order-independent elaboration to the rescue
array.foldl f b
or maybe even
array.foldl(init := b) $ <big-chunk-of-code> -- more descriptive anyway
array.foldl(init := b) $ -- more descriptive anyway
I really want this :)
I'm not sure it's even the same issue, but the following lines crash the array_test on this branch:
Are you sure you have the latest version?
It doesn't work for me :(. I've rebased this branch and added the changed test.
My mistake, I missed your changes to |
I found the issue. There is a mismatch between
We use a function |
I pushed a fix for the |
@@ -173,7 +179,7 @@ def updateLeading (source : String) : Syntax → Syntax :=
partial def getHeadInfo : Syntax → Option SourceInfo
| (atom a) := a.info
| (ident id) := id.info
-| (rawNode n) := n.args.foldr (λ s r, getHeadInfo s <|> r) none
+| (rawNode n) := n.args.foldl (λ s r, getHeadInfo s <|> r) none
| _ := none
foldr is the faithful translation of the code before partial def. For example, suppose n.args is [c_1, c_2, c_3].
With foldr we get
getHeadInfo c_1 <|> (getHeadInfo c_2 <|> (getHeadInfo c_3 <|> none))
So, we stop the search as soon as a getHeadInfo c_i returns some v.
On the other hand, with foldl we get
getHeadInfo c_3 <|> (getHeadInfo c_2 <|> (getHeadInfo c_1 <|> none))
We will traverse the whole list even if getHeadInfo c_1 returns some v.
Ignore the comment above, I got confused. Lean is strict; both foldr and foldl will traverse the whole list. I was also incorrectly assuming that Option.orelse was marked as [inline]. After marking it with [inline], the compiler generated optimal code in both cases. Moreover, both specializations become tail recursive :)
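For reference, a sketch of the combinator in question (the actual library definition may differ; the relevant part is the @[inline] attribute, which lets the specializer see through <|>):

universe u

@[inline] def Option.orelse {α : Type u} : Option α → Option α → Option α
| (some a) _ := some a   -- first alternative wins
| none     b := b        -- otherwise fall through to the second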
The branch is working now. The timings are indistinguishable from |
I think |
library/init/data/array/basic.lean
Outdated
else pure b

@[inline] def miterate (a : Array α) (b : β) (f : Π i : Fin a.sz, α → β → m β) : m β :=
miterateAux a f 0 b
Should we define iterate in terms of this? I assume that specialization works very well on the id monad :).
Should we define iterate in terms of this? I assume that specialization works very well on the id monad :) .
It should if we assume the compiler is perfect. We should try and check the generated code.
BTW, I think we should define the id monad using
def Id (type : Type u) : Type u := type
Type expressions will look more uniform, and make clear the user intent. Example:
ExceptT e (StateT s Id) a
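A sketch of both pieces together (hypothetical: the Monad instance fields, the helper names, and the way iterate instantiates the monad are assumptions; the point is simply that iterate becomes miterate run in Id):

universe u

def Id (type : Type u) : Type u := type

@[inline] def Id.pure {α : Type u} (a : α) : Id α := a
@[inline] def Id.bind {α β : Type u} (a : Id α) (f : α → Id β) : Id β := f a

-- assumes the remaining Monad fields have default implementations
instance : Monad Id :=
{ pure := @Id.pure, bind := @Id.bind }

-- iterate is then just miterate specialized to the Id monad
-- (an explicit ascription or @-application may be needed to pick m := Id)
@[inline] def iterate (a : Array α) (b : β) (f : Π i : Fin a.sz, α → β → β) : β :=
(miterate a b f : Id β)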
Great, I'll do the change and check the compiler output. I agree with Id.
Okay, the specializations look good. Pushed to master.
The views are still being created in this branch, right? It would be great if we could use this thread to describe alternative designs and motivate the current one.

Suppose we have

node : Kind -> List Syntax -> Syntax

and we write patterns such as

match n with
| node kind1 [c1, c2, c3, c4] := ...
| node kind2 [c1, c2] := ...
| node kind3 [c1, c2, c3, c4, c5] := ...
| ...

The generated code will be bloated with many nested List.casesOn.

@[inline] def withKind1Args {a : Type} (args : Array Syntax) (fn : Syntax -> Syntax -> Syntax -> Syntax -> a) : a :=
fn (args.get 0) (args.get 1) (args.get 2) (args.get 3)

If the default value for

match n with
| node kind1 as := withKind1Args as $ fun c1 c2 c3 c4, ...
| node kind2 as := withKind2Args as $ fun c1 c2, ...
| node kind3 as := withKind3Args as $ fun c1 c2 c3 c4 c5, ...
| ...

This is way faster than the previous approach using Lists. We can make it even faster by using a new primitive Tuple type. Tuples would be like arrays but without support for increasing their size dynamically. That is, we don't have the m_capacity field and save 8 bytes in the object header.
BTW, I have modified |
I've pushed some very simple expander tests with and without views to |
I agree that 0-cost views are probably the only practical solution, but I'd really like to keep the named structure fields for my sanity. I believe we should be able to generate the same 0-cost code with view structures if we make the following adjustments:
The shallow view would still lose quite a bit of type information compared to the current views. We've previously discussed a more lightweight model where fields could be typed by a type like

-- "syntax with a view"
-- NOTE: does not assert the actual kind of its content, which may be `Syntax.missing`
def VSyntax (k : NodeKind) := Syntax
@[inline] def VSyntax.view {k} [HasView k v] : VSyntax k -> v := HasView.view
...

Now we have the same amount of typedness as before, though certainly many more |
I am playing with the tests. Here are some preliminary comments.
People have been writing Lisp macros for a long time without using
We may also create the "update" function that "reuses" the tokens from a given source.
We can automatically generate these simple functions.
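A sketch of what such generated helpers could look like for an if-node, following the withKind1Args pattern from earlier in the thread (hypothetical: the names, signatures, and the child positions 1/3/5 are only inferred from the flipIf code and generated IR shown below):

-- hypothetical accessor: pass the interesting children of an if-node
-- (condition, then-branch, else-branch) to a continuation, skipping the token children
@[inline] def withIfArgs {α : Type} (args : Array Syntax) (fn : Syntax → Syntax → Syntax → α) : α :=
fn (args.get 1) (args.get 3) (args.get 5)

-- hypothetical "update": reuse the existing tokens and only replace the three interesting children
@[inline] def updateIfArgs (args : Array Syntax) (c t e : Syntax) : Array Syntax :=
((args.set 1 c).set 3 t).set 5 e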
I like the low tech solution I described in the previous message.
I am fine with this. For me, it is much more important to make sure I can predict the performance/shape of the generated code. I feel that I can only achieve this with the current approach by using |
I pushed a proof of concept for the low tech views.

def flipIf : Transformer :=
λ n, withIf n $ λ c t e,
isNot c (pure n.val) $ λ c',
pure (updateIf n c' e t)

and here is the generated code:

λ (_x_1 : _obj) (_x_2 : UInt8),
let _x_3 : _obj := _proj.1 _x_1,
_ : _void := _inc _x_3,
_x_4 : _obj := Syntax.Inhabited,
_x_5 : _obj := 1,
_x_6 : _obj := Array.get ◾ _x_4 _x_3 _x_5,
_ : _void := _inc _x_1,
_x_7 : _obj := Option._cnstr.1.0.0 _x_1
in @Syntax.casesOn _x_6
(let _ : _void := _dec _x_1, _ : _void := _dec _x_3, _x_8 : _obj := Except._cnstr.1.0.0 _x_7 in _x_8)
(let _x_9 : _obj := _proj.0 _x_6,
_ : _void := _inc _x_9,
_x_10 : _obj := _proj.1 _x_6,
_ : _void := _inc _x_10,
_ : _void := _dec _x_6,
_x_11 : _obj := notKind,
_x_12 : UInt8 := Lean.Name.decEq _x_9 _x_11,
_ : _void := _dec _x_9
in @Bool.casesOn _x_12
(let _ : _void := _dec _x_1,
_ : _void := _dec _x_10,
_ : _void := _dec _x_3,
_x_13 : _obj := Except._cnstr.1.0.0 _x_7
in _x_13)
(let _ : _void := _dec _x_7,
_x_14 : _obj := _proj.0 _x_1,
_ : _void := _inc _x_14,
_x_15 : _obj := _proj.2 _x_1,
_ : _void := _inc _x_15,
_x_16 : _obj := _reset.3 _x_1,
_x_17 : _obj := 3,
_x_18 : _obj := Array.get ◾ _x_4 _x_3 _x_17,
_x_19 : _obj := 5,
_x_20 : _obj := Array.get ◾ _x_4 _x_3 _x_19,
_x_21 : _obj := Array.get ◾ _x_4 _x_10 _x_5,
_ : _void := _dec _x_10,
_x_22 : _obj := Array.set ◾ _x_3 _x_5 _x_21,
_x_23 : _obj := Array.set ◾ _x_22 _x_17 _x_20,
_x_24 : _obj := Array.set ◾ _x_23 _x_19 _x_18,
_x_25 : _obj := _reuse.1.0.0.0 _x_16 _x_14 _x_24 _x_15,
_x_26 : _obj := Option._cnstr.1.0.0 _x_25,
_x_27 : _obj := Except._cnstr.1.0.0 _x_26
in _x_27))
(let _ : _void := _dec _x_6,
_ : _void := _dec _x_1,
_ : _void := _dec _x_3,
_x_28 : _obj := Except._cnstr.1.0.0 _x_7
in _x_28)
(let _ : _void := _dec _x_6,
_ : _void := _dec _x_1,
_ : _void := _dec _x_3,
_x_29 : _obj := Except._cnstr.1.0.0 _x_7
in _x_29)
I pushed a small experiment where I use the new
There is one disadvantage: we cannot store |
Thanks, this is very helpful. I'm still a little uneasy when I think of how more complex code like this would be written in this style, and in particular how many "type" confusions one would inevitably encounter. How would one even debug which |
I wrote the skeleton for
I have two versions:

(n.case kind_1 $ fun (n : SyntaxNode), withKind1 n $ fun <args>, <code for this case>)
<?>
...
<?>
(n.case kind_m $ fun (n : SyntaxNode), withKindM n $ fun <args>, <code for this case>)

The generated code is reasonable, but it relies on many compiler transformations to get efficient code. For example, I am currently investigating a missing optimization exposed by this idiom.

2- Using a
n.isNode $ fun (k : SyntaxNodeKind) (n : SyntaxNode),
if k == kind_1 then withKind1 n $ fun <args>, <code for this case>
else if k == kind_2 then withKind2 n $ fun <args>, <code for this case>
...
else withKindM n $ fun <args>, <code for this case>
I am not sure whether this is a problem or not. The code above looks reasonable to me, and I would debug it like any other piece of code. Are you concerned with mistakes such as: If debugging the transformer becomes a big problem, we can add a "trace" to |
I inspected the generated code, and the approach using

(n.case kind1 ...)
<?>
(n.case kind2 ...)

generates something like the following
So, each |
One aspect we should not forget is how we want such code to look eventually, e.g. using syntax quotations. Should we use them everywhere possible? What do they need to look like to achieve that? Should we implement them using the current views and then use them to switch to low-tech views, or should we switch before that?
It's not completely thought out yet, but I could imagine the code to look something like this:

def letTransformer' : Transformer :=
λ stx, syntax_match stx with
| `(let $n:ident : $t := $_ in $_) := noExpansion
| `(let $n:ident := $v in $b) := pure `(let $n:ident : _ := $v in $b)
| `(let $n:ident $($b:binder)+ $t:optType := $v in $b) := noExpansion -- TODO
| `(let $p:pattern := $v in $b) := noExpansion -- TODO

which may compile down to something very close to your code given a sufficiently smart

For reference, this is what I could imagine the full code to look like:

def getOptType (stx : Syntax) : Syntax :=
syntax_match stx with
| `(optType|) := `(_)
| `(optType|: $t) := t
def letTransformer' : Transformer :=
λ stx, syntax_match stx with
| `(let $n:ident : $t := $_) := noExpansion
| `(let $n:ident := $v) := pure `(let $n:ident : _ := $v)
| `(let $n:ident $($b:binder)+ $t:optType := $v) :=
pure `(let $n:ident : Π $($b:binder)+, $(getOptType t) := λ $($b:binder)+, $v)
| `(let $p:pattern := $v in $b) := pure `(match $v with $p:pattern := $b)
I agree that a |
BTW, we will need something similar for |
Yes, we should be able to reuse the model for |
Implemented by the new new parser |
Mathlib is crashing with #4006. Here is the stacktrace produced by Kim: ``` * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x100000000000a) * frame #0: 0x00000001066db21c libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 2816 frame #1: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360 frame #2: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192 frame #3: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360 frame #4: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192 frame #5: 0x00000001066dd464 libleanshared.dylib`lean::ir::interpreter::call(lean::name const&, lean::array_ref<lean::object_ref> const&) + 1360 frame #6: 0x00000001066db394 libleanshared.dylib`lean::ir::interpreter::eval_body(lean::object_ref const&) + 3192 frame #7: 0x00000001066df288 libleanshared.dylib`lean::ir::interpreter::stub_m(lean_object**) + 556 frame #8: 0x00000001066d6ee0 libleanshared.dylib`lean_object* lean::ir::interpreter::with_interpreter<lean_object*>(lean::environment const&, lean::options const&, lean::name const&, std::__1::function<lean_object* (lean::ir::interpreter&)> const&) + 320 frame #9: 0x00000001066dee84 libleanshared.dylib`lean::ir::interpreter::stub_m_aux(lean_object**) + 92 frame #10: 0x00000001066deafc libleanshared.dylib`lean::ir::interpreter::stub_9_aux(lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*, lean_object*) + 60 frame #11: 0x00000001066f52a0 libleanshared.dylib`lean_apply_6 + 1748 frame #12: 0x00000001055d1ac8 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10___lambda__2 + 156 frame #13: 0x00000001055d47e8 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10___lambda__2___boxed + 144 frame #14: 0x00000001066f5bcc libleanshared.dylib`lean_apply_7 + 1348 frame #15: 0x00000001055ccccc libleanshared.dylib`l_Lean_Elab_withInfoTreeContext___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__4 + 528 frame #16: 0x00000001066f5b6c libleanshared.dylib`lean_apply_7 + 1252 frame #17: 0x00000001055d1550 libleanshared.dylib`l_Lean_Elab_withLogging___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__6 + 240 frame #18: 0x00000001055d4cb4 libleanshared.dylib`l_Array_forInUnsafe_loop___at___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___spec__10 + 940 frame #19: 0x00000001055d5394 libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___lambda__1 + 60 frame #20: 0x00000001055d5740 libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore___lambda__1___boxed + 148 frame #21: 0x00000001066f11f4 libleanshared.dylib`lean_apply_1 + 840 frame #22: 0x0000000103bce27c libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1 + 24 frame #23: 0x0000000103bce4ec libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1___boxed + 20 frame #24: 0x00000001066f10bc libleanshared.dylib`lean_apply_1 + 528 frame #25: 0x0000000106644260 libleanshared.dylib`lean_profileit + 128 frame #26: 0x0000000103bce3e0 
libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg + 112 frame #27: 0x00000001055d564c libleanshared.dylib`l___private_Lean_Elab_Term_0__Lean_Elab_Term_applyAttributesCore + 268 frame #28: 0x00000001055d6264 libleanshared.dylib`l_Lean_Elab_Term_applyAttributes + 52 frame #29: 0x000000010597b840 libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6___lambda__1 + 740 frame #30: 0x000000010597daf4 libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6___lambda__1___boxed + 124 frame #31: 0x00000001066f65d8 libleanshared.dylib`lean_apply_8 + 1252 frame #32: 0x00000001066f5b6c libleanshared.dylib`lean_apply_7 + 1252 frame #33: 0x0000000104f587b0 libleanshared.dylib`l___private_Lean_Elab_InfoTree_Main_0__Lean_Elab_withSavedPartialInfoContext___at_Lean_Elab_Command_liftTermElabM___spec__2___rarg___lambda__1 + 344 frame #34: 0x0000000104f59ec4 libleanshared.dylib`l___private_Lean_Elab_InfoTree_Main_0__Lean_Elab_withSavedPartialInfoContext___at_Lean_Elab_Command_liftTermElabM___spec__2___rarg + 280 frame #35: 0x0000000104f5af20 libleanshared.dylib`l_Lean_Elab_Command_liftTermElabM___rarg___lambda__1 + 144 frame #36: 0x00000001066f5ab8 libleanshared.dylib`lean_apply_7 + 1072 frame #37: 0x0000000105636090 libleanshared.dylib`l_Lean_Elab_Term_TermElabM_run___rarg + 844 frame #38: 0x0000000104f5b8fc libleanshared.dylib`l_Lean_Elab_Command_liftTermElabM___rarg + 1696 frame #39: 0x000000010597d67c libleanshared.dylib`l_Array_forInUnsafe_loop___at_Lean_Elab_Command_elabAttr___spec__6 + 928 frame #40: 0x000000010597de60 libleanshared.dylib`l_Lean_Elab_Command_elabAttr + 772 frame #41: 0x000000010597e838 libleanshared.dylib`l_Lean_Elab_Command_elabAttr___boxed + 20 frame #42: 0x00000001066f2cd4 libleanshared.dylib`lean_apply_3 + 868 frame #43: 0x0000000104f385f8 libleanshared.dylib`l_Lean_Elab_withInfoTreeContext___at___private_Lean_Elab_Command_0__Lean_Elab_Command_elabCommandUsing___spec__2 + 396 frame #44: 0x0000000104f39e48 libleanshared.dylib`l___private_Lean_Elab_Command_0__Lean_Elab_Command_elabCommandUsing + 484 frame #45: 0x00000001066f2cf0 libleanshared.dylib`lean_apply_3 + 896 frame #46: 0x0000000104f341d4 libleanshared.dylib`l_Lean_withTraceNode___at_Lean_Elab_Command_runLinters___spec__11 + 788 frame #47: 0x00000001066f2d54 libleanshared.dylib`lean_apply_3 + 996 frame #48: 0x00000001066f2cf0 libleanshared.dylib`lean_apply_3 + 896 frame #49: 0x0000000104f40e30 libleanshared.dylib`l_Lean_Elab_withLogging___at_Lean_Elab_Command_elabCommand___spec__2 + 104 frame #50: 0x0000000104f4c51c libleanshared.dylib`l_Lean_Elab_Command_elabCommandTopLevel___lambda__1 + 432 frame #51: 0x00000001066f10e8 libleanshared.dylib`lean_apply_1 + 572 frame #52: 0x0000000103bce27c libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1 + 24 frame #53: 0x0000000103bce4ec libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg___lambda__1___boxed + 20 frame #54: 0x00000001066f10bc libleanshared.dylib`lean_apply_1 + 528 frame #55: 0x0000000106644260 libleanshared.dylib`lean_profileit + 128 frame #56: 0x0000000103bce3e0 libleanshared.dylib`l_Lean_profileitIOUnsafe___rarg + 112 frame #57: 0x0000000104f4fce0 libleanshared.dylib`l_Lean_Elab_Command_elabCommandTopLevel + 1284 frame #58: 0x00000001057d2f30 libleanshared.dylib`l_Lean_Elab_Frontend_elabCommandAtFrontend + 1384 frame #59: 0x00000001057d63b8 libleanshared.dylib`l_Lean_Elab_Frontend_processCommand + 1332 frame #60: 0x00000001057d6e48 libleanshared.dylib`l_Lean_Elab_Frontend_processCommands + 72 
frame #61: 0x00000001057d7248 libleanshared.dylib`l_Lean_Elab_IO_processCommands + 212 frame #62: 0x00000001057d83d0 libleanshared.dylib`l_Lean_Elab_runFrontend___lambda__3 + 76 frame #63: 0x00000001057d96d0 libleanshared.dylib`lean_run_frontend + 2436 frame #64: 0x00000001065e72b4 libleanshared.dylib`lean::run_new_frontend(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, lean::options const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, lean::name const&, unsigned int, lean::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>> const&, unsigned char) + 244 frame #65: 0x00000001065e9c8c libleanshared.dylib`lean_main + 8348 frame #66: 0x0000000184f93f28 dyld`start + 2236 ``` cc @Kha
I've done a first pass through the frontend, which could certainly be improved in some places. Unfortunately, it currently fails with an assertion violation from Array.iterateAux. I haven't taken a close look yet, but it looks like partial leads to some confusion of arguments.