Make :m field of %Deep{} lazy #20
base: master
Conversation
Benchmark
This suggests that I now have O(n log(n)) runtime for the view functions and O(1) for concat! Something is definitely wrong.

Many functions are too lazy now. I need to add back some strictness in certain places. Just like Haskell performance debugging :D
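For context: Elixir has no built-in laziness, so a lazy field is typically modeled as a zero-arity function (a thunk) that is called to force the value. A minimal sketch of the idea (the `Lazy` module name and `delay`/`force` helpers are hypothetical, not part of this PR):

```elixir
defmodule Lazy do
  # Wrap a computation so it is not evaluated until forced.
  def delay(fun) when is_function(fun, 0), do: fun

  # Force a thunk; plain values pass through unchanged.
  def force(thunk) when is_function(thunk, 0), do: thunk.()
  def force(value), do: value
end

# The wrapped computation only runs when the thunk is called:
thunk = Lazy.delay(fn -> Enum.sum(1..100) end)
Lazy.force(thunk)
# => 5050
```

Note there is no memoization here: forcing the same thunk twice recomputes the value, which is one reason too much laziness can cost more than it saves.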
```diff
  def view_l(%Empty{}), do: nil
  def view_l(%Single{monoid: mo, x: x}), do: {x, %Empty{monoid: mo}}

  def view_l(%Deep{l: %One{a: x}, m: m, r: sf}),
-   do: {x, rot_l(m, sf)}
+   do: {x, rot_l(m.(), sf)}
```
Since `m` is being forced here, this is still as strict as before, right?

I guess that to achieve the O(1) in `head`/`tail` (when they are implemented) one would need to indeed return the thunk unevaluated, so that those functions would not force it, and one would need to force it to get a `Tree` back.

It looks like it'd have to be something like this for `head`/`tail`:
```elixir
def view_l(%Deep{l: %One{a: x}, m: m, r: sf}),
  do: {x, fn -> rot_l(m, sf) end}
```
and force `m` inside each function as needed. So that `head` would be something like:
```elixir
def head(t) do
  with {x, _thunk} <- view_l(t) do
    x
  end
end
```
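For symmetry, `tail` would then be the one place where the thunk actually gets forced. A self-contained sketch of the shape (the `Demo.view_l/1` stub below merely fakes the `{head, thunk}` return on plain lists so the example runs; it is not the PR's implementation):

```elixir
defmodule Demo do
  # Stub view_l/1: returns the head plus an unevaluated thunk for the rest.
  def view_l([x | rest]), do: {x, fn -> rest end}

  # head/1 never calls the thunk, so the rest costs nothing here.
  def head(t) do
    with {x, _thunk} <- view_l(t) do
      x
    end
  end

  # tail/1 forces the thunk, paying for the rest only when it is needed.
  def tail(t) do
    with {_x, thunk} <- view_l(t) do
      thunk.()
    end
  end
end

Demo.head([1, 2, 3])
# => 1
Demo.tail([1, 2, 3])
# => [2, 3]
```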
Makes sense, that simplifies the whole story a lot more.
Also, looking at the numbers from #22, it seems that Seq:

Looks like they are doing more work?
Yes, that's probably the overhead of the additional wrapping function.
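To put a number on that wrapping overhead, a micro-benchmark comparing a direct call against a thunked call could look something like this (using Benchee here as an assumption; the repo's actual benchmark setup may differ):

```elixir
value = Enum.to_list(1..1_000)

Benchee.run(%{
  # Baseline: call the function directly.
  "direct" => fn -> length(value) end,
  # Same work, but allocated and invoked through a zero-arity closure.
  "thunked" => fn ->
    thunk = fn -> length(value) end
    thunk.()
  end
})
```

The per-call cost of allocating and invoking the closure is what would show up as the extra work in the Seq numbers.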
Might solve #16. To be sure, we would need some benchmarking though.