perf: add parallelism to hashing #411

Merged
merged 2 commits into main from rklaehn/hash-parallelism on Oct 25, 2022
Conversation

rklaehn
Contributor

@rklaehn rklaehn commented Oct 25, 2022

The most expensive thing when adding a file is hashing the content.

This adds parallelism to the hashing by splitting the tree building into two stages: one where the leaves are encoded, and one where the actual tree is built.

[image: parallel_hash]

This makes sure that iroh-cli now uses all available cores and therefore increases performance quite a bit, but we are now limited by the DB.
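
The parallelism follows a common tokio/futures pattern: each chunk becomes a spawn_blocking future, and buffered(hash_par) keeps up to hash_par of those futures in flight while still yielding results in input order. Below is a minimal, self-contained sketch of that pattern, not the PR's actual code: blake3::hash stands in for the real leaf encoding, and the chunk source, parallelism value, and crate set (tokio, futures, anyhow, blake3) are assumptions for illustration.

use futures::stream::{self, StreamExt, TryStreamExt};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // stand-in input: 16 chunks of 1 KiB each
    let chunks: Vec<Vec<u8>> = vec![vec![0u8; 1024]; 16];
    // run as many hashing tasks as there are cores
    let hash_par = std::thread::available_parallelism()?.get();

    let leaf_hashes: Vec<blake3::Hash> = stream::iter(chunks)
        .map(|chunk| {
            // CPU-bound hashing goes onto tokio's blocking thread pool
            tokio::task::spawn_blocking(move || blake3::hash(&chunk))
        })
        // keep up to hash_par hashing tasks in flight, preserving chunk order
        .buffered(hash_par)
        .try_collect()
        .await?;

    println!("hashed {} leaves", leaf_hashes.len());
    Ok(())
}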

@rklaehn rklaehn requested review from ramfox and Arqu October 25, 2022 09:01
tokio::task::spawn_blocking(|| {
        chunk.and_then(|chunk| TreeNode::Leaf(chunk.freeze()).encode())
    })
    .err_into::<anyhow::Error>()
})
.buffered(hash_par)
.map(|x| x.and_then(|x| x));
Contributor Author

Not sure if there is a nicer way to do this than .map(|x| x.and_then(|x| x)).

Contributor

can't you use flatten?

Contributor Author

Now that you mention it... But apparently not.

[image]
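
For context on the flatten question: spawn_blocking wraps the closure's own Result in tokio's join Result, so after err_into the stream items are a nested Result whose outer and inner error types are both anyhow::Error. Result::flatten would collapse that directly, but at the time of this PR it was nightly-only (the result_flattening feature), which is presumably what the screenshot shows; on stable Rust the equivalent is and_then(|inner| inner). A tiny sketch of just that equivalence (my illustration, not code from the PR):

fn flatten_result<T, E>(nested: Result<Result<T, E>, E>) -> Result<T, E> {
    // stable spelling of the (unstable) Result::flatten
    nested.and_then(|inner| inner)
}

fn main() {
    let ok: Result<Result<u32, String>, String> = Ok(Ok(1));
    let inner_err: Result<Result<u32, String>, String> = Ok(Err("boom".to_string()));
    assert_eq!(flatten_result(ok), Ok(1));
    assert_eq!(flatten_result(inner_err), Err("boom".to_string()));
}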

@rklaehn rklaehn mentioned this pull request Oct 25, 2022
@b5
Member

b5 commented Oct 25, 2022

I'm not qualified to review async code, but in smoke testing I did an iroh add -> iroh get loop on a 350MB file, and the result that came back was missing data from the end of the file, roughly 75MB worth. I tested it on main without issue. Will play some more to see if the problem is repeatable.

  1. Any idea what might cause such a thing?
  2. We should probably make sure there's a solid roundtrip test if we don't already have one.
  3. All of this is totally worth it; the speedup is very nice.

@rklaehn
Contributor Author

rklaehn commented Oct 25, 2022

> I'm not qualified to review async code, but in smoke testing I did an iroh add -> iroh get loop on a 350MB file, and the result that came back was missing data from the end of the file, roughly 75MB worth. I tested it on main without issue. Will play some more to see if the problem is repeatable.

Thanks for the test. Trying things out is often more useful than staring at the code. I will check this, and also try to figure out how to test this.

@rklaehn
Contributor Author

rklaehn commented Oct 25, 2022

> I'm not qualified to review async code, but in smoke testing I did an iroh add -> iroh get loop on a 350MB file, and the result that came back was missing data from the end of the file, roughly 75MB worth. I tested it on main without issue. Will play some more to see if the problem is repeatable.
>
>   1. Any idea what might cause such a thing?
>   2. We should probably make sure there's a solid roundtrip test if we don't already have one.
>   3. All of this is totally worth it; the speedup is very nice.

Can you retry once #408 is merged?

@b5
Member

b5 commented Oct 25, 2022

Hrm, still seeing the issue on get when running this branch. Hash results are the same compared to main, but this might be an issue with the content not actually being in the blocks?

@rklaehn
Contributor Author

rklaehn commented Oct 25, 2022

> Hrm, still seeing the issue on get when running this branch. Hash results are the same compared to main, but this might be an issue with the content not actually being in the blocks?

Can you tell me how to reproduce this? I tried with a 300MB random file, and am not seeing this.

Member

@b5 b5 left a comment

I've tested this a bunch locally, and am good with it. I'd like @dignifiedquire to sign off.

@b5 b5 added this to the v0.1.0 milestone Oct 25, 2022
@b5 b5 merged commit 12db7ed into main Oct 25, 2022
@rklaehn rklaehn deleted the rklaehn/hash-parallelism branch October 25, 2022 17:39