Conversation
Cool stuff. Excited to see benchmarks for this!
ethcore/private-tx/src/lib.rs (outdated)

```rust
use std::time::Duration;
use ethereum_types::{H128, H256, U256, Address};
use hash::keccak;
use im::{HashMap as IMHashMap};
```
maybe just `use im::HashMap as IMHashMap;`
ethcore/src/state/account.rs (outdated)

```diff
@@ -30,6 +30,7 @@ use trie::{Trie, Recorder};
 use ethtrie::{TrieFactory, TrieDB, SecTrieDB, Result as TrieResult};
 use pod_account::*;
 use rlp::{RlpStream, encode};
+use im::{HashMap as IMHashMap};
```
same here
Do you plan to add benchmarks in this PR or in a separate one? The potential performance impact is my only concern at this point.
I agree with David that since the whole point of switching is performance optimization, it would be nice to have some benchmarks to justify the changes. Also note that `im` has a thread-unsafe version, which should be faster.
Changes LGTM, but I'd also like to see some benchmarks. Also, as @ordian mentioned, maybe we can switch to the thread-unsafe version, since we won't be removing any locks at this point?
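For reference, the thread-unsafe variant is presumably the separate `im-rc` crate, which exposes the same API but uses `Rc` instead of `Arc` internally. A minimal sketch of what the switch would look like -- the version number is illustrative, not one this PR pins:

```rust
// Cargo.toml:  im-rc = "12"   (instead of  im = "12")
// im-rc is the Rc-based, single-threaded build of the same crate.
use im_rc::HashMap;

fn main() {
    let mut changes: HashMap<u64, u64> = HashMap::new();
    changes.insert(1, 42);

    // Clone is still O(1) structural sharing, but without the
    // atomic reference-counting overhead of the Arc-based `im`.
    let snapshot = changes.clone();
    assert_eq!(snapshot.get(&1), Some(&42));
}
```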
Wow, looks like this is the first usage of persistent data structures in parity-ethereum! Would be great to take a look at some benchmarks too! 👍
```diff
-for (k, v) in self.storage_changes.drain() {
+let old_storage_changes = {
+    let mut tmp = im::HashMap::new();
+    ::std::mem::swap(&mut tmp, &mut self.storage_changes);
```
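For context, a self-contained sketch of this swap-instead-of-drain pattern (presumably needed because `im::HashMap` lacks `drain`); the `Account` struct and the `u64` key/value types here are simplified stand-ins, not the real ones:

```rust
use im::HashMap;

struct Account {
    storage_changes: HashMap<u64, u64>,
}

impl Account {
    /// Take all pending changes, leaving an empty map behind.
    /// `im::HashMap` has no `drain`, so we swap in a fresh map instead.
    fn take_changes(&mut self) -> HashMap<u64, u64> {
        let mut tmp = HashMap::new();
        ::std::mem::swap(&mut tmp, &mut self.storage_changes);
        tmp
    }
}

fn main() {
    let mut account = Account { storage_changes: HashMap::new() };
    account.storage_changes.insert(1, 42);

    let old = account.take_changes();
    assert!(account.storage_changes.is_empty());

    // The old map can now be consumed just like `drain()` would allow.
    for (k, v) in old {
        println!("{} -> {}", k, v);
    }
}
```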
This is not quite related to this specific PR, but I just wonder: what will happen if it fails to commit? Should we be concerned about this?
Then that error will be propagated out and cause the block import to fail. I think we handle that correctly in `Client`.
I did a benchmark on this, and it's actually slower. I think the reason is that most `storage_changes` maps are really small, so fully cloning a small std `HashMap` is cheaper than the constant per-operation overhead of the persistent structure.

So for the short term, I think it's okay to keep the current state checkpointing logic as is -- I don't think it's part of our current performance bottleneck.
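A rough micro-benchmark along these lines -- a sketch only, not the one actually run here; the entry count and iteration count are made up, an `im` dependency is assumed, and `std::hint::black_box` needs a recent toolchain:

```rust
use std::time::Instant;

const ENTRIES: u64 = 8;          // storage_changes maps are usually tiny
const CHECKPOINTS: u32 = 1_000_000;

fn main() {
    let std_map: std::collections::HashMap<u64, u64> =
        (0..ENTRIES).map(|i| (i, i)).collect();
    let im_map: im::HashMap<u64, u64> = (0..ENTRIES).map(|i| (i, i)).collect();

    // std: O(n) clone per checkpoint, but cheap inserts afterwards.
    let t = Instant::now();
    for i in 0..CHECKPOINTS {
        let mut c = std_map.clone();
        c.insert(u64::from(i) + ENTRIES, 0);
        std::hint::black_box(&c);
    }
    println!("std::HashMap: {:?}", t.elapsed());

    // im: O(1) clone, but each insert pays HAMT + refcount overhead,
    // which dominates when the map only has a handful of entries.
    let t = Instant::now();
    for i in 0..CHECKPOINTS {
        let mut c = im_map.clone();
        c.insert(u64::from(i) + ENTRIES, 0);
        std::hint::black_box(&c);
    }
    println!("im::HashMap:  {:?}", t.elapsed());
}
```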
Thank you for trying this out; these things can be subtle! :)
Is it true that while most might be really small, there are some that are really big?
Yes, that's true. Some can indeed be big.
rel #9427
This uses `im::HashMap` for `Account::storage_changes`. In the future we should refactor to use PDS for other account changes, but right now I'm still figuring out some entanglement related to the `state` and `state_db` modules.

The previous bottleneck was in `Account::clone_dirty`, where we did a full clone of all changes at each checkpoint. This PR relieves that issue. The complexity comparison is as follows (see the sketch after this list):

- insert: O(1) -> O(log n)
- lookup: O(1) -> O(log n)
- clone: O(n) -> O(1)

Note that O(log n) is actually a small number -- if a storage uses up all 256-bit key variations and we take the complexity to be O(log2(n)), it will only take at most 256 hops.

I'm still thinking about how to do a benchmark on this change, but haven't thought of a good way yet.