Provide ASIC resistant PoW scheme for BSQ swaps #5858
Conversation
Remove (possible draft) 'ProofOfWorkService(Test)', which is a near duplicate of the class 'HashCashService' but is currently unused.
Replace 'BiFunction<T, U, Boolean>' with the primitive specialisation 'BiPredicate<T, U>' in HashCashService & FilterManager. As part of this, replace similar predicate constructs found elsewhere. NOTE: This touches the DAO packages (trivially @ VoteResultService).
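A minimal sketch of the substitution described in this commit (the names and predicate logic here are illustrative, not the actual Bisq code): `BiPredicate<T, U>` returns a primitive `boolean`, so it avoids the boxing that `BiFunction<T, U, Boolean>` incurs on every call.

```java
import java.util.function.BiFunction;
import java.util.function.BiPredicate;

public class PredicateExample {
    // Before: a BiFunction returning a boxed Boolean (hypothetical difficulty check)
    static final BiFunction<byte[], Double, Boolean> oldStyle =
            (hash, difficulty) -> leadingZeros(hash) >= difficulty;

    // After: the boolean specialisation, avoiding boxing and reading as a predicate
    static final BiPredicate<byte[], Double> newStyle =
            (hash, difficulty) -> leadingZeros(hash) >= difficulty;

    // Counts leading zero bits of a big-endian byte array
    static int leadingZeros(byte[] hash) {
        int count = 0;
        for (byte b : hash) {
            if (b == 0) {
                count += 8;
                continue;
            }
            // numberOfLeadingZeros works on 32-bit ints, so subtract the 24 high bits
            count += Integer.numberOfLeadingZeros(b & 0xFF) - 24;
            break;
        }
        return count;
    }
}
```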
Implement the Equihash (https://eprint.iacr.org/2015/946.pdf) algorithm for solving/verifying memory-hard client-puzzles/proof-of-work problems for ASIC-resistant DoS attack protection. The scheme is asymmetric: solving a puzzle is slow and memory-intensive, needing hundreds of kB to MBs of memory, but solution verification is instant. Instead of a single 64-bit counter/nonce, as in the case of Hashcash, Equihash solutions are larger objects ranging from tens of bytes to a few kB, depending on the puzzle parameters used. These need to be stored in their entirety in the proof-of-work field of each offer payload. Include logic for fine-grained difficulty control in Equihash with a double-precision floating point number. This is based on lexicographic comparison with a target hash, as in Bitcoin, instead of just counting the number of leading zeros of a hash. The code is unused at present. Also add some simple unit tests.
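The target-comparison idea above can be sketched as follows (an assumption-laden illustration, not the Bisq implementation): a hash meets a linear-scale difficulty `d` when, read as an unsigned big-endian integer, it is below `2^256 / d`. Fixed-point scaling of the double lets fractional difficulties still move the target.

```java
import java.math.BigInteger;

public class DifficultyTarget {
    static final BigInteger TWO_POW_256 = BigInteger.ONE.shiftLeft(256);

    // Derive the comparison target from a double difficulty: target = 2^256 / d,
    // computed in 16-bit fixed point so fractional difficulties take effect.
    static BigInteger targetFor(double difficulty) {
        long d = Math.round(difficulty * (1 << 16));
        return TWO_POW_256.shiftLeft(16).divide(BigInteger.valueOf(d));
    }

    // Lexicographic (unsigned big-endian) comparison against the target,
    // as in Bitcoin-style proof-of-work checks.
    static boolean meetsDifficulty(byte[] hash, double difficulty) {
        return new BigInteger(1, hash).compareTo(targetFor(difficulty)) < 0;
    }
}
```

A hash with n leading zero bits meets any difficulty up to 2^n, recovering the leading-zeros scheme as the power-of-two special case.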
Provide a (vastly cut down) drop-in replacement for the Guava multimap instance 'indexMultimap', of type 'ListMultimap<Integer, Integer>', used to map table row indices to block values, to detect collisions at a given block position (that is, in a given table column). The replacement stores (multi-)mappings from ints to ints in a flat int array, only spilling over to a ListMultimap if more than 4 values are added for a given key. This vastly reduces the amount of boxing and memory usage when running 'Equihash::findCollisions' to build up the next table as part of Wagner's algorithm.
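The flat-array-with-overspill layout might look like the following minimal sketch (illustrative only; the real class is Bisq's 'IntListMultimap', and a plain `HashMap` stands in here for the Guava `ListMultimap` overspill):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IntListMultimapSketch {
    static final int SLOTS = 4;                 // inline capacity per key
    final int[] counts;                         // values stored inline per key
    final int[] values;                         // SLOTS inline values per key
    final Map<Integer, List<Integer>> overspill = new HashMap<>();

    IntListMultimapSketch(int keyUpperBound) {
        counts = new int[keyUpperBound];
        values = new int[keyUpperBound * SLOTS];
    }

    // Store inline while there is room; only the 5th+ value per key boxes.
    void put(int key, int value) {
        if (counts[key] < SLOTS) {
            values[key * SLOTS + counts[key]++] = value;
        } else {
            overspill.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }

    // Note: lookups into the overspill map must use the KEY, not the iteration
    // index -- mixing those up was the bug fixed later in this PR.
    List<Integer> get(int key) {
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i < counts[key]; i++)
            result.add(values[key * SLOTS + i]);
        result.addAll(overspill.getOrDefault(key, List.of()));
        return result;
    }
}
```

Since most keys see at most a handful of collisions, nearly all traffic stays in the two flat int arrays.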
Manually iterate over colliding table rows using a while loop and a custom 'PrimitiveIterator.OfInt' implementation, instead of a foreach lambda called on an IntStream, in 'Equihash::findCollisions'. Profiling shows that this results in a slight speedup.
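The shape of that change, in a self-contained sketch (a simple range iterator stands in for the real colliding-row iterator): `PrimitiveIterator.OfInt` exposes `nextInt()`, so the while-loop consumes primitive ints directly with no lambda dispatch or boxing.

```java
import java.util.PrimitiveIterator;

public class OfIntExample {
    // Custom OfInt over [from, to) -- illustrative stand-in for an iterator
    // over colliding table rows.
    static PrimitiveIterator.OfInt rangeIterator(int from, int to) {
        return new PrimitiveIterator.OfInt() {
            int next = from;
            public boolean hasNext() { return next < to; }
            public int nextInt() { return next++; }
        };
    }

    // Plain while-loop consumption, replacing IntStream.range(...).forEach(...)
    static int sum(PrimitiveIterator.OfInt it) {
        int total = 0;
        while (it.hasNext())
            total += it.nextInt();  // nextInt() avoids the boxing of next()
        return total;
    }
}
```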
Run the initial XorTable fill-up in 'Equihash::computeAllHashes' in parallel, using a parallel stream, to get an easy speedup. (The solver spends about half its time computing BLAKE2b hashes before iteratively building tables of partial collisions using 'Equihash::findCollisions'.) As part of this, replace the use of 'java.nio.ByteBuffer' array wrapping in 'Utilities::(bytesToIntsBE|intsToBytesBE)' with manual for-loops, as profiling reveals an unexpected bottleneck in the former when used in a multithreaded setting. (Lock contention somewhere in unsafe code?)
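The manual loop replacement might look like this sketch (method names follow the commit message; the bodies here are a plain-Java illustration of big-endian packing, not the actual Bisq code):

```java
public class BigEndianCodec {
    // Big-endian bytes -> ints, with explicit shifts instead of
    // ByteBuffer.wrap(bytes).asIntBuffer() (the profiled bottleneck).
    static int[] bytesToIntsBE(byte[] bytes) {
        int[] ints = new int[bytes.length / 4];
        for (int i = 0; i < ints.length; i++) {
            int j = i * 4;
            ints[i] = (bytes[j] & 0xFF) << 24 | (bytes[j + 1] & 0xFF) << 16
                    | (bytes[j + 2] & 0xFF) << 8 | (bytes[j + 3] & 0xFF);
        }
        return ints;
    }

    // The inverse: ints -> big-endian bytes.
    static byte[] intsToBytesBE(int[] ints) {
        byte[] bytes = new byte[ints.length * 4];
        for (int i = 0; i < ints.length; i++) {
            int v = ints[i], j = i * 4;
            bytes[j] = (byte) (v >>> 24);
            bytes[j + 1] = (byte) (v >>> 16);
            bytes[j + 2] = (byte) (v >>> 8);
            bytes[j + 3] = (byte) v;
        }
        return bytes;
    }
}
```

Both loops are free of shared state, so they are safe to call from the parallel stream without contention.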
Add a numeric version field to the 'ProofOfWork' protobuf object, along with a list of allowed version numbers, 'enabled_pow_versions', to the filter. The versions are taken to be in order of preference from most to least preferred when creating a PoW, with an empty list signifying use of the default algorithm only (that is, version 0: Hashcash). An explicit list is used instead of an upper & lower version bound, in case a new PoW algorithm (or changed algorithm params) turns out to provide worse resistance than an earlier version. (The fields are unused for now, to be enabled in a later commit.)
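The selection rule described above reduces to a few lines (a sketch with assumed method names, not the Bisq API): take the head of the preference list when creating a PoW, and treat an empty list as "version 0 only".

```java
import java.util.List;

public class PowVersionSelection {
    // First entry of the filter's list is the most preferred version;
    // an empty list means the default algorithm only (version 0: Hashcash).
    static int preferredVersion(List<Integer> enabledPowVersions) {
        return enabledPowVersions.isEmpty() ? 0 : enabledPowVersions.get(0);
    }

    // Any listed version verifies; with an empty list, only version 0 does.
    static boolean isVersionAllowed(List<Integer> enabledPowVersions, int version) {
        return enabledPowVersions.isEmpty() ? version == 0
                : enabledPowVersions.contains(version);
    }
}
```

An explicit list rather than a version range means a later algorithm that turns out to be weaker can simply be dropped from the list without also banning every version above or below it.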
Provide a utility method, 'Equihash::adjustDifficulty', to linearise and normalise the expected time taken to solve a puzzle, as a function of the provided difficulty, by taking into account the fact that there could be 0, 1, 2 or more puzzle solutions for any given nonce. (Wagner's algorithm is supposed to give 2 solutions on average, but the observed number is fewer, possibly due to duplicate removal.) For tractability, assume that the solution count has a Poisson distribution, which seems to have good agreement with the tests. Also add some (disabled) benchmarks to EquihashTest. These reveal an Equihash-90-5 solution time of ~146ms per puzzle per unit difficulty on a Core i3 laptop, with a verification time of ~50 microseconds.
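Under the Poisson assumption above, the adjustment can be derived as follows (an illustrative reconstruction, not necessarily the exact Bisq formula): if solutions per nonce are Poisson with mean 2 and each solution independently passes a difficulty-d' target check with probability 1/d', then a nonce succeeds with probability 1 − e^(−2/d'). Choosing d' so that this equals 1/d makes the expected number of nonces tried, and hence the solve time, linear in the requested difficulty d.

```java
public class AdjustDifficulty {
    static final double MEAN_SOLUTIONS_PER_NONCE = 2.0;  // Wagner's algorithm average

    // Solve 1 - e^(-2/d') = 1/d for d', giving d' = 2 / -log1p(-1/d).
    static double adjustDifficulty(double realDifficulty) {
        if (realDifficulty <= 1.0)
            return 1.0;
        return Math.max(MEAN_SOLUTIONS_PER_NONCE / -Math.log1p(-1.0 / realDifficulty), 1.0);
    }
}
```

For example, a requested difficulty of 2 maps to an internal target difficulty of about 2.89, at which a nonce yields a passing solution exactly half the time.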
Add an abstract base class, 'ProofOfWorkService', for the existing PoW implementation 'HashCashService' and a new 'EquihashProofOfWorkService' PoW implementation based on Equihash-90-5 (which has 72 byte solutions & 5-10 MB peak memory usage). Since the current 'ProofOfWork' protobuf object only provides a 64-bit counter field to hold the puzzle solution (as that is all Hashcash requires), repurpose the 'payload' field to hold the Equihash puzzle solution bytes, with the 'challenge' field equal to the puzzle seed: the SHA256 hash of the offerId & makerAddress. Use a difficulty scale factor of 3e-5 (derived from benchmarking) to try to make the average Hashcash & Equihash puzzle solution times roughly equal for any given log-difficulty/numLeadingZeros integer chosen in the filter. NOTE: An empty enabled-version-list in the filter defaults to Hashcash (= version 0) only. The new Equihash-90-5 PoW scheme is version 1.
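The hierarchy described above might have roughly this shape (method names and signatures here are assumptions for illustration, not the actual Bisq API; a trivial stub stands in for the Hashcash implementation):

```java
public abstract class PowServiceSketch {
    private final int version;  // 0 = Hashcash, 1 = Equihash-90-5

    protected PowServiceSketch(int version) {
        this.version = version;
    }

    public int getVersion() {
        return version;
    }

    // Each implementation solves and verifies puzzles in its own way.
    public abstract byte[] mint(byte[] payload, byte[] challenge, double difficulty);

    public abstract boolean verify(byte[] payload, byte[] challenge, double difficulty,
                                   byte[] solution);

    // Trivial stand-in for the version-0 (Hashcash) service, for demonstration only.
    static PowServiceSketch version0Stub() {
        return new PowServiceSketch(0) {
            public byte[] mint(byte[] payload, byte[] challenge, double difficulty) {
                return new byte[8];  // Hashcash solutions are a 64-bit counter
            }

            public boolean verify(byte[] payload, byte[] challenge, double difficulty,
                                  byte[] solution) {
                return solution.length == 8;
            }
        };
    }
}
```

Keeping the version on the base class lets the caller pick a service by matching the filter's enabled-version list, without knowing which algorithm sits behind it.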
Force-pushed from 3a25b7c to 0c94e23
Change the type of the 'difficulty' field in the Filter & ProofOfWork proto objects from int32/bytes to double and make it use a linear scale, in place of the original logarithmic scale which counts the (effective) number of required zeros. This allows fine-grained difficulty control for Equihash, though for Hashcash it simply rounds up to the nearest power of 2 internally. NOTE: This is a breaking change to PoW & filter serialisation (unlike the earlier PR commits), as the proto field version nums aren't updated.
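The round-up for Hashcash can be sketched like this (an assumed helper, not the Bisq code); bit-twiddling on the ceiling of the difficulty avoids the floating-point log error that `ceil(log2(d))` would hit at exact powers of two:

```java
public class HashcashRounding {
    // Hashcash expresses difficulty only as a leading-zero count, so a
    // linear-scale difficulty is rounded up internally to the nearest power
    // of two: numLeadingZeros = ceil(log2(difficulty)).
    static int toNumLeadingZeros(double difficulty) {
        if (difficulty <= 1.0)
            return 0;
        return 64 - Long.numberOfLeadingZeros((long) Math.ceil(difficulty) - 1);
    }
}
```

So 262144 (= 2^18) maps to 18 leading zeros exactly, while anything even slightly above it rounds up to 19.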
@stejbac The difficulty value to be set is 2^x, e.g. 2^18 = 262144 means 18 leading zeros for Hashcash. It is roughly the number of iterations for Hashcash. Values in the range of 300 000-800 000 produce a PoW duration of about 0.5-4 sec on my machine. Could we add some documentation to the Filter so that the filter maintainer quickly finds some guidance when setting or changing the values?
Remove all the 'challengeValidation', 'difficultyValidation' and 'testDifficulty' BiPredicate method params from 'HashCashService' & 'ProofOfWorkService', to simplify the API. These were originally included to aid testing, but turned out to be unnecessary. Patches committed on behalf of @chimp1984.
@chimp1984 I've just pushed those two patches now (as a single commit). Yes, the list of enabled version numbers is in order of preference, so the first is preferred when creating PoWs and all listed versions are allowed when verifying PoWs. Adding Equihash to the list (in any order) won't cause Hashcash PoWs to be recreated, unless Hashcash is also removed from the list. Generation times for Hashcash & Equihash are intended to be roughly the same for any given difficulty (at least when it's a power of two), with a difficulty of 2^n meaning the difficulty of finding more than n leading zeros for Hashcash (or equivalently finding a hash with exactly n leading zeros), so 2 * 2^n Hashcash iterations on average, i.e. 524288 average iterations for a difficulty of 2^18 = 262144. I can attempt to update the BSQ Swaps wiki, as the information there about the difficulty would be out of date with this PR.
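The arithmetic in that comment, spelled out (illustrative helper names): a random hash has exactly n leading zero bits with probability 2^-(n+1), so the expected iteration count at difficulty 2^n is 2^(n+1), i.e. twice the difficulty.

```java
public class HashcashIterations {
    // Expected Hashcash iterations at a power-of-two difficulty d = 2^n
    // is 2^(n+1) = 2 * d.
    static long expectedIterations(long powerOfTwoDifficulty) {
        return 2 * powerOfTwoDifficulty;
    }

    // P(exactly n leading zero bits) = 2^-(n+1): n zeros, then a one bit.
    static double probExactlyNLeadingZeros(int n) {
        return Math.pow(2, -(n + 1));
    }
}
```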
Fix a trivial bug in the iterator returned by 'IntListMultimap::get', caused by mistaken use of the iterator index in place of the key when doing lookups into the overspill map. This was causing puzzle solutions to be invalid about 3% of the time, as well as substantially reducing the average number of solutions found per nonce. As the fix increases the mean solution count per nonce to the correct value of 2.0 predicted by the paper (regardless of puzzle params k & n), inline the affected constants to simplify 'Equihash::adjustDifficulty'.
I just discovered a trivial bug in the iterator returned by 'IntListMultimap::get'.
common/src/main/java/bisq/common/crypto/EquihashProofOfWorkService.java
@stejbac Is there a commit outstanding or is the PR ready for merge?
1. Reorder the PoW fields in the 'Filter' proto by field index, instead of contextually. 2. Deduplicate expression for 'pow' & replace if-block with boolean op to simplify 'FilterManager::isProofOfWorkValid'. 3. Avoid slightly confusing use of null char as a separator to prevent hashing collisions in 'EquihashProofOfWorkService::getChallenge'. Use comma separator and escape the 'itemId' & 'ownerId' arguments instead. (based on PR bisq-network#5858 review comments)
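Point 3 above (separator escaping instead of a null-char separator) can be sketched as follows (hypothetical helper names): escaping the separator inside each argument before joining guarantees that distinct argument pairs can never collapse to the same challenge string.

```java
public class ChallengeEscaping {
    // Escape backslashes first, then the comma separator, so the mapping
    // from (itemId, ownerId) pairs to strings is injective.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace(",", "\\,");
    }

    static String challengeString(String itemId, String ownerId) {
        return escape(itemId) + "," + escape(ownerId);
    }
}
```

Without the escaping, ("a,b", "c") and ("a", "b,c") would both yield "a,b,c" and hash to the same challenge.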
Avoid repurposing the 'ProofOfWork.payload' field for Equihash puzzle solutions, as that may be of later use in interactive PoW schemes such as P2P network DoS protection (where the challenge may be a random nonce instead of derived from the offer ID). Instead, make the payload the UTF-8 bytes of the offer ID, just as with Hashcash. Also, make the puzzle seed the SHA-256 hash of the payload concatenated with the challenge, instead of just the 256-bit challenge on its own, so that the PoW is tied to a particular payload and cannot be reused for other payloads in the case of future randomly chosen challenges.
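The seed derivation described above, in a self-contained sketch (illustrative names; the real code lives in Bisq's Equihash PoW service):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PuzzleSeed {
    // Seed = SHA-256(payload || challenge): binds the PoW to one payload, so a
    // solution cannot be reused with a future randomly chosen challenge.
    static byte[] puzzleSeed(byte[] payload, byte[] challenge) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            sha256.update(payload);
            sha256.update(challenge);
            return sha256.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // SHA-256 is always available
        }
    }

    // The payload is the UTF-8 bytes of the offer ID, just as with Hashcash.
    static byte[] payloadFor(String offerId) {
        return offerId.getBytes(StandardCharsets.UTF_8);
    }
}
```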
utACK
utACK - based on #5858 (review)
This PR adds an implementation of the Equihash memory-hard Proof of Work scheme for DoS/spam attack protection, to replace the present simple Hashcash scheme. It also adds a simple versioning system to make it easier to migrate from one algorithm to another in case of failure to provide adequate DoS attack protection, that is, if an attacker spamming the BSQ swap offer book has too much of a speed/efficiency advantage over an honest user, e.g. due to use of FPGAs, GPUs, etc.
The Equihash scheme is asymmetric, meaning that producing a PoW by solving a hashing puzzle is slow and consumes up to MBs of memory, but verification is instant. The puzzle solutions are larger than those of the present Hashcash scheme (where the solution is just a 64-bit counter/nonce) but still fairly small: tens of bytes to a few kB in size, depending on the puzzle parameters. So they shouldn't bloat the BSQ swap offer payloads significantly.
--
Note that one of the commits (5da8df2) touches the DAO packages (VoteResultService), although the code change is trivial.
Also, the last commit (e0595aa) makes a breaking change to the proto serialisation, by changing the Filter/PoW difficulty field to a linear-scale floating-point number -- that commit isn't essential to the PR, though it would cause less disruption to include it before the 1.8.0 release.