This repository has been archived by the owner on Aug 2, 2021. It is now read-only.
Bug when hashing a file: Returned hash count is not always the same #1211
Comments
The problem is a concurrency issue in your […]. It's hard to tell, but perhaps you should ideally also call […].
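The comment above is fragmentary, but the symptom reported in this issue (a reference count that varies between runs of the same split) is characteristic of unsynchronized appends to a shared slice. A minimal, hypothetical Go sketch of that class of bug (not the actual swarm/storage code):

```go
// Hypothetical sketch of the suspected class of bug: several goroutines
// append discovered references to a shared slice with no lock, so
// appends can be lost and the final count differs between runs.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var refs [][]byte // shared result slice, written concurrently

	var wg sync.WaitGroup
	for i := 0; i < 248; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// DATA RACE: append reads and rewrites refs without a lock,
			// so two goroutines can overwrite each other's append.
			refs = append(refs, []byte{byte(n)})
		}(i)
	}
	wg.Wait()

	// Expected 248; under contention this occasionally prints less,
	// mirroring the 246/247/248 counts reported in this issue.
	fmt.Println(len(refs))
}
```

Running a program like this under `go run -race` flags the race immediately.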
Closed by ethereum/go-ethereum#19028.
zelig referenced this issue in ethereum/go-ethereum on Feb 12, 2019:

* swarm/storage: fix HashExplore concurrency bug ethersphere#1211
* swarm/storage: lock as value not pointer
* swarm/storage: wait for to complete
* swarm/storage: fix linter problems
* swarm/storage: append to nil slice
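Those commit bullets ("lock as value not pointer", "wait for to complete", "append to nil slice") describe a standard shape for fixing this kind of race. A hedged sketch of that shape, using the same hypothetical worker structure as above rather than the real swarm/storage code:

```go
// Hypothetical sketch of the fix's shape: guard the result slice with a
// mutex held as a value field, and wait for every worker to complete
// before reading the count.
package main

import (
	"fmt"
	"sync"
)

type explorer struct {
	mu   sync.Mutex // lock as value, not pointer: the zero value is ready to use
	refs [][]byte   // starts nil; append allocates as needed ("append to nil slice")
}

// add takes a pointer receiver so the mutex itself is never copied.
func (e *explorer) add(ref []byte) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.refs = append(e.refs, ref)
}

func main() {
	e := &explorer{}
	var wg sync.WaitGroup
	for i := 0; i < 248; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			e.add([]byte{byte(n)})
		}(i)
	}
	wg.Wait() // wait for all workers to complete before counting

	fmt.Println(len(e.refs)) // now deterministically 248
}
```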
skylenet referenced this issue in holiman/go-ethereum on Feb 19, 2019:

swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (ethereum#19028)

* swarm/storage: fix HashExplore concurrency bug ethersphere#1211
* swarm/storage: lock as value not pointer
* swarm/storage: wait for to complete
* swarm/storage: fix linter problems
* swarm/storage: append to nil slice

(cherry picked from commit 3d22a46)
dshulyak referenced this issue in status-im/go-ethereum on Mar 14, 2019:

swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (ethereum#19028)

* swarm/storage: fix HashExplore concurrency bug ethersphere#1211
* swarm/storage: lock as value not pointer
* swarm/storage: wait for to complete
* swarm/storage: fix linter problems
* swarm/storage: append to nil slice

(cherry picked from commit 3d22a46)
kiku-jw referenced this issue in kiku-jw/go-ethereum on Mar 29, 2019:

swarm/storage: fix HashExplore concurrency bug ethersphere#1211 (ethereum#19028)

* swarm/storage: fix HashExplore concurrency bug ethersphere#1211
* swarm/storage: lock as value not pointer
* swarm/storage: wait for to complete
* swarm/storage: fix linter problems
* swarm/storage: append to nil slice
This problem appeared while working on #1185.

To reproduce it, one can run `TestGetAllReferences` in `swarm/storage/filestore_test.go`, which is currently disabled on `master` due to this bug. When that test splits a 1,000,000-byte file, the returned reference count is sometimes 247, sometimes 248 (which apparently is the correct number), and sometimes even 246.

This seems like a major bug and should be addressed soon.
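A quick way to surface this kind of flakiness, assuming the test is re-enabled locally, is to run it repeatedly under the race detector, along the lines of `go test ./swarm/storage -run TestGetAllReferences -count 20 -race`; a data race report there would confirm the concurrency diagnosis above.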