Add hashWangYi1
#13823
Conversation
This PR is an alternative to the stride double hashing part of recent sets & tables changes, which has still been causing bugs over a month later (e.g., two days ago: #13794) and still carries several "figure this out" implementation-question comments (see just the diffs of this PR). This topic has been discussed in many places: #13393 #13418 #13440 #13794. Alternative/non-mandatory stronger integer hashes (or, vice-versa, opt-in identity hashes) are a better solution that is more general (no illusion of one hard-coded sequence solving all problems) while retaining the virtues of linear probing, such as cache obliviousness and age-less tables under delete-heavy workloads (still untested after a month of this change). The only real solution for truly adversarial keys is a hash keyed off of data unobservable to attackers. That all fits better with a few families of user-pluggable/define-switchable hashes, which can be provided in a separate PR more about `hashes.nim`. This PR carefully preserves the better (but still hard-coded!) probing of `intsets` and other recent fixes, like `move` annotations, hash-order-invariant tests, the `intsets.missingOrExcl` fix, and the move of `rightSize` into `hashcommon.nim`.
This is an alternate resolution to #13393 (which arguably could be resolved outside the stdlib).

Add version 1 of Wang Yi's hash specialized to 8-byte integers. This gives simple help to users having trouble with overly colliding hash(key)s. I.e.,

A) `import hashes; proc hash(x: myInt): Hash = hashWangYi1(int(x))` in the instantiation context of a `HashSet` or `Table`, or

B) more globally, compile with `nim c -d:hashWangYi1`.

No hash can be all things to all use cases, but this one is A) vetted to scramble well by the SMHasher test suite (a necessarily limited but far more thorough test than prior proposals here), B) only a few ALU ops on many common CPUs, and C) possesses an easy fallback via "grade school multi-digit multiplication" for weaker deployment contexts.

Some people might want to stampede ahead unbridled, but my view is that a good plan is to:

A) include this in the stdlib for a release or three to let people try it on various key sets nim-core could realistically never access/test (maybe mentioning it in the changelog so people actually try it out),

B) have them report problems (if any),

C) if all seems good, make the stdlib more novice-friendly by adding `hashIdentity(x) = x` and changing the default to `hash() = hashWangYi1`, with some `when defined` rearranging so users can `-d:hashIdentity` if they want the old behavior back.

This plan is compatible with any number of competing integer hashes if people want to add them. I would strongly recommend they all *at least* pass the SMHasher suite, since the idea here is to become more friendly to novices who do not generally understand hashing failure modes.
I should perhaps have said, if it isn't obvious, that if users do have a secret key, they can merge that into the hash in some instantiation context like:

```nim
import hashes, sets, mysecret

proc hash(x: int): Hash =
  hashWangYi1(x + mysecret.secret)
```

Things can be made stronger than that using new secrets/salt on some per-resize/per-... basis. The best attack resilience requires more invasive changes, though: actually detecting and then mitigating the attack somehow. It's probably wisest to get experience with that in ...
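To make the keyed idea concrete, here is a hedged Python sketch (the mixer mirrors the WangYi1 constants from this PR, emulated with masking since Python ints are unbounded; `SECRET` is a stand-in for whatever `mysecret.secret` would supply, not a real API):

```python
import secrets

MASK64 = (1 << 64) - 1
P0 = 0xa0761d6478bd642f
P1 = 0xe7037ed1a0b428db
P5x8 = 0xeb44accab455d165 ^ 8

def hi_xor_lo(a: int, b: int) -> int:
    """XOR of the high and low 64-bit halves of the 128-bit product a*b."""
    p = a * b
    return ((p >> 64) ^ p) & MASK64

def hash_wang_yi1(x: int) -> int:
    """WangYi1-style mix of a 64-bit integer."""
    return hi_xor_lo(hi_xor_lo(P0, (x & MASK64) ^ P1), P5x8)

# A fresh per-process secret, unobservable to attackers.
SECRET = secrets.randbits(64)

def keyed_hash(x: int) -> int:
    """Merge the secret into the key before mixing."""
    return hash_wang_yi1((x + SECRET) & MASK64)
```

The point is only that the secret enters before the mix, so an attacker who cannot observe `SECRET` cannot precompute colliding keys.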
I like it but IMO we can already make it the new default and emulate the old hashing via ... Needs tests, and a changelog.md entry.
Do the arithmetic in doubled-up 64-bit, but truncate the hash to the lower 32 bits, then still return `uint64` so the type stays the same. So: a type-correct but truncated hash value. Update `thashes.nim` as well.
lib/pure/hashes.nim
Outdated
```nim
const P0 = 0xa0761d6478bd642f'u64
const P1 = 0xe7037ed1a0b428db'u64
const P5x8 = 0xeb44accab455d165'u64 xor 8'u64
Hash(hiXorLo(hiXorLo(P0, uint64(x) xor P1), P5x8))
```
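As a hedged cross-check of the constants in this diff, the same mixing can be written out in Python, where unbounded integers make the 128-bit product exact and explicit masking emulates 64-bit wraparound (a reconstruction for illustration, not the Nim source):

```python
# Sketch of the hashWangYi1 mixing from the diff above.
MASK64 = (1 << 64) - 1
P0 = 0xa0761d6478bd642f
P1 = 0xe7037ed1a0b428db
P5x8 = 0xeb44accab455d165 ^ 8   # P5 xor 8, per the diff

def hi_xor_lo(a: int, b: int) -> int:
    """XOR the high and low 64-bit halves of the 128-bit product a*b."""
    p = a * b                    # exact 128-bit product in Python
    return ((p >> 64) ^ p) & MASK64

def hash_wang_yi1(x: int) -> int:
    return hi_xor_lo(hi_xor_lo(P0, (x & MASK64) ^ P1), P5x8)
```

On a 32-bit `Hash`, converting this 64-bit result down is exactly where the range issue below arises.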
This should be a `cast[Hash]`, I think.
Ok. I'll try that. Yeah I was getting i386/32-bit errors in the CI but there was no line number in the range exception. So, I was kind of guessing my way around. Thanks for the pointer! Pushed an attempt to fix. We'll see how it goes.
Well, that `cast` instead of convert seems to have fixed the Linux i386. Any ideas why `tests/closure/tclosure.nim` would be failing on the CI? It works for me locally on a Linux amd64 system, but seems to stop between the end of `block doNotation` and `block tforum`. My guess would be `block fib50`, since that engages the new `hash()`, but I don't get an assertion error from the CI.
Well, though that `tclosure.nim` failure had repeated quite a few times, it now appears to have worked, and Linux i386 passes all tests. The testament `nim pcat collections` seems to be infinite-looping, holding up other people's CIs, and I am unsure why; I also do not see how to kill my CI jobs in the UI. So you should kill them, which I think you (or someone) also did yesterday.
Also, before merging this we still need to decide on a `defined(js)` branch. It seems the arithmetic JS does creates zero output hashes with `hiXorLoFallback64`. Over at https://github.com/c-blake/adix in `althash.nim`, I have a `hiXorLoFallback32` in 32-bit arithmetic that gives bit-identical hashes with the C backend, but different hashes with the JS backend (not sure why, but it may be fixable); it also uses 16 multiplications and so is probably kind of slow.
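The "grade school" idea behind such a fallback can be sketched as follows (a hypothetical Python reconstruction, not the actual `althash.nim` code): split each 64-bit operand into 32-bit limbs, form the four partial products, and recombine with carries. The 16-multiplication count mentioned above presumably comes from using smaller limbs so every partial product stays within JS `Number`'s exact-integer range; the 32-bit-limb version below shows the same recombination idea.

```python
MASK32 = (1 << 32) - 1
MASK64 = (1 << 64) - 1

def hi_xor_lo_fallback(a: int, b: int) -> int:
    """Grade-school 64x64 -> 128-bit multiply via 32-bit limbs,
    then XOR of the high and low 64-bit halves of the product."""
    aL, aH = a & MASK32, a >> 32
    bL, bH = b & MASK32, b >> 32
    # Four partial 32x32 -> up-to-64-bit products.
    ll = aL * bL
    lh = aL * bH
    hl = aH * bL
    hh = aH * bH
    # Recombine: a*b = (hh << 64) + ((lh + hl) << 32) + ll, with carries.
    mid = hl + lh + (ll >> 32)
    lo = ((mid & MASK32) << 32) | (ll & MASK32)
    hi = hh + (mid >> 32)
    return (hi ^ lo) & MASK64
```

A deployment that lacks a native 128-bit multiply can substitute this for the wide-multiply path and get bit-identical results.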
Never mind re: killing jobs. I guess they time out at 90 minutes. We do need to figure out why they infinite-loop, though, and also decide about `defined(js)`.
I'm investigating.
Ok. Thanks!
I found a decent answer for the JS part that gives the same function (in the low 32 bits anyway, which is what sets & tables care about). It might not be hard to extend that to 53 bits if people complain. Getting all the way to 64 bits will require changing the backend to use `BigInt` rather than `Number` for `int64`/`uint64`, but I doubt anyone is using even 4-gigaentry int-keyed tables on the JS backend. That'd be at least 64 GiB.
very long time, I think.) `$a`/`$b` depend on iteration order, which follows the range-reduced hash order, which varies with table range for some `hash()` functions. With 3 elements, 3! = 6 orderings is small, and we've just gotten lucky with past experimental `hash()` changes. An alternate fix here would be to not stringify but use the HashSet operators, though it is not clear that wouldn't alter the "spirit" of the test.
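The membership-vs-stringification distinction is easy to see in any language with hashed sets; a small Python illustration (Python's set here is only a stand-in for Nim's `HashSet`):

```python
# Comparing sets by membership is invariant to iteration order;
# comparing their stringified forms is not, in general.
a = {2, 7, 5}
b = {5, 7, 2}

member_equal = (a == b)           # order-independent equality
sorted_equal = sorted(a) == sorted(b)  # another order-stable comparison
# str(a) == str(b) would depend on iteration order, which a hash
# function change can silently alter.
```

Membership comparisons keep a test valid across hash changes; string comparisons pin the test to one particular iteration order.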
guarantee the same low-order bits of output hashes (for `isSafeInteger` input numbers). Since `hashWangYi1` output bits are equally random in all their bits, this means that tables will be safely scrambled for table sizes up to 2**32, or 4 gigaentries, which is probably fine, as long as the integer keys are all < 2**53 (also likely fine). (I'm unsure why the infidelity cutoff with the C/C++ backends is 32 bits, not 53.) Since HashSet & Table only use the low-order bits, a quick corollary is that `$` on most int-keyed sets/tables will be the same across the various backends, which seems a nice-to-have trait.
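The "only use the low-order bits" point refers to power-of-two table sizes indexing with a mask. A minimal sketch of that corollary (hash values here are made-up examples, not real backend outputs):

```python
def table_index(h: int, table_len: int) -> int:
    """Slot for hash h in a power-of-two sized table:
    only the low-order bits of h participate."""
    assert table_len > 0 and table_len & (table_len - 1) == 0
    return h & (table_len - 1)

# If two backends agree on the low 32 bits of every hash, they agree on
# every slot index for any table size up to 2**32 entries.
h_backend_1 = 0x1_23456789   # hypothetical hash from one backend
h_backend_2 = 0xF_23456789   # hypothetical hash, same low 32 bits
same_slot = table_index(h_backend_1, 2 ** 20) == table_index(h_backend_2, 2 ** 20)
```

Hence identical low 32 bits imply identical iteration order, and therefore identical `$` output, for tables up to 4 gigaentries.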
... the CI hang for `testament pcat collections`?
I doubt this has to do with the ... Also, looking at that ...
@narimiran, it looks like your ...
As for the ...: the error message is cut off (interestingly, in a source-whitespace-neutral way), but in ...
As for the third and final nimble packages failure, ...
Indeed they were. I just pushed a fix for that.
Cool. Thanks, @narimiran. Any idea about the compiler crash on VM code generation, or about the ...? (And if you are following the other thread on the original bug report, my conclusion is that this proposed hash function has higher quality output at lower CPU time cost and smaller assembly size than all others mentioned, where output quality is measured with something like 500+ statistical tests. So, virtually indistinguishable from random. I doubt it'll ever fail us except in targeted, direct attacks, where anything without secrets can fail.)
Also, I misinterpreted the CI. It seems that the only failing test is now the ...
@Araq @alaviss, is there a reason why we use Node 8.x in our CIs? (Btw, according to this table, it reached end of life at the end of 2019.) And just a minor correction: @c-blake, the crossed-out part above is about 0.10.x, not 10.x, if I'm reading the table linked above correctly.
Oops. You are right, @narimiran.
Because that's how it was set up when I ported everything to Azure Pipelines. I guess this depends on what version of JS we want to support.
Note that, presently, this PR will work either with the old version or with a newer one, just giving a different hash order across them and just testing for the existence of ... What NodeJS version to use in general on the CI sounds like a broader question. Just my two cents, but given that < 10.x is no longer "well supported" "at home"/"off Azure", it's likely easier on everyone trying to get CIs to work to bump that version.
All checks have passed now. (I made ...
```diff
@@ -1008,7 +1007,7 @@ when isMainModule and not defined(release):
   block toSeqAndString:
     var a = toHashSet([2, 7, 5])
-    var b = initHashSet[int]()
+    var b = initHashSet[int](rightSize(a.len))
```
Is this change still needed? Does the example work without it?
That change is necessary because `toHashSet` also uses `rightSize` for efficiency (to prevent doubling up however many times). If you drop the `assert` you could drop the `rightSize`. If you compare the output sets by membership, not by string/`$` (which depends upon hash order), then you could also drop the `rightSize`. OTOH, if you are considering it a "good example", then `rightSize` is good style, since you start with the right size for an output table, which is always good style if you know it (and sadly not often provided by hash table libraries).
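As a sketch of the `rightSize` idea (the exact stdlib formula may differ; this illustrative Python version just picks the smallest power-of-two length that keeps the load factor comfortable):

```python
def right_size(count: int) -> int:
    """Smallest power-of-two table length that holds `count` entries
    below a ~2/3 load factor (illustrative, not the stdlib formula)."""
    n = 1
    while n * 2 // 3 < count:
        n *= 2
    return n
```

Pre-sizing with such a helper avoids the repeated doubling resizes that a default-initialized table would incur, and it makes the final table size (and hence iteration order) match that of a table built the same way, which is exactly what the `$`-comparing test relies on.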
Oh, also it's part of a `block toSeqAndString`, which, at least to me, made it seem more correct to keep the stringification but make the `HashSet` table size the same, not whatever the default initial size is. "Hash order" is really a concept inherently relative to a given hash table size, since it's `hash() and mask`. So, if you did want to drop the `$`s, we would also probably want to change the block title. But I think it's fine as-is.
The change to `sugar.nim` is, however, not strictly necessary, but the sugar-internal consistency might be considered "better style". You can also re-close #11764 if you want.
* test for issue nim-lang#15624 and PR nim-lang#15915 for patch nim-lang#13823
* Update thashes.nim: no need to mention PR nim-lang#15915, fixed in nim-lang#15937
* rebase to devel (issue maybe fixed), ignore outputs
* Apply suggestions from code review

Co-authored-by: flywind <43030857+xflywind@users.noreply.github.com>