
Crypto: Improve checked add functions & add tests #8501

Closed

Conversation

@DaGenix DaGenix commented Aug 14, 2013

Improve the functions that keep track of the size of the data on which a Digest is being computed, and add some tests. The tests are a little redundant for the success cases - if they didn't work, the Digests that use them would fail. The failure tests are more useful, since it's very difficult to exercise them through the Digests - processing 2^64+1 bits of data would take a long time!
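The "checked add" behavior at issue can be illustrated with Rust's standard `checked_add`, which returns `None` on overflow instead of silently wrapping. This is a minimal sketch, not the PR's code; `add_bits_checked` is a hypothetical name:

```rust
// Minimal illustration of overflow-checked addition for a bit counter.
// `add_bits_checked` is a hypothetical name, not taken from the PR.
fn add_bits_checked(total_bits: u64, new_bits: u64) -> Option<u64> {
    // `checked_add` yields None when the sum would exceed u64::MAX,
    // rather than wrapping around.
    total_bits.checked_add(new_bits)
}

fn main() {
    assert_eq!(add_bits_checked(8, 8), Some(16));
    assert_eq!(add_bits_checked(u64::MAX, 1), None);
}
```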

The performance stays basically the same:

before:

test sha1::bench::sha1_10 ... bench: 74 ns/iter (+/- 0) = 135 MB/s
test sha1::bench::sha1_1k ... bench: 6154 ns/iter (+/- 57) = 166 MB/s
test sha1::bench::sha1_64k ... bench: 392056 ns/iter (+/- 2536) = 167 MB/s
test sha2::bench::sha256_10 ... bench: 82 ns/iter (+/- 3) = 121 MB/s
test sha2::bench::sha256_1k ... bench: 6789 ns/iter (+/- 79) = 150 MB/s
test sha2::bench::sha256_64k ... bench: 431722 ns/iter (+/- 6228) = 151 MB/s
test sha2::bench::sha512_10 ... bench: 56 ns/iter (+/- 1) = 178 MB/s
test sha2::bench::sha512_1k ... bench: 4414 ns/iter (+/- 33) = 231 MB/s
test sha2::bench::sha512_64k ... bench: 280875 ns/iter (+/- 3053) = 233 MB/s

after:

test sha1::bench::sha1_10 ... bench: 72 ns/iter (+/- 1) = 138 MB/s
test sha1::bench::sha1_1k ... bench: 6146 ns/iter (+/- 169) = 166 MB/s
test sha1::bench::sha1_64k ... bench: 389713 ns/iter (+/- 5396) = 168 MB/s
test sha2::bench::sha256_10 ... bench: 77 ns/iter (+/- 1) = 129 MB/s
test sha2::bench::sha256_1k ... bench: 6743 ns/iter (+/- 57) = 151 MB/s
test sha2::bench::sha256_64k ... bench: 430467 ns/iter (+/- 5599) = 152 MB/s
test sha2::bench::sha512_10 ... bench: 58 ns/iter (+/- 1) = 172 MB/s
test sha2::bench::sha512_1k ... bench: 4423 ns/iter (+/- 42) = 231 MB/s
test sha2::bench::sha512_64k ... bench: 281124 ns/iter (+/- 4358) = 233 MB/s


```rust
match x {
    match bits {
```
@huonw (Member) commented:

Could this be let (hi, low) = bits; to avoid the rightward drift?
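The suggestion can be illustrated in isolation: destructuring a tuple with `let` replaces a one-arm `match` and avoids an extra indentation level. This is illustrative only; `sum_words` is a hypothetical name, not code from the diff:

```rust
// Illustrative only: destructuring a tuple with `let` instead of a
// one-arm `match`, as suggested in the review.
fn sum_words(bits: (u64, u64)) -> u64 {
    // Equivalent to `match bits { (hi, low) => ... }`, but without
    // introducing another level of rightward drift.
    let (hi, low) = bits;
    hi.wrapping_add(low)
}

fn main() {
    assert_eq!(sum_words((1, 2)), 3);
    assert_eq!(sum_words((u64::MAX, 1)), 0);
}
```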

@DaGenix (Author) replied:

Great point. I will rebase to make this change.

DaGenix commented Aug 14, 2013

Rebased to address comment by @huonw.

@DaGenix DaGenix closed this Aug 15, 2013
Palmer Cox added 2 commits August 15, 2013 22:51
The shift_add_check_overflow and shift_add_check_overflow_tuple functions are
rewritten to be more efficient and to make use of the CheckedAdd intrinsic
instead of manually checking for integer overflow.

* The invocation of leading_zeros() is removed and replaced with a simple
  integer comparison. The leading_zeros() method results in a ctlz LLVM
  intrinsic, which may not be efficient on all architectures; integer
  comparisons, however, are efficient on just about any architecture.
* The methods lose the ability for the caller to specify a particular shift
  value - that functionality wasn't being used and removing it allows for the
  code to be simplified.
* Finally, the methods are renamed to add_bytes_to_bits and
  add_bytes_to_bits_tuple to reflect their very specific purposes.
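Based on the commit message above, the single-word variant might look roughly like this sketch. It is a reconstruction from the description, not the PR's exact code; the shift by 3 converts bytes to bits, and the shift by 61 isolates the bits that would not fit in a u64 counter:

```rust
// Sketch of `add_bytes_to_bits` as described in the commit message:
// convert a byte count to bits and add it to a u64 bit counter, failing
// on overflow. Reconstructed for illustration, not the PR's exact code.
fn add_bytes_to_bits(bits: u64, bytes: u64) -> u64 {
    // Split bytes * 8 into the part that fits in 64 bits and the part
    // that doesn't; a plain integer comparison on the high part replaces
    // the old leading_zeros() check.
    let new_high_bits = bytes >> 61;
    let new_low_bits = bytes << 3; // bytes * 8, truncated to 64 bits

    if new_high_bits > 0 {
        panic!("numeric overflow occurred");
    }
    bits.checked_add(new_low_bits)
        .expect("numeric overflow occurred")
}

fn main() {
    assert_eq!(add_bytes_to_bits(0, 1), 8);
    assert_eq!(add_bytes_to_bits(16, 4), 48);
}
```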
DaGenix commented Aug 16, 2013

So, my previous implementation would fail!() if the number of bytes being added was greater than 0x1fffffffffffffff. There really isn't any good reason for that, since it's a perfectly valid size to add. Admittedly this is a bit pedantic - in practice it's just about impossible to process that many bytes at a time. However, the whole point of this chunk of code is to handle the case where someone attempts to process more than 2^64-1 (or 2^128-1) bits, which is something that I suspect no one has ever tried to do (who tries to hash 16 billion gigabytes?). The SHA-1 and SHA-2 standards do, however, define a maximum message size, and the only alternative to enforcing it is to ignore it.

Anyway, if we're going to be strict about enforcing the maximum message size, we might as well do it right. I've rebased the commits so that any "bytes" value should work. I also updated the performance numbers, although they stay basically the same.
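The tuple variant, which tracks a 128-bit counter as two u64 words (as SHA-512 requires), can be sketched as follows. This is a hedged reconstruction from the discussion, not the PR's code; it accepts any byte count and propagates a carry from the low word into the high word:

```rust
// Sketch of `add_bytes_to_bits_tuple`: a 128-bit bit counter kept as
// (hi, low) u64 words, accepting any `bytes` value. Reconstructed for
// illustration from the discussion above, not the PR's exact code.
fn add_bytes_to_bits_tuple(bits: (u64, u64), bytes: u64) -> (u64, u64) {
    let (hi, low) = bits;
    let new_high_bits = bytes >> 61; // bits that spill past the low word
    let new_low_bits = bytes << 3;   // bytes * 8, truncated to 64 bits

    match low.checked_add(new_low_bits) {
        // No carry out of the low word.
        Some(new_low) => match hi.checked_add(new_high_bits) {
            Some(new_hi) => (new_hi, new_low),
            None => panic!("numeric overflow occurred"),
        },
        // The low word wrapped; propagate a carry into the high word.
        None => {
            let new_low = low.wrapping_add(new_low_bits);
            match hi
                .checked_add(new_high_bits)
                .and_then(|h| h.checked_add(1))
            {
                Some(new_hi) => (new_hi, new_low),
                None => panic!("numeric overflow occurred"),
            }
        }
    }
}

fn main() {
    assert_eq!(add_bytes_to_bits_tuple((0, 0), 1), (0, 8));
    assert_eq!(add_bytes_to_bits_tuple((0, u64::MAX), 1), (1, 7));
    assert_eq!(add_bytes_to_bits_tuple((0, 0), 1 << 61), (1, 0));
}
```

A failure now occurs only when the full 128-bit counter would overflow, which matches the standards' maximum message size rather than an arbitrary byte-count limit.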

@DaGenix DaGenix reopened this Aug 16, 2013
@DaGenix DaGenix mentioned this pull request Aug 17, 2013
DaGenix commented Aug 17, 2013

I rebased all of these changes into #8272 to be easier on bors. Closing this ticket as all the changes are in that pull request now.

@DaGenix DaGenix closed this Aug 17, 2013
flip1995 pushed a commit to flip1995/rust that referenced this pull request Mar 24, 2022
More `transmute_undefined_repr` fixes

fixes: rust-lang#8498
fixes: rust-lang#8501
fixes: rust-lang#8503

changelog: Allow `transmute_undefined_repr` between fat pointers and `(usize, usize)`
changelog: Allow `transmute_undefined_repr` when one side is a union
changelog: Fix `transmute_undefined_repr` on tuples with one non-zero-sized type.
3 participants