author | bors <bors@rust-lang.org> | 2023-04-18 03:11:18 +0000
---|---|---
committer | bors <bors@rust-lang.org> | 2023-04-18 03:11:18 +0000
commit | 386025117a6b7cd9e7f7c96946793db2ec8aa24c (patch) |
tree | 5dcd9f994eae1d84603e81d686c6db9db389d5d1 /tests/codegen/src-hash-algorithm/src-hash-algorithm-sha256.rs |
parent | e279f902f31af1e111f2a951781c9eed82f8c360 (diff) |
parent | ad8d304163a8c0e8a20d4a1d9783734586273f4a (diff) |
download | rust-386025117a6b7cd9e7f7c96946793db2ec8aa24c.tar.gz |
Auto merge of #110410 - saethlin:hash-u128-as-u64s, r=oli-obk
Implement StableHasher::write_u128 via write_u64
In https://github.com/rust-lang/rust/pull/110367#issuecomment-1510114777, the cachegrind diff indicates that nearly all of the regression comes from this:
```
22,892,558 ???:<rustc_data_structures::sip128::SipHasher128>::slice_write_process_buffer
-9,502,262 ???:<rustc_data_structures::sip128::SipHasher128>::short_write_process_buffer::<8>
```
This happens because the diff for that perf run swaps a `Hash::hash` of a `u64` for a `u128`. But `slice_write_process_buffer` is a `#[cold]` function meant for hashing arbitrary-length byte slices, so hashing a `u128` ends up on the slow path instead of the fixed-width fast path.
Using the much more optimizer-friendly `u64` path twice to hash a `u128` provides a nice perf boost in some benchmarks.
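As an illustration of the technique (not the actual rustc code), the sketch below uses a toy `ExampleHasher` with made-up mixing logic; the point is only the shape of `write_u128`, which splits the value into two halves and routes each through the fixed-width `u64` path rather than the generic byte-slice path.
```rust
use std::hash::Hasher;

/// A toy hasher used only to illustrate the idea; the real change is in
/// rustc_data_structures' StableHasher, which forwards to SipHasher128.
struct ExampleHasher {
    state: u64,
}

impl Hasher for ExampleHasher {
    fn finish(&self) -> u64 {
        self.state
    }

    fn write(&mut self, bytes: &[u8]) {
        // Arbitrary-length byte path: the analogue of the cold
        // slice_write_process_buffer path mentioned above.
        for &b in bytes {
            self.state = self.state.rotate_left(7) ^ u64::from(b);
        }
    }

    #[inline]
    fn write_u64(&mut self, i: u64) {
        // Fixed-width fast path: the analogue of
        // short_write_process_buffer::<8>.
        self.state = self.state.rotate_left(31) ^ i;
    }

    #[inline]
    fn write_u128(&mut self, i: u128) {
        // The technique from this PR: hash a u128 as two u64 halves,
        // reusing the optimizer-friendly u64 path twice.
        self.write_u64(i as u64);
        self.write_u64((i >> 64) as u64);
    }
}

fn main() {
    let mut h = ExampleHasher { state: 0 };
    h.write_u128(0x0123_4567_89ab_cdef_0011_2233_4455_6677);
    println!("{:016x}", h.finish());
}
```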
Diffstat (limited to 'tests/codegen/src-hash-algorithm/src-hash-algorithm-sha256.rs')
0 files changed, 0 insertions, 0 deletions