author     Yves Orton <demerphq@gmail.com>  2017-06-01 14:56:12 +0200
committer  Yves Orton <demerphq@gmail.com>  2017-06-01 16:16:16 +0200
commit     6f019ba79e7ec30ad81de6ad4cce78b8f8f9ba91
tree       1e91c833e70145928a9470beac212c36f2607c67  /sbox32_hash.h
parent     4d8c782364ae966f53263102f1850382d4aeef7a
Restore "Tweak our hash bucket splitting rules"
This reverts commit e4343ef32499562ce956ba3cb9cf4454d5d2ff7f,
which was a revert of 05f97de032fe95cabe8c9f6d6c0a5897b1616194.
Prior to this patch we resized hashes when, after inserting a key,
the load factor of the hash reached 1 (load factor = keys / buckets).
This patch makes two subtle changes to this logic (a short sketch of
the new rule follows the list):
1. We split only after inserting a key into an already-utilized bucket,
2. and only when the load factor exceeds the new maximum of 0.667.
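To make the rule concrete, here is a minimal standalone sketch of the
split decision. It is illustrative only, with made-up names, and is not
the code in hv.c; the 2/3 threshold is checked with integer arithmetic
as 3*keys > 2*buckets:

    #include <stdio.h>

    /* Illustrative sketch only (not Perl's hv.c).  Decide whether to
     * double the bucket array after an insert under the new rule: the key
     * must have landed in an already-used bucket AND the load factor must
     * now be above 2/3 ("keys / buckets > 2/3" == "3*keys > 2*buckets"). */
    static int should_split(unsigned keys, unsigned buckets, int collided)
    {
        return collided && 3u * keys > 2u * buckets;
    }

    int main(void)
    {
        printf("%d\n", should_split(6, 8, 1));  /* 1: collided, 6/8 > 2/3    */
        printf("%d\n", should_split(6, 8, 0));  /* 0: landed in empty bucket */
        printf("%d\n", should_split(5, 8, 1));  /* 0: 5/8 is still below 2/3 */
        return 0;
    }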
The intent and effect of this change is to increase our hash tables'
efficiency. Reducing the maximum load factor to 0.667 means that we should
have far fewer keys in collision overall, at the cost of some unutilized
space (2/3rds was chosen as it is easier to calculate than 0.7). On the
other hand, only splitting after a collision means in theory that we execute
the "final split" less often. Additionally, inserting a key into an unused
bucket increases the efficiency of the hash without changing the worst
case.[1] In other words, without increasing collisions we use the space
in our hashes more efficiently.
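As a back-of-envelope check on the "fewer keys in collision" point (my own
approximation, assuming uniformly random hashing rather than anything
specific to Perl's hash function): at load factor a, a given key is alone
in its bucket with probability roughly exp(-a), so about 1 - exp(-a) of
all keys share a bucket with at least one other key -- roughly 63% at a
load of 1 versus roughly 49% at a load of 2/3:

    #include <stdio.h>
    #include <math.h>

    /* Back-of-envelope only: with load factor a = keys/buckets and
     * uniformly random hashing, roughly 1 - exp(-a) of the keys end up
     * sharing a bucket with at least one other key. */
    int main(void)
    {
        double loads[] = { 1.0, 2.0 / 3.0 };
        for (int i = 0; i < 2; i++)
            printf("load %.3f: ~%2.0f%% of keys in shared buckets\n",
                   loads[i], 100.0 * (1.0 - exp(-loads[i])));
        return 0;
    }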
A side effect of this change is that the size of a hash is more sensitive
to key insert order. A set of keys with some collisions might be one
size if those collisions were encountered early, or another if they were
encountered later. Assuming a random distribution of hash values, about
50% of hashes should be smaller than they would be without this rule.
The two changes complement each other: changing the maximum load
factor decreases the chance of a collision, while splitting only
after a collision means that we won't waste as much of that space as
we otherwise might.
[1] Since I personally didn't find this obvious at first, here is my
explanation:
The old behavior was that we doubled the number of buckets when the
number of keys in the hash matched that of the buckets. So on inserting
the Kth key into a K-bucket hash, we would double the number of
buckets. Thus the worst case prior to this patch was a hash
containing K-1 keys which all hash into a single bucket, and the post-split
worst case behavior would be having K items in a single bucket
of a hash with 2*K buckets total.
The new behavior says that we double the size of the hash when inserting
an item into an occupied bucket and, after doing so, we exceed the maximum
load factor (leave aside the change in maximum load factor in this patch).
If we insert into an occupied bucket (including the worst-case bucket) then
we trigger a key split, and we have exactly the same cases as before.
If we insert into an empty bucket then we now have a worst case of K-1 items
in one bucket, and 1 item in another, in a hash with K buckets; thus the
worst case has not changed.
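This argument can be replayed mechanically. The following sketch (again
illustrative, not the patch's code) starts from the worst-case state of
K-1 keys piled into one bucket of a K-bucket hash and applies the new
rule to the Kth insert:

    #include <stdio.h>

    /* Same illustrative rule as above. */
    static int should_split(unsigned keys, unsigned buckets, int collided)
    {
        return collided && 3u * keys > 2u * buckets;
    }

    int main(void)
    {
        unsigned K = 8;   /* K buckets, K-1 keys already in a single bucket */

        /* Kth key hits the occupied bucket: 3*8 > 2*8, so we split, and the
         * post-split worst case is K keys in one bucket of a 2*K-bucket
         * hash -- exactly the old worst case. */
        printf("collision: split=%d\n", should_split(K, K, 1));

        /* Kth key hits an empty bucket: no split, and the longest chain is
         * still the K-1 keys that were already together, so the worst case
         * is unchanged. */
        printf("no collision: split=%d\n", should_split(K, K, 0));
        return 0;
    }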