author    Yves Orton <demerphq@gmail.com>  2017-03-22 15:59:31 +0100
committer Yves Orton <demerphq@gmail.com>  2017-04-23 11:44:17 +0200
commit    05f97de032fe95cabe8c9f6d6c0a5897b1616194 (patch)
tree      f0c829ea57a9a66e3c253310c7db15ca04958b30 /ext/Hash-Util
parent    a4283faf7092ec370914ee3e4e7afeddd0115689 (diff)
download  perl-05f97de032fe95cabe8c9f6d6c0a5897b1616194.tar.gz
Tweak our hash bucket splitting rules
Prior to this patch we resized hashes when, after inserting a key, the load factor of the hash reached 1 (load factor = keys / buckets). This patch makes two subtle changes to this logic:

1. We split only after inserting a key into an already-utilized bucket,
2. and only when the resulting load factor exceeds 0.667.

The intent and effect of this change is to increase the efficiency of our hash tables. Reducing the maximum load factor to 0.667 means that we should have far fewer keys in collision overall, at the cost of some unutilized space (2/3rds was chosen because it is easier to calculate than 0.7). On the other hand, splitting only after a collision means in theory that we execute the "final split" less often. Additionally, inserting a key into an unused bucket increases the efficiency of the hash without changing the worst case.[1] In other words, without increasing collisions we use the space in our hashes more efficiently.

A side effect of this change is that the size of a hash is more sensitive to key insert order. A set of keys with some collisions might end up at one size if those collisions were encountered early, and at another if they were encountered later. Assuming a random distribution of hash values, about 50% of hashes should be smaller than they would be without this rule.

The two changes complement each other: reducing the maximum load factor decreases the chance of a collision, while splitting only after a collision means that we won't waste as much of that extra space as we otherwise might.

[1] Since I personally didn't find this obvious at first, here is my explanation: The old behavior was that we doubled the number of buckets when the number of keys in the hash matched the number of buckets. So on inserting the Kth key into a K-bucket hash, we would double the number of buckets. Thus the worst case prior to this patch was a hash containing K-1 keys which all hash into a single bucket, and the post-split worst case was K items in a single bucket of a hash with 2*K buckets total.

The new behavior says that we double the size of the hash when inserting an item into an occupied bucket pushes the load factor past the maximum (leaving aside the change to the maximum load factor made by this patch). If we insert into an occupied bucket (including the worst-case bucket) then we trigger a split, and we have exactly the same cases as before. If we insert into an empty bucket then we now have a worst case of K-1 items in one bucket and 1 item in another, in a hash with K buckets; thus the worst case has not changed.
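To make the new rule concrete, here is a minimal sketch of the two split policies in Perl. This is a standalone simulation, not the actual implementation (the real logic is in the C code in hv.c); the names should_split_old and should_split_new are illustrative only.

    use strict;
    use warnings;

    # Old policy: double the bucket count as soon as the load factor
    # (keys / buckets) reaches 1, regardless of where the key landed.
    sub should_split_old {
        my ($keys, $buckets) = @_;
        return $keys >= $buckets;
    }

    # New policy: double the bucket count only when the key just
    # inserted landed in an already-occupied bucket AND the load
    # factor now exceeds 2/3.
    sub should_split_new {
        my ($keys, $buckets, $collided) = @_;
        return $collided && $keys > ($buckets * 2) / 3;
    }

    # Example: 6 keys in 8 buckets is past the 2/3 threshold, but the
    # split only triggers if the sixth insert collided.
    print should_split_new(6, 8, 0) ? "split\n" : "no split\n";  # no split
    print should_split_new(6, 8, 1) ? "split\n" : "no split\n";  # split

This also shows why hash size becomes sensitive to insert order: the same set of keys can end up in 8 or 16 buckets depending on when the collisions happen, which is exactly what the test changes below accommodate.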
Diffstat (limited to 'ext/Hash-Util')
-rw-r--r--  ext/Hash-Util/t/Util.t     4
-rw-r--r--  ext/Hash-Util/t/builtin.t  10
2 files changed, 8 insertions, 6 deletions
diff --git a/ext/Hash-Util/t/Util.t b/ext/Hash-Util/t/Util.t
index 4a12fd1764..c52a8e4b88 100644
--- a/ext/Hash-Util/t/Util.t
+++ b/ext/Hash-Util/t/Util.t
@@ -606,9 +606,9 @@ ok(defined($hash_seed) && $hash_seed ne '', "hash_seed $hash_seed");
my $array1= bucket_array({});
my $array2= bucket_array({1..10});
is("@info1","0 8 0");
- is("@info2[0,1]","5 8");
+ like("@info2[0,1]",qr/5 (?:8|16)/);
is("@stats1","0 8 0");
- is("@stats2[0,1]","5 8");
+ like("@stats2[0,1]",qr/5 (?:8|16)/);
my @keys1= sort map { ref $_ ? @$_ : () } @$array1;
my @keys2= sort map { ref $_ ? @$_ : () } @$array2;
is("@keys1","");
diff --git a/ext/Hash-Util/t/builtin.t b/ext/Hash-Util/t/builtin.t
index 3654c9bc1a..0705f84206 100644
--- a/ext/Hash-Util/t/builtin.t
+++ b/ext/Hash-Util/t/builtin.t
@@ -26,13 +26,15 @@ is(used_buckets(%hash), 1, "hash should have one used buckets");
$hash{$_}= $_ for 2..7;
-like(bucket_ratio(%hash), qr!/8!, "hash has expected number of buckets in bucket_ratio");
-is(num_buckets(%hash), 8, "hash should have eight buckets");
+like(bucket_ratio(%hash), qr!/(?:8|16)!, "hash has expected number of buckets in bucket_ratio");
+my $num= num_buckets(%hash);
+ok(($num == 8 || $num == 16), "hash should have 8 or 16 buckets");
cmp_ok(used_buckets(%hash), "<", 8, "hash should have one used buckets");
$hash{8}= 8;
-like(bucket_ratio(%hash), qr!/16!, "hash has expected number of buckets in bucket_ratio");
-is(num_buckets(%hash), 16, "hash should have sixteen buckets");
+like(bucket_ratio(%hash), qr!/(?:8|16)!, "hash has expected number of buckets in bucket_ratio");
+$num= num_buckets(%hash);
+ok(($num == 8 || $num == 16), "hash should have 8 or 16 buckets");
cmp_ok(used_buckets(%hash), "<=", 8, "hash should have at most 8 used buckets");
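The helper functions exercised above can also be run directly against a live hash. A minimal sketch using Hash::Util (the output in the comment is illustrative; as the patch notes, the actual bucket count now depends on insert order):

    use strict;
    use warnings;
    use Hash::Util qw(num_buckets used_buckets bucket_ratio);

    my %hash = map { $_ => $_ } 1 .. 7;
    printf "buckets=%d used=%d ratio=%s\n",
        num_buckets(%hash), used_buckets(%hash), bucket_ratio(%hash);
    # e.g. "buckets=8 used=6 ratio=6/8", or buckets=16 if one of the
    # seven inserts landed in an occupied bucket past the load limit.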