| Commit message | Author | Age | Files | Lines |
|
| |
The purpose is fewer machine instructions and faster code.
* S_hv_free_ent_ret() is always called with entry non-null, so change its
signature to reflect this, and remove a null check;
* Add some SvREFCNT_dec_NNs;
* In hv_clear(), refactor the code slightly to only do a SvREFCNT_dec_NN
within the branch where it's already been determined that the arg is
non-null; also, use the _nocontext variant of Perl_croak() to save
a push instruction in threaded perls.
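A minimal sketch of the kind of saving involved (assuming the usual core
headers and macros; not the literal diff):

    SV *val = HeVAL(entry);
    if (val) {
        /* ... work that needs val ... */
        /* SvREFCNT_dec() tolerates NULL and so tests its argument; in a
         * branch that has already proven val non-null, the _NN variant
         * skips that redundant test. */
        SvREFCNT_dec_NN(val);
    }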
|
| |
In this instance, we know that av is not null, so there is no need to
check whether it is.
|
| |
There is no point in modifying a struct just before freeing it.
This was my mistake, in commit 47f1cf7702, when I moved the code from
S_hfreeentries to hv_undef.
|
| |
This finishes the removal of register declarations started by
eb578fdb5569b91c28466a4d1939e381ff6ceaf4. That commit neglected the ones
in function parameter declarations, and didn't include files in dist,
ext, and lib, which this commit does include.
|
| |
We do not want to resize the hash every time a bucket chain is too
long. Nor do we want to pay the price of checking how long the bucket
chain is when there is nothing we can do about it anyway.
|
| |
Note: this failed to build in smoke-me, e.g.
http://perl.develop-help.com/raw/?id=131750
|
| |
This patch does the following:
*) Introduces multiple new hash functions to choose from at build
time. This includes Murmur-32, SDBM, DJB2, SipHash, SuperFast, and
One-at-a-time. Currently this is handled by munging hv.h. Configure
support hopefully to follow.
*) Changes the default hash to Murmur hash, which is faster than the
old default, One-at-a-time.
*) Rips out the old HvREHASH mechanism and replaces it with a
per-process random hash seed.
*) Changes the old PL_hash_seed from an interpreter value to a
global variable. This means it does not have to be copied during
interpreter setup or cloning.
*) Changes the format of the PERL_HASH_SEED variable to a hex
string so that hash seeds that do not fit in an integer are possible.
*) Changes the return of Hash::Util::hash_seed() from a number to a
string. This is to accommodate hash functions which have more bits
than can fit in an integer.
*) Adds new functions to Hash::Util to improve introspection of hashes:
-) hash_value() - returns an integer hash value for a given string.
-) bucket_info() - returns basic hash bucket utilization info.
-) bucket_stats() - returns more detailed hash bucket utilization info.
-) bucket_array() - reports which keys are in which buckets in a hash.
More details on the new hash functions can be found below:
Murmur Hash (v3): from Google, see
http://code.google.com/p/smhasher/wiki/MurmurHash3
SuperFast Hash: from Paul Hsieh.
http://www.azillionmonkeys.com/qed/hash.html
DJB2: a hash function from Daniel Bernstein.
http://www.cse.yorku.ca/~oz/hash.html
SDBM: the hash function from sdbm.
http://www.cse.yorku.ca/~oz/hash.html
SipHash: by Jean-Philippe Aumasson and Daniel J. Bernstein.
https://www.131002.net/siphash/
They have all been converted into Perl's ugly macro format.
I have not done any rigorous testing to make sure this conversion
is correct. They seem to function as expected, however.
All of them use the random hash seed.
You can force the use of a given function by defining one of
PERL_HASH_FUNC_MURMUR
PERL_HASH_FUNC_SUPERFAST
PERL_HASH_FUNC_DJB2
PERL_HASH_FUNC_SDBM
PERL_HASH_FUNC_ONE_AT_A_TIME
Setting the environment variable PERL_HASH_SEED_DEBUG to 1 will make
perl output the current seed (converted to hex) and the hash function
it has been built with.
Setting the environment variable PERL_HASH_SEED to a hex value will
cause that value to be used as the seed. Any missing bits of the seed
will be set to 0. The bits are filled in from left to right, not
the traditional right to left, so setting it to FE results in a seed
value of "FE000000", not "000000FE".
Note that we do the hash seed initialization in perl_construct().
Doing it via perl_alloc() (via init_tls) causes problems under
threaded builds, as the buffers used for the reentrant srand48 functions
are not allocated. See also the p5p mail "Hash improvements blocker:
portable random code that doesnt depend on a functional interpreter",
Message-ID:
<CANgJU+X+wNayjsNOpKRqYHnEy_+B9UH_2irRA5O3ZmcYGAAZFQ@mail.gmail.com>
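For illustration, a plain C sketch of the classic One-at-a-time function
with a seed folded in; this shows the general shape only, not the exact
PERL_HASH macro in hv.h (hash_oaat is a made-up name):

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t
    hash_oaat(const unsigned char *str, size_t len, uint32_t seed)
    {
        uint32_t hash = seed;       /* the per-process random seed */
        while (len--) {
            hash += *str++;
            hash += (hash << 10);
            hash ^= (hash >> 6);
        }
        /* final avalanche of the mixed bits */
        hash += (hash << 3);
        hash ^= (hash >> 11);
        hash += (hash << 15);
        return hash;
    }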
|
| |
By defining NO_TAINT_SUPPORT, all the various checks that perl does for
tainting become no-ops. It's not an entirely complete change: it doesn't
attempt to remove the taint-related interpreter variables, but instead
virtually eliminates access to them.
Why, you ask? Because it appears to speed up perl's run-time
significantly by avoiding various "are we running under taint" checks
and the like.
This change is not in a state to go into blead yet. The actual way I
implemented it might raise some (valid) objections. Basically, I
replaced all uses of the global taint variables (but not PL_taint_warn!)
with an extra layer of get/set macros (TAINT_get/TAINTING_get).
Furthermore, the change is not complete:
- PL_taint_warn would likely deserve the same treatment.
- Obviously, tests fail. We have tests for -t/-T.
- Right now, I added a Perl warn() on startup when -t/-T are detected
but the perl was not compiled to support it. It might be argued that it
should be silently ignored! Needs some thinking.
- Code quality concerns - needs review.
- Configure support required.
- Needs thinking: How does this tie in with CPAN XS modules that use
PL_taint and friends? It's easy to backport the new macros via PPPort,
but that doesn't magically change all code out there. Might be
harmless, though, because whenever you're running under
NO_TAINT_SUPPORT, any check of PL_taint/etc is going to come up false.
Thus, the only CPAN code that SHOULD be adversely affected is code
that changes taint state.
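The macro layer is roughly of this shape (a hedged sketch; TAINT_get and
TAINTING_get are named above, while TAINT_set and the exact definitions
are illustrative guesses):

    #ifdef NO_TAINT_SUPPORT
    #  define TAINT_get      0        /* taint checks constant-fold away */
    #  define TAINTING_get   0
    #  define TAINT_set(v)   NOOP
    #else
    #  define TAINT_get      (PL_tainted)
    #  define TAINTING_get   (PL_tainting)
    #  define TAINT_set(v)   (PL_tainted = (v))
    #endif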
|
| |
|
| |
I don't see any reason not to set flags properly in this
branch. It doesn't look like a useful optimization.
It's probably even a bug, but it can probably only be hit from
XS code. To hit the bug, keysv must be provided, be UTF8,
and not be SvIsCOW_shared_hash, while flags contains
HVhek_KEYCANONICAL.
|
| |
|
| |
|
| |
When %^H is copied on entering a new scope, if it happens to have been
tied, it can die. This was resulting in leaks, because no protections
were added to handle that case.
The two things that were leaking were the new hash in hv_copy_hints_hv
and the new value (for an element) in newSVsv.
By fixing newSVsv itself, this also fixes any potential leaks when
other pieces of code call newSVsv on explosive values.
|
| |
The current iterator was leaking when a tied hash was freed or
undefined.
Since we already have a mechanism, namely HvLAZYDEL, for freeing
HvEITER when not referenced elsewhere, we can use that.
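Roughly, the idea is (a sketch, not the literal patch):

    /* When freeing or undefining a tied hash that still has a current
     * iterator entry, flag the entry instead of freeing it here; the
     * next hv_iternext() sees HvLAZYDEL and calls hv_free_ent() itself. */
    if (HvEITER_get(hv))
        HvLAZYDEL_on(hv);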
|
| |
|
| |
Perl caches SUPER methods inside packages named Foo::SUPER. But this
interferes with actual method calls on those packages (SUPER->foo,
foo::SUPER->foo).
The first time a package is looked up, it is vivified under the name
with which it is looked up. So *SUPER:: will cause that package
to be called SUPER, and *main::SUPER:: will cause it to be named
main::SUPER.
main->SUPER::isa used to be very sensitive to the name of the
main::FOO package (where the cache is kept). If it happened to be
called SUPER, that call would fail.
Fixing that bug (commit 3c104e59d83f) caused the CPAN module named
SUPER to fail, because SUPER->foo was now being treated as a
SUPER::method call. gv_fetchmeth_pvn was using the ::SUPER suffix to
determine where to look for the method. The package passed to it (the
::SUPER package) was being used to look for cached methods, but the
package with ::SUPER stripped off was being used for the rest of
lookup. 3c104e59d83f made main->SUPER::foo work by treating SUPER
as main::SUPER in that case. Mentioning *main::SUPER:: or doing a
main->SUPER::foo call before loading SUPER.pm also caused it to fail,
even before 3c104e59d83f.
Instead of using publicly-visible packages for internal caches, we
should be keeping them internal, to avoid such side effects.
This commit adds a new member to the HvAUX struct, where a hash of GVs
is stored, to cache super methods. I cannot simply use a hash of CVs,
because I need GvCVGEN. Using a hash of GVs allows the existing
method cache code to be used.
This new hash of GVs is not actually a stash, as it has no HvAUX
struct (i.e., no name, no mro_meta). It doesn’t even need an @ISA
entry as before (which was only used to make isa caches reset), as it
shares its owner stash’s mro_meta generation numbers. In fact, the
GVs inside it have their GvSTASH pointers pointing to the owner stash.
In terms of memory use, it is probably the same as before. Every
stash and every iterated or weakly-referenced hash is now one pointer
larger than before, but every SUPER cache is smaller (no HvAUX, no
*ISA + @ISA + $ISA[0] + magic).
The code is a lot simpler now and uses fewer stash lookups, so it
should be faster.
This will break any XS code that expects gv_fetchmeth_pvn to treat
the ::SUPER suffix as magical. This behaviour was only barely
documented (the suffix was mentioned, but what it did was not), and is
unused on CPAN.
|
| |
This removes most register declarations in C code (and accompanying
documentation) in the Perl core. Retained are those in the ext
directory, Configure, and those that are associated with assembly
language.
See:
http://stackoverflow.com/questions/314994/whats-a-good-example-of-register-variable-usage-in-c
which says, in part:
There is no good example of register usage when using modern compilers
(read: last 10+ years) because it almost never does any good and can do
some bad. When you use register, you are telling the compiler "I know
how to optimize my code better than you do" which is almost never the
case. One of three things can happen when you use register:
The compiler ignores it, this is most likely. In this case the only
harm is that you cannot take the address of the variable in the
code.
The compiler honors your request and as a result the code runs slower.
The compiler honors your request and the code runs faster, this is the least likely scenario.
Even if one compiler produces better code when you use register, there
is no reason to believe another will do the same. If you have some
critical code that the compiler is not optimizing well enough your best
bet is probably to use assembler for that part anyway but of course do
the appropriate profiling to verify the generated code is really a
problem first.
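For reference, the one concrete cost mentioned above, as a trivial
standalone example:

    int main(void) {
        register int counter = 0;  /* a hint the compiler is free to ignore */
        /* int *p = &counter; */   /* error: address of register variable */
        return counter;
    }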
|
| |
I was confused by the earlier documentation. Thanks to Leon Timmermans
for clarifying, and to Vincent Pit for most of the wording.
|
| |
This updates the editor hints in our files for Emacs and vim to request
that tabs be inserted as spaces.
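The hint block at the bottom of the affected C files now looks roughly
like this (a sketch; the exact settings may vary per file):

    /*
     * Local variables:
     * c-indentation-style: bsd
     * c-basic-offset: 4
     * indent-tabs-mode: nil
     * End:
     *
     * ex: set ts=8 sts=4 sw=4 et:
     */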
|
| |
Commit 104d7b69 made sv_clear free hashes iteratively rather than recursively;
however, my code didn't record the current hash index when freeing a
nested hash, which made the code go quadratic when freeing a large hash
with inner hashes, e.g.:
my $r; $r->{$_} = { a => 1 } for 1..10_0000;
This was noticeable on such things as CPAN.pm being very slow to exit.
This commit fixes this by squirrelling away the old hash index in the
now-unused SvMAGIC field of the hash being freed.
|
| |
|
| |
Commit 60edcf09a was supposed to make things less buggy, but putting
the ENTER/LEAVE in h_freeentries was a mistake, as both hv_undef and
hv_clear still access the hv after calling h_freeentries. Why it
didn’t crash for me is anyone’s guess.
|
| |
|
| |
> > Actually, the simplest solution seems to be to put the av or hv on
> > the mortals stack in pp_aassign and pp_undef, rather than in
> > [ah]v_undef/clear.
>
> This makes me nervous. The tmps stack is typically cleared only on
> statement boundaries, so we run the risks of
>
> * user-visible delaying of freeing elements;
> * large tmps stack growth might be possible with
> certain types of loop that repeatedly assign to an array without
> freeing tmps (eg map? I think I fixed most map/grep tmps leakage a
> while back, but there may still be some edge cases).
>
> Surely an ENTER/SAVEFREESV/LEAVE inside pp_aassign is just as
> efficient,
> without any attendant risks?
>
> Also, although pp_aassign and pp_undef are now fixed, the
> [ah]v_undef/clear functions aren't, and they're part of the public API
> that can be called independently of pp_aassign etc. Ideally they
> should
> be fixed (so they don't crash in mid-loop), and their documentation
> updated to point out that on return, their AV/HV arg may have been
> freed.
This commit takes care of the first part; it changes pp_aassign to use
ENTER/SAVEFREESV/LEAVE and adds the same to h_freeentries (called both
by hv_undef and hv_clear), av_undef and av_clear.
It effectively reverts the C code part of 9f71cfe6ef2.
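The pattern added looks roughly like this (a sketch using the standard
scope macros; my_av_clear is a hypothetical stand-in for the real
functions):

    void
    my_av_clear(pTHX_ AV *av)
    {
        ENTER;
        SAVEFREESV(SvREFCNT_inc_simple_NN(av)); /* hold a ref until LEAVE */
        /* ... free the elements; destructors may run here and may even
         *     try to free av itself, but our reference keeps it alive ... */
        LEAVE;          /* the extra reference is dropped again here */
    }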
|
| |
|
| |
|
| |
It hasn’t been necessary since commit f50383f5. Before that it wasn’t
sufficient. See commit 5743f2a for details.
If a hash element is being deleted, S_hv_delete_common takes care of
this. If a hash is being freed or cleared, hv_undef or hv_clear takes
care of it.
|
| |
Changes four commits ago made this possible.
|
| |
When a hash element is deleted in void context, if the value is freed
before the hash entry, it is possible for a destructor to see the hash
in an inconsistent state--inconsistent in that it contains entries
that are about to be freed, with nothing to indicate that. So the
destructor itself could free the very same hash entry (e.g., by
freeing the hash), resulting in a double free, panic, or other
unpleasantness.
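A sketch of the safe ordering (illustrative only, using the variable
names conventional in S_hv_delete_common; the actual change may differ
in detail):

    *oentry = HeNEXT(entry);            /* unlink entry from its bucket */
    sv = HeVAL(entry);
    HeVAL(entry) = &PL_sv_placeholder;  /* nothing freed is left visible */
    SvREFCNT_dec(sv);                   /* destructors may run here */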
|
| |
Commit 71be2cb (inseparable changes from patch from perl5.003_13 to
perl5.003_14) added code to hv_free_ent to reset method caches if
necessary.
Later, a bug fix in the deletion code (f50383f5) made it necessary for
the value in the HE to be set to &PL_sv_placeholder, so it wouldn’t
free the sv just yet. So the method cache code (which by then had
changed from using PL_sub_generation to using mro_method_changed_in)
got repeated in S_hv_delete_common.
The only problem with all this is that hv_free_ent was the wrong place
to put it to begin with. If delete is called in non-void context,
the sv in it is not freed just yet, but returned; so hv_free_ent was
already being called with a HE pointing to &PL_sv_placeholder.
Commit f50383f5 only added the mro code for the void case, to avoid
changing existing behaviour when rearranging the code.
It turns out it needs to be done in S_hv_delete_common
unconditionally.
Changing a test in t/op/method.t to use ()=delete instead of delete
makes it fail, because method caches end up going stale. See the test
in the diff.
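The point, roughly in code (a sketch, not the literal hunk):

    /* In S_hv_delete_common: invalidate method caches for stashes on
     * every deletion, whether or not the value is freed right away. */
    if (HvENAME_get(hv))
        mro_method_changed_in(hv);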
|
| |
|
| |
When seeing whether the cop hint hash contains the given feature,
Perl_feature_is_enabled only needs to see whether the hint hash
element exists. It doesn’t need to turn it into a scalar.
|
| |
Entries from a tied %^H were not being copied to inner compile-time
scopes, resulting in %^H appearing empty in BEGIN blocks, even
though the underlying he chain *was* being propagated properly (so
(caller)[10] at run time still worked).
I was surprised that, in writing tests for this, I produced another
crash. I thought I had fixed them with 95cf23680 and 7ef9d42ce. It
turns out that pp_helem doesn’t support hashes with null values.
(That’s a separate bug that needs fixing, since the XS API allows for
them.) For now, cloning the hh properly stops pp_helem from getting a
null value.
|
| |
The test that was added in 95cf23680e tickled another bug in the same
code in Perl_hv_copy_hints_hv as the one it fixed, but not on the
committer’s machine.
Not only can a HE from a tied hash have a null entry, but it can also
have an SV for its key. Treating it as a hek and trying to read flags
from it may result in other code being told to free something it
shouldn’t because the SV, when looked at as a hek, appeared to have
the HVhek_FREEKEY flag.
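The distinction, as a sketch of the relevant check (not the exact patch):

    /* An entry handed back by a tied (magical) hash may carry its key as
     * an SV rather than a HEK: HeKLEN() is then HEf_SVKEY and HeKEY()
     * must not be read as a string with HVhek_* flags. */
    if (HeKLEN(entry) == HEf_SVKEY) {
        SV * const keysv = HeKEY_sv(entry);
        /* ... copy/use keysv as an SV ... */
    }
    else {
        /* ... safe to treat the key as a HEK ... */
    }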
|
| |
When hv_iternext_flags is called on a tied hash, the hash entry (HE)
that it returns has no value. Perl_hv_copy_hints_hv, added in commit
5b9c067131, was assuming that it would have a value and calling
sv_magic on it, resulting in a crash.
Commit b50b205 made namespace::clean’s test suite crash, because
strict.pm started using %^H. It was already possible to crash
namespace::clean with other hh-using pragmata, like sort:
# namespace::clean 0.21 only uses ties in the absence of B:H:EOS
use Devel::Hide 'B::Hooks::EndOfScope';
use sort "stable";
use namespace::clean;
use sort "stable";
{;}
It was possible to trigger the crash with no modules like this:
package namespace::clean::_TieHintHash;
sub TIEHASH { bless[] }
sub STORE { $_[0][0]{$_[1]} = $_[2] }
sub FETCH { $_[0][0]{$_[1]} }
sub FIRSTKEY { my $a = scalar keys %{$_[0][0]}; each %{$_[0][0]} }
sub NEXTKEY { each %{$_[0][0]} }
package main;
BEGIN {
$^H{foo} = "bar";
tie( %^H, 'namespace::clean::_TieHintHash' );
$^H{foo} = "bar";
}
{ ; }
This commit puts in a simple null check before calling sv_magic. Tied
hint hashes still do not work, but they now only work as badly as in
5.8 (i.e., they don’t crash).
I don’t think tied hint hashes can ever be made to work properly, even
if we do make Perl_hv_copy_hints_hv copy the hash properly, because in
the scope where %^H is tied, the tie magic takes precedence over hint
magic, preventing the underlying he chain from being updated. So
hints set in that scope will just not stick.
|
| |
Now that SvOOK_on has a usable definition (i.e., it leaves the
NIOK flags alone), we can use it and remove the comments warning
against it.
|
| |
It says to allocate one block of memory with the HEK immediately
following the HE, so we can find the HEK from the HE. Of course we
can find the HEK from the HE, as the HE points to it. The two terms
were apparently transposed.
|
| |
The perldiag entry said ‘nonexistent’, which is correct. hv.c said
‘non-existent’, which is, well, questionable. They should be the
same, so I corrected hv.c. I also added the %s%s to the end in
perldiag.
|
| |
Commit 900ac0519e (5.11.0) sped up keys() on an empty hash by modifying
the iteration code not to loop through the buckets looking for an
entry if the number of keys is 0. Interestingly, it had no visible
effect on keys(), but it *did* have one on each(). Resetting the
iterator’s current bucket number (RITER) used to be done inside that loop
in hv_iternext. keys() always begins by resetting the iterator, so it
was unaffected. But each %empty will leave the iterator as-is. It
will be set on an empty hash if the last element was deleted while an
iterator was active. This has buggy side-effects.
$h{1} = 2;
each %h; # returns (1, 2)
delete $h{1};
each %h; # returns false; should reset iterator
$h{1}=2;
print each %h, "\n"; # prints nothing
Then came commit 3b37eb248 (5.15.0), which changed the way
S_hfreeentries works. (S_hfreeentries is called by all operators that
empty hashes, such as %h=() and undef %h.) Now it does nothing if the hash is
empty. That change on its own should have been harmless, but the
result was that even %h=() won’t reset RITER after each() has put
things in an inconsistent state. This caused test failures in
Text::Tabulate.
So the solution, of course, is to complete the change made by
900ac0519e and reset the iterator properly in hv_iternext if the
hash is empty.
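The idea of the completed fix, as a sketch (assuming the usual iterator
accessors; not the literal diff):

    /* In hv_iternext: with no keys left, put the iterator back into its
     * pristine state instead of leaving a stale bucket index behind. */
    if (!HvTOTALKEYS(hv)) {
        HvRITER_set(hv, -1);
        HvEITER_set(hv, NULL);
        return NULL;
    }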
|
| |
|
| |
This interface is unfortunate, but it's there and in use.
|
| |
|
| |
Doing memEQ(str1, str2, len2) without checking the length
first will cause memEQ("forth","fort"...) to compare equal and
memEQ("fort","forth"...) to read unallocated memory.
This was only a potential future problem, as none of the callers reach
this branch.
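In other words, a trivial sketch of the corrected comparison:

    /* Compare the lengths before the bytes; otherwise a shared prefix
     * matches spuriously one way and reads past the buffer the other. */
    if (len1 == len2 && memEQ(str1, str2, len1)) {
        /* ... the strings really are equal ... */
    }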
|
| |
This adds a new static function to hv.c, hek_eq_pvn_flags,
which replaces several memEQs.
It also cleans up hv_name_set and has the relevant calls
to hv_common and friends made UTF-8 aware.
Finally, it changes share_hek() to modify the hash passed
in if the pv was modified when downgrading.
|
| |
Commit f50383f58 made the ‘HeVAL(entry) = &PL_sv_placeholder;’ in the
hash-element-deletion code unconditional. In doing so, it put it
after the if/else statement containing the SvREFCNT_dec. So the freed
SV was visible in the hash to destructors called by SvREFCNT_dec.
|
| |
|
| |
|
| |
This makes them consistent with other functions that put the basic
datum type first (like hv_*, sv_*, cophh_*).
Since fetch_cop_label is marked as experimental (M), this change
should be OK.
|
| |
|
| |
Commit 7d6175e, a fix-up after commit e0171a1a3 (which introduced
hfree_next_entry), did not account for the fact that
hfree_next_entry frees the hash iterator before removing and returning
the next value.
It changed the callers to check the number of keys to determine
whether anything else needed to be freed, which meant that
hfree_next_entry was called one time less than necessary on hashes
whose current iterator had been deleted and which consequently
appeared empty.
This fixes that.
I don’t know how to test it, but the string table warnings were causing
test failures on VMS, so maybe that’s good enough.
|