path: root/hv.c
Commit log for hv.c, most recent first. Each entry shows the commit message, author, date, number of files changed, and lines removed/added.
* In Perl_refcounted_he_fetch_pvn(), eliminate nested ? : ternary operators. (Nicholas Clark, 2012-01-17; 1 file, -8/+7)
* Fix crash in hv_undef (Father Chrysostomos, 2012-01-09; 1 file, -8/+9)
  Commit 60edcf09a was supposed to make things less buggy, but putting the ENTER/LEAVE in h_freeentries was a mistake, as both hv_undef and hv_clear still access the hv after calling h_freeentries. Why it didn’t crash for me is anyone’s guess.
* Document that [ah]v_undef/clear may free the [ah]v (Father Chrysostomos, 2012-01-09; 1 file, -2/+9)
* Better fix for perl #107440 (Father Chrysostomos, 2012-01-09; 1 file, -0/+8)
  > > Actually, the simplest solution seem to be to put the av or hv on
  > > the mortals stack in pp_aassign and pp_undef, rather than in
  > > [ah]v_undef/clear.
  >
  > This makes me nervous. The tmps stack is typically cleared only on
  > statement boundaries, so we run the risks of
  >
  > * user-visible delaying of freeing elements;
  > * large tmps stack growth might be possible with certain types of
  >   loop that repeatedly assign to an array without freeing tmps (eg
  >   map? I think I fixed most map/grep tmps leakage a while back, but
  >   there may still be some edge cases).
  >
  > Surely an ENTER/SAVEFREESV/LEAVE inside pp_aassign is just as
  > efficient, without any attendant risks?
  >
  > Also, although pp_aassign and pp_undef are now fixed, the
  > [ah]v_undef/clear functions aren't, and they're part of the public
  > API that can be called independently of pp_aassign etc. Ideally they
  > should be fixed (so they don't crash in mid-loop), and their
  > documentation updated to point out that on return, their AV/HV arg
  > may have been freed.

  This commit takes care of the first part; it changes pp_aassign to use ENTER/SAVEFREESV/LEAVE and adds the same to h_freeentries (called both by hv_undef and hv_clear), av_undef and av_clear. It effectively reverts the C code part of 9f71cfe6ef2.
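As a rough illustration of the guard being discussed, here is a minimal C sketch of the ENTER/SAVEFREESV/LEAVE pattern, assuming XS-level code that empties a hash it holds no extra reference to (the function name is invented; this is not the commit's actual diff):

    /* Keep the HV alive until scope exit, so destructors triggered while
     * emptying it cannot free the HV out from under us. */
    static void
    S_empty_hv_guarded(pTHX_ HV *hv)
    {
        ENTER;
        SAVEFREESV(SvREFCNT_inc_simple_NN((SV *)hv));
        hv_clear(hv);   /* may run DESTROY methods that drop the last ref */
        LEAVE;          /* releases the reference taken above */
    }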
* hv.c pod: 'Perl_sv_placeholder' should be 'PL_sv_placeholder' (Karl Williamson, 2012-01-04; 1 file, -1/+1)
* hv.c: Consistent spaces after dots in apidocs (Father Chrysostomos, 2011-12-31; 1 file, -7/+8)
* Remove method cache reset from hv.c:hv_free_ent (Father Chrysostomos, 2011-12-31; 1 file, -2/+0)
  It hasn’t been necessary since commit f50383f5. Before that it wasn’t sufficient. See commit 5743f2a for details. If a hash element is being deleted, S_hv_delete_common takes care of this. If a hash is being freed or cleared, hv_undef or hv_clear takes care of it.
* squash some code in hv.c:S_hv_delete_common (Father Chrysostomos, 2011-12-31; 1 file, -8/+2)
  Changes four commits ago made this possible.
* [perl #100340] Free hash entries before values on delete (Father Chrysostomos, 2011-12-31; 1 file, -4/+5)
  When a hash element is deleted in void context, if the value is freed before the hash entry, it is possible for a destructor to see the hash in an inconsistent state--inconsistent in that it contains entries that are about to be freed, with nothing to indicate that. So the destructor itself could free the very same hash entry (e.g., by freeing the hash), resulting in a double free, panic, or other unpleasantness.
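A hedged C sketch of the ordering described above (variable names are invented and the real S_hv_delete_common handles more cases; this only illustrates "entry first, value second"):

    /* Detach the value, dispose of the hash entry, and only then run the
     * value's destructor, so DESTROY never sees a freed SV in the hash. */
    SV *val = HeVAL(entry);
    HeVAL(entry) = &PL_sv_placeholder;  /* entry no longer owns the value */
    hv_free_ent(hv, entry);             /* the entry is gone from the hash */
    SvREFCNT_dec(val);                  /* destructors see a consistent hash */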
* Update method caches for non-void stash elem deletions (Father Chrysostomos, 2011-12-31; 1 file, -8/+10)
  Commit 71be2cb (inseparable changes from patch from perl5.003_13 to perl5.003_14) added code to hv_free_ent to reset method caches if necessary. Later, a bug fix in the deletion code (f50383f5) made it necessary for the value in the HE to be set to &PL_sv_placeholder, so it wouldn’t free the sv just yet. So the method cache code (which by then had changed from using PL_sub_generation to using mro_method_changed_in) got repeated in S_hv_delete_common.

  The only problem with all this is that hv_free_ent was the wrong place to put it to begin with. If delete is called in non-void context, the sv in it is not freed just yet, but returned; so hv_free_ent was already being called with a HE pointing to &PL_sv_placeholder. Commit f50383f5 only added the mro code for the void case, to avoid changing existing behaviour when rearranging the code. It turns out it needs to be done in S_hv_delete_common unconditionally.

  Changing a test in t/op/method.t to use ()=delete instead of delete makes it fail, because method caches end up going stale. See the test in the diff.
* hv.c: Make newHVhv work on tied hashes (Father Chrysostomos, 2011-12-24; 1 file, -4/+8)
* speed up feature-checking slightly (Father Chrysostomos, 2011-12-22; 1 file, -4/+10)
  When seeing whether the cop hint hash contains the given feature, Perl_feature_is_enabled only needs to see whether the hint hash element exists. It doesn’t need to turn it into a scalar.
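A rough sketch of an existence-only check, using the public hv_exists API rather than the internal hv_common path the core actually takes; the key name is only illustrative:

    /* Test for presence of the hint-hash element; no SV needs to be
     * fetched or created for a yes/no answer. */
    HV *hinthv = GvHV(PL_hintgv);            /* the compile-time %^H */
    const char *key = "feature_say";         /* illustrative key name */
    if (hinthv && hv_exists(hinthv, key, (I32)strlen(key))) {
        /* the feature is enabled in the current lexical scope */
    }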
* Copy hints from tied hh to inner compile scopes (Father Chrysostomos, 2011-12-21; 1 file, -2/+2)
  Entries from a tied %^H were not being copied to inner compile-time scopes, resulting in %^H appearing empty in BEGIN blocks, even though the underlying he chain *was* being propagated properly (so (caller)[10] at run time still worked).

  I was surprised that, in writing tests for this, I produced another crash. I thought I had fixed them with 95cf23680 and 7ef9d42ce. It turns out that pp_helem doesn’t support hashes with null values. (That’s a separate bug that needs fixing, since the XS API allows for them.) For now, cloning the hh properly stops pp_helem from getting a null value.
* Fixing crash in hint.t (Father Chrysostomos, 2011-12-20; 1 file, -4/+9)
  The test that was added in 95cf23680e tickled another bug in the same code in Perl_hv_copy_hints_hv as the one it fixed, but not on the committer’s machine. Not only can a HE from a tied hash have a null entry, but it can also have an SV for its key. Treating it as a hek and trying to read flags from it may result in other code being told to free something it shouldn’t, because the SV, when looked at as a hek, appeared to have the HVhek_FREEKEY flag.
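A hedged C sketch of telling the two kinds of HE key apart (the macros are from the public API; the surrounding logic is invented for illustration):

    /* A HE normally points at a HEK, but an entry produced by iterating a
     * tied hash may carry an SV key instead; HeKLEN() == HEf_SVKEY marks
     * that case, and such an entry must not be treated as a HEK. */
    SV *keysv;
    if (HeKLEN(entry) == HEf_SVKEY)
        keysv = newSVsv(HeKEY_sv(entry));          /* the key is an SV */
    else
        keysv = newSVpvn_flags(HeKEY(entry), HeKLEN(entry),
                               HeKUTF8(entry) ? SVf_UTF8 : 0);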
* [perl #106282] Don’t crash cloning tied %^H (Father Chrysostomos, 2011-12-20; 1 file, -1/+1)
  When hv_iternext_flags is called on a tied hash, the hash entry (HE) that it returns has no value. Perl_hv_copy_hints_hv, added in commit 5b9c067131, was assuming that it would have a value and calling sv_magic on it, resulting in a crash. Commit b50b205 made namespace::clean’s test suite crash, because strict.pm started using %^H. It was already possible to crash namespace::clean with other hh-using pragmata, like sort:

      # namespace::clean 0.21 only uses ties in the absence of B:H:EOS
      use Devel::Hide 'B::Hooks::EndOfScope';
      use sort "stable";
      use namespace::clean;
      use sort "stable";
      {;}

  It was possible to trigger the crash with no modules like this:

      package namespace::clean::_TieHintHash;

      sub TIEHASH  { bless[] }
      sub STORE    { $_[0][0]{$_[1]} = $_[2] }
      sub FETCH    { $_[0][0]{$_[1]} }
      sub FIRSTKEY { my $a = scalar keys %{$_[0][0]}; each %{$_[0][0]} }
      sub NEXTKEY  { each %{$_[0][0]} }

      package main;

      BEGIN {
          $^H{foo} = "bar";
          tie( %^H, 'namespace::clean::_TieHintHash' );
          $^H{foo} = "bar";
      }

      { ; }

  This commit puts in a simple null check before calling sv_magic. Tied hint hashes still do not work, but they now only work as badly as in 5.8 (i.e., they don’t crash). I don’t think tied hint hashes can ever be made to work properly, even if we do make Perl_hv_copy_hints_hv copy the hash properly, because in the scope where %^H is tied, the tie magic takes precedence over hint magic, preventing the underlying he chain from being updated. So hints set in that scope will just not stick.
* Use SvOOK_on (Father Chrysostomos, 2011-12-01; 1 file, -2/+1)
  Now that SvOOK_on has a usable definition (i.e., it leaves the NIOK flags alone), we can use it and remove the comments warning against it.
* Correct comment in hv.c:S_share_hek_flags (Father Chrysostomos, 2011-11-29; 1 file, -1/+1)
  It says that we allocate one block of memory with the HEK immediately following the HE, so we can find the HEK from the HE. Of course we can find the HEK from the HE, as the HE points to it. The two terms were apparently transposed.
* Correct spelling of double free warning (Father Chrysostomos, 2011-11-22; 1 file, -1/+1)
  The perldiag entry said ‘nonexistent’, which is correct. hv.c said ‘non-existent’, which is, well, questionable. They should be the same, so I corrected hv.c. I also added the %s%s to the end in perldiag.
* each() should not leave RITER set on empty hash (Father Chrysostomos, 2011-11-05; 1 file, -0/+1)
  Commit 900ac0519e (5.11.0) sped up keys() on an empty hash by modifying the iteration code not to loop through the buckets looking for an entry if the number of keys is 0. Interestingly, it had no visible effect on keys(), but it *did* have one on each(). Resetting the iterator’s current bucket number (RITER) used to be done inside that loop in hv_iternext. keys() always begins by resetting the iterator, so it was unaffected. But each %empty will leave the iterator as-is. It will be set on an empty hash if the last element was deleted while an iterator was active. This has buggy side-effects.

      $h{1} = 2;
      each %h;             # returns (1, 2)
      delete $h{1};
      each %h;             # returns false; should reset iterator
      $h{1}=2;
      print each %h, "\n"; # prints nothing

  Commit 3b37eb248 (5.15.0) changed the way S_hfreeentries works. (S_hfreeentries is called by all operators that empty hashes, such as %h=() and undef %h.) Now S_hfreeentries does nothing if the hash is empty. That change on its own should have been harmless, but the result was that even %h=() won’t reset RITER after each() has put things in an inconsistent state. This caused test failures in Text::Tabulate.

  So the solution, of course, is to complete the change made by 900ac0519e and reset the iterator properly in hv_iternext if the hash is empty.
* hv.c: comment typo (Father Chrysostomos, 2011-11-05; 1 file, -1/+1)
* document the behaviour of negative klen for hv_fetch and friends (Tony Cook, 2011-10-07; 1 file, -12/+20)
  This interface is unfortunate, but it's there and in use.
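For context, the convention being documented is that a negative klen marks the key as UTF-8 encoded, with its absolute value giving the length in bytes. A minimal sketch (illustrative key and value only):

    HV *hv = newHV();
    /* "café" as UTF-8: 5 bytes including the two-byte e-acute */
    const char *key = "caf\xc3\xa9";
    hv_store(hv, key, -5, newSViv(42), 0);   /* negative klen => UTF-8 key */
    SV **svp = hv_fetch(hv, key, -5, 0);     /* must use the same convention */
    if (svp)
        PerlIO_printf(Perl_debug_log, "%" IVdf "\n", SvIV(*svp));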
* Cast to signed before negating, to avoid compiler warnings (Brian Fraser, 2011-10-06; 1 file, -3/+3)
* Fix thinko in hek_eq_pvn_flags (Father Chrysostomos, 2011-10-06; 1 file, -1/+1)
  Doing memEQ(str1, str2, len2) without checking the length first will cause memEQ("forth","fort"...) to compare equal and memEQ("fort","forth"...) to read unallocated memory. This was only a potential future problem, as none of the callers reach this branch.
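The general pattern, shown as a small self-contained C sketch (plain memcmp stands in for Perl's memEQ macro):

    #include <string.h>

    /* Compare lengths before bytes; comparing with the longer length
     * would read past the end of the shorter buffer. */
    static int str_eq(const char *s1, size_t len1, const char *s2, size_t len2)
    {
        return len1 == len2 && memcmp(s1, s2, len1) == 0;
    }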
* hv.c: Stash-related UTF-8 cleanup. (Brian Fraser, 2011-10-06; 1 file, -20/+55)
  This adds a new static function to hv.c, hek_eq_pvn_flags, which replaces several memEQs. It also cleans up hv_name_set and has the relevant calls to hv_common and friends made UTF-8 aware. Finally, it changes share_hek() to modify the hash passed in if the pv was modified when downgrading.
* [perl #99660] Remove elems from hashes before freeing them (Father Chrysostomos, 2011-09-21; 1 file, -2/+6)
  Commit f50383f58 made the ‘HeVAL(entry) = &PL_sv_placeholder;’ in the hash-element-deletion code unconditional. In doing so, it put it after the if/else statement containing the SvREFCNT_dec. So the freed SV was visible in the hash to destructors called by SvREFCNT_dec.
* SVTYPEMASK must be cast to (svtype) when comparing to SvTYPE() (Chip Salzenberg, 2011-08-23; 1 file, -1/+1)
* SvREFCNT_dec already checks if the SV is non-NULL (Vincent Pit, 2011-08-02; 1 file, -1/+1)
* Rename store/fetch_cop_label as cop_* (Father Chrysostomos, 2011-07-16; 1 file, -7/+7)
  This makes them consistent with other functions that put the basic datum type first (like hv_*, sv_*, cophh_*). Since fetch_cop_label is marked as experimental (M), this change should be OK.
* Export store_cop_label for the perl compiler (Reini Urban, 2011-07-16; 1 file, -0/+18)
* [perl #93454] Free deleted iterator when freeing hash (Father Chrysostomos, 2011-06-24; 1 file, -2/+3)
  Commit 7d6175e, a fix-up after commit e0171a1a3 (which introduced hfree_next_entry), did not account for the fact that hfree_next_entry frees the hash iterator before removing and returning the next value. It changed the callers to check the number of keys to determine whether anything else needed to be freed, which meant that hfree_next_entry was called one time fewer than necessary on hashes whose current iterator had been deleted and which consequently appeared empty. This fixes that.

  I don’t know how to test it, but the string table warnings were causing test failures on VMS, so maybe that’s good enough.
* Completely free hashes containing nulls (Father Chrysostomos, 2011-06-12; 1 file, -4/+6)
  This fixes a regression introduced since 5.14.0, by commit e0171a1a3. The new Perl_hfree_next_entry function that that commit introduced returns the value of the hash element, or NULL if there are none left. If the value of the hash element is NULL, the two cases are indistinguishable.

  Before e0171a1a3, all the hash code took null values into account. mro_package_moved took advantage of that, stealing values out of a hash and leaving it to the freeing code to delete the elements. The two places that call Perl_hfree_next_entry (there was only one, S_hfreeentries, with commit e0171a1a3, but the following commit, 104d7b699c, made sv_clear call it, too) were not accounting for NULL values’ being returned, and could terminate early, resulting in memory leaks.

  One could argue that the perl core should not be assigning nulls to HeVAL, but HeVAL is part of the public API and there could be CPAN code assigning NULL to it, too. So the safest approach seems to be to modify Perl_hfree_next_entry’s callers to check the number of keys and not to attribute a significance to a returned NULL.
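A hedged sketch of the caller-side loop this implies (the prototype of Perl_hfree_next_entry is paraphrased from the surrounding commit descriptions and may not match the real one exactly):

    /* Drive the loop off the key count, not off the returned pointer,
     * because a NULL HeVAL is a legitimate return value here. */
    STRLEN index = 0;
    while (HvTOTALKEYS(hv)) {
        SV *val = Perl_hfree_next_entry(aTHX_ hv, &index);
        SvREFCNT_dec(val);   /* SvREFCNT_dec is a no-op on NULL */
    }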
* Allow restricted hashes containing COWs to be cleared (Father Chrysostomos, 2011-06-04; 1 file, -1/+2)
* Allow COW values to be deleted from restricted hashes (Father Chrysostomos, 2011-06-04; 1 file, -1/+2)
  I wonder how many other things a604c75 broke....
* Perl_refcounted_he_inc() needs a dVAR to compile with -DPERL_GLOBAL_STRUCT (Nicholas Clark, 2011-05-22; 1 file, -0/+1)
* [perl #85026] Deleting the current iterator in void context (Ton Hospel, 2011-05-19; 1 file, -10/+14)
  Looking at the delete code I see another strange thing. If the delete of the current iterator is done with the G_DISCARD flag, the corresponding value is not freed and survives until the lazy deleted entry gets removed on the next hash iteration. This is easily demonstrated like this:

      perl -wle '
          sub DESTROY { print "DESTROY" }
          %a = (a => bless[]);
          each %a;
          delete $a{a};
          print "END"
      '

  This prints:

      END
      DESTROY

  notice the difference with:

      perl -wle '
          sub DESTROY { print "DESTROY" }
          %hash = (a => bless[]);
          each %hash;
          $dummy = delete $hash{a};
          $dummy = 0;
          print "END"
      '

  This prints:

      DESTROY
      END

  This is easily solved by always replacing the deleted entry value with &PL_sv_placeholder. It actually simplifies the code a bit except for the fact that the mro_method_changed from free_hash_ent now has to be done in hv_delete.
* [perl #85026] deleting elements in a HASH iterator (Ton Hospel, 2011-05-19; 1 file, -2/+10)
  Internally a perl HASH is an array of singly linked chains of entries. Deleting an element means removing the correct chain entry by replacing the pointer to the removed entry with a pointer to the next entry, and then freeing the deleted entry.

  However, if the deleted element is the current entry, the deleted entry is kept after removing it from the chain and the LAZYDEL flag is set. Only on the next iteration is the element actually removed and the iterator set to the next entry.

  However, if you delete the current iterator and then delete the next element in the same chain, the "next" pointer of the iterator is not updated, because the iterator is not on the chain anymore. That means that when the next iteration looks up the iterator’s next pointer, it will point to the freed memory of the second element.

  This patch fixes the places where the delete is done. The drawback is that you must never forget to do the lazydel fixup at any place where the entry chain gets shortened.
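A hedged C sketch of the chain removal described above (the bucket-walk idiom follows hv.c's public macros; the LAZYDEL bookkeeping discussed in the message is deliberately left out):

    /* Each bucket is a singly linked chain of HEs. Deleting `entry` means
     * repointing whatever currently points at it to the entry after it. */
    HE **oentry = &(HvARRAY(hv)[hash & (I32) HvMAX(hv)]);
    for (; *oentry; oentry = &HeNEXT(*oentry)) {
        if (*oentry == entry) {
            *oentry = HeNEXT(entry);   /* unlink from the chain */
            break;
        }
    }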
* improve api entries for hv_clear() and hv_undef() (David Mitchell, 2011-05-19; 1 file, -2/+7)
  The =apidoc entries for hv_clear() and hv_undef() were a bit spartan. Make it clearer what the two functions actually do, and the relationship between them.
* ensure hash iterator gets deleted (David Mitchell, 2011-05-19; 1 file, -6/+3)
  The recent commits to make sv_clear() iterative when freeing a hash introduced a bug. If the hash only has one key, and that becomes the iterator, and is then deleted, then when the hash is freed the LAZYDEL feature is skipped, and the iterated hash value fails to get deleted. The fix is simple: check for LAZYDEL before returning if keys == 0.
* make hv freeing iterative rather than recursive (David Mitchell, 2011-05-19; 1 file, -0/+2)
  make sv_clear() iteratively free the elements of a hash, rather than recursing. This stops freeing deeply nested hash of hash structures from blowing the stack.

  This commit is relatively straightforward, now that a) the infrastructure is already in place in sv_clear to iteratively free AVs; b) the long sequence of commits leading up to this has provided us with the hfree_next_entry() function, which returns just the next sv in the hash that needs freeing.

  When starting to free a new hash, we have to save the old value of iter_sv somewhere; we do this by sticking it in the unused SvSTASH slot of the HV. This slot shouldn't get messed with, since by this time we've already called the destructor on this object, and we should have a refcount of zero, so no destructor should be able to see us to rebless us.

  Ideally we'd like to be able to save the old index into HvARRAY when freeing a new HV, but I couldn't think of anywhere to hide it. So we get sub-optimal scanning of the parent's HvARRAY when freeing hashes of hashes.
* add hfree_next_entry(), hv_free_ent_ret() (David Mitchell, 2011-05-19; 1 file, -54/+86)
  Move body of hfreeentries()' central loop into a new function, hfree_next_entry(); leaving hfreeentries() as a simple loop that calls hfree_next_entry() until there are no entries left. This will in future allow sv_clear() to free a hash iteratively rather than recursively.

  Similarly, turn hv_free_ent() into a thin wrapper around a new function, hv_free_ent_ret(), which doesn't free HeVAL(), but rather just returns the SV instead.
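A hedged sketch of the wrapper relationship being described (names and signatures are paraphrased; the real functions carry the usual static-linkage decorations and live inside hv.c):

    /* hv_free_ent() reduced to a thin wrapper: free the entry, then
     * dispose of the value that hv_free_ent_ret() hands back. */
    void
    hv_free_ent(pTHX_ HV *hv, HE *entry)
    {
        SV *val = hv_free_ent_ret(hv, entry);  /* frees the HE, returns HeVAL */
        SvREFCNT_dec(val);                     /* the wrapper drops the value */
    }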
* remove 'hfreeentries failed to free hash' panic (David Mitchell, 2011-05-19; 1 file, -6/+1)
  Currently perl attempts to clear a hash 100 times before panicking. So for example, if a naughty destructor keeps adding things back into the hash, this will eventually panic.

  Note that this can usually only occur with %h=() or undef(%h), since when freeing a hash, there's usually no reference to the hash that a destructor can use to mess with the hash.

  Remove this limit (so it may potentially loop forever). My reasoning is that (a) if the user wants to keep adding things back into the hash, who are we to stop her? (b) as part of the process of making sv_clear() non-recursive when freeing hashes, I'm trying to reduce the amount of state that must be maintained between each iteration.

  Note that arrays currently don't have a limit.
* hv_free_ent: free SV as last action (David Mitchell, 2011-05-19; 1 file, -1/+1)
  Move the freeing of the SV from near the beginning to the end of hv_free_ent(); i.e. free the HE before the SV. Ideally, this should cause no change in behaviour, but will make using an iterative HV freeing scheme easier.
* S_hfreeentries: collapse two loops (David Mitchell, 2011-05-19; 1 file, -56/+48)
  S_hfreeentries() has two nested infinite loops: the inner one does one sweep through all buckets, freeing all entries in each bucket; the outer loop repeats the process if new keys have been added in the meantime. Collapse these two into a single 'while (keys) {}' loop. Should be functionally the same, but simpler.
* hfreeentries zeros HvARRAY now, so skip manual (David Mitchell, 2011-05-19; 1 file, -12/+1)
  Formerly, hv_clear() and hv_undef() zeroed the contents of HvARRAY after calling hfreeentries(), but that sub now zeroes each element as a by-product of its deleting algorithm. So we can skip the Zero().
* make hash emptying non-atomic (David Mitchell, 2011-05-19; 1 file, -167/+41)
  Currently, when emptying a hash of its elements (e.g. via undef(%h), or %h=()), the HvARRAY field is temporarily zeroed, so that any destructors called on the freed elements see an empty hash. Change this so that they see any remaining elements. Thus, %h=() becomes more like C<delete $h{$_} for keys %h>.

  The atomic behaviour was introduced (by me) in 2003 with commit 2f86008e34264, to fix RT #3096. This concerned element destructors that messed with the hash being undeffed, causing unrefed var errors and the like. At the time, simply setting HvARRAY to null for the duration seemed like a simple fix. However, it didn't take account of destructors adding new elements to the list, thus re-creating HvARRAY. This was subsequently fixed. Then, the HvAUX structure was invented, which meant that optional hash fields were hidden away at the end of HvARRAY. This meant that hfreeentries() acquired a whole bunch of extra code to copy these fields around between the original HvARRAY and the new HvARRAY and then back again, and temporarily squirrelling the backref array in backref magic rather than in HvAUX.

  In short, hfreeentries() became a 200 line sprawling mess. This commit reduces it back to 70, and makes everything conceptually simpler. It does however change user-level visible behaviour (back to pre-2003), but note that the new behaviour now matches the behaviour that arrays have always had (i.e. destructors see a partially-emptied array).

  Note that backref magic for HVs is now always stored in HvAUX.
* unify PERL_HASH and PERL_HASH_INTERNAL (David Mitchell, 2011-05-19; 1 file, -23/+15)
  Those two macros expand into two large, almost identical chunks of code. The only difference between the two is the source of the hash seed. So parameterize this into a new PERL_HASH_INTERNAL_() macro. Also, there are a couple of places in hv.c that do the rough equivalent of

      if (HvREHASH(hv))
          key = PERL_HASH_INTERNAL(...)
      else
          key = PERL_HASH(...)

  which incorporates two complete macro expansions into the code. Reorganise them to be

      key = PERL_HASH_INTERNAL_(..., HvREHASH(hv))
* simplify hv_clear (David Mitchell, 2011-05-19; 1 file, -11/+10)
  Don't use a goto when an else will do, i.e. replace

      if (..) {
          A
          goto reset
      }
      B
      reset:

  with

      if (..) {
          A
      }
      else {
          B
      }
* Clean: Actually use HvUSEDKEYS() instead of HvKEYS() (Michael Witten, 2011-05-18; 1 file, -4/+4)
  This:

      commit 8aacddc1ea3837f8f1a911d90c644451fc7cfc86
      Author: Nick Ing-Simmons <nik@tiuk.ti.com>
      Date:   Tue Dec 18 15:55:22 2001 +0000

          Tidied version of Jeffrey Friedl's <jfriedl@yahoo.com> restricted hashes
          - added delete of READONLY value inhibit & test for same
          - re-tabbed

          p4raw-id: //depot/perlio@13760

  essentially deprecated HvKEYS() in favor of HvUSEDKEYS(); this is explained in line 144 (now 313) of file `hv.h':

      /*
       * HvKEYS gets the number of keys that actually exist(), and is provided
       * for backwards compatibility with old XS code. The core uses HvUSEDKEYS
       * (keys, excluding placeholdes) and HvTOTALKEYS (including placeholders)
       */

  This commit simply puts that into practice, and is equivalent to running the following (at least with a35ef416833511da752c4b5b836b7a8915712aab checked out):

      git grep -l HvKEYS | sed /hv.h/d | xargs sed -i s/HvKEYS/HvUSEDKEYS/

  Notice that HvKEYS is currently just an alias for HvUSEDKEYS:

      $ git show a35ef416833511da752c4b5b836b7a8915712aab:hv.h | sed -n 318p
      #define HvKEYS(hv)  HvUSEDKEYS(hv)

  According to `make tests': All tests successful.
* Eliminate C variables unused since 4d0fbddde6c5dcb9 refactored HvFILL() (Nicholas Clark, 2011-05-18; 1 file, -6/+2)
  3 functions had C variables previously used to track the number of hash chains that have any entries. 4d0fbddde6c5dcb9 refactored the hash implementation to calculate this on demand, instead of tracking changes to it on hash updates. That change missed eliminating those variables, as gcc prior to 4.6.0 didn't actually warn that they weren't used, because (technically) they aren't unused - they are assigned to, but never read.

  gcc (at least 4.3.2 and 4.6.0) generates identical object code with/without this change, implying that its optimiser correctly eliminates the code. Other optimisers may be similar, in which case there's no runtime saving from this change.
* Followup to 088225f/[perl #88132]: packages ending with : (Father Chrysostomos, 2011-04-15; 1 file, -2/+7)
  Commit 088225f was not sufficient to fix the regression. It still exists for packages whose names end with a single colon. I discovered this when trying to determine why RDF::Trine was crashing with 5.14-to-be. In trying to write tests for it, I ended up triggering the same crash that RDF::Trine is having, but in a different way. In the end, it was easier to fix about three or four bugs (depending on how you count them), rather than try to fix only the regression that #88132 deals with (isa caches not updating when packages ending with colons are aliased), as they are all intertwined. The changes are as follows:

  Concerning the if (!(flags & ~GV_NOADD_MASK)...) statement in gv_stashpvn: Normally, gv_fetchpvn_flags (which it calls and whose retval is assigned to tmpgv) returns NULL if it has not been told to add anything and if the gv requested looks like a stash gv (ends with ::). If the number of colons is odd (foo:::), that code path is bypassed, so gv_stashpvn returns a GV without a hash. So gv_stashpvn tries to use that NULL hash and crashes. It should instead return NULL, to be consistent with the two-colon case.

  Blindly assigning a name to a stash does not work if the stash has multiple effective names. A call to mro_package_moved is required as well. So what gv_stashpvn was doing was insufficient.

  The parts of the mro code that check for globs or stash elems that contain stashes by looking for :: at the end of the name now take into account that the name might consist of a single : instead.