path: root/hv.c
* each() should not leave RITER set on empty hash (Father Chrysostomos, 2011-11-05; 1 file changed, -0/+1)

Commit 900ac0519e (5.11.0) sped up keys() on an empty hash by modifying the iteration code not to loop through the buckets looking for an entry if the number of keys is 0. Interestingly, it had no visible effect on keys(), but it *did* have one on each(). Resetting the iterator's current bucket number (RITER) used to be done inside that loop in hv_iternext. keys() always begins by resetting the iterator, so it was unaffected. But each %empty will leave the iterator as-is. It will be set on an empty hash if the last element was deleted while an iterator was active. This has buggy side-effects:

    $h{1} = 2;
    each %h;                # returns (1, 2)
    delete $h{1};
    each %h;                # returns false; should reset iterator
    $h{1} = 2;
    print each %h, "\n";    # prints nothing

Then commit 3b37eb248 (5.15.0) changed the way S_hfreeentries works. (S_hfreeentries is called by all operators that empty hashes, such as %h=() and undef %h.) Now S_hfreeentries does nothing if the hash is empty. That change on its own should have been harmless, but the result was that even %h=() won't reset RITER after each() has put things in an inconsistent state. This caused test failures in Text::Tabulate.

So the solution, of course, is to complete the change made by 900ac0519e and reset the iterator properly in hv_iternext if the hash is empty.

* hv.c: comment typo (Father Chrysostomos, 2011-11-05; 1 file changed, -1/+1)

* document the behaviour of negative klen for hv_fetch and friends (Tony Cook, 2011-10-07; 1 file changed, -12/+20)

This interface is unfortunate, but it's there and in use.

* Cast to signed before negating, to avoid compiler warnings (Brian Fraser, 2011-10-06; 1 file changed, -3/+3)

* Fix thinko in hek_eq_pvn_flags (Father Chrysostomos, 2011-10-06; 1 file changed, -1/+1)

Doing memEQ(str1, str2, len2) without checking the length first will cause memEQ("forth", "fort"...) to compare equal and memEQ("fort", "forth"...) to read unallocated memory. This was only a potential future problem, as none of the callers reach this branch.

* hv.c: Stash-related UTF-8 cleanup. (Brian Fraser, 2011-10-06; 1 file changed, -20/+55)

This adds a new static function to hv.c, hek_eq_pvn_flags, which replaces several memEQs. It also cleans up hv_name_set and has the relevant calls to hv_common and friends made UTF-8 aware. Finally, it changes share_hek() to modify the hash passed in if the pv was modified when downgrading.

* [perl #99660] Remove elems from hashes before freeing them (Father Chrysostomos, 2011-09-21; 1 file changed, -2/+6)

Commit f50383f58 made the 'HeVAL(entry) = &PL_sv_placeholder;' in the hash-element-deletion code unconditional. In doing so, it put it after the if/else statement containing the SvREFCNT_dec. So the freed SV was visible in the hash to destructors called by SvREFCNT_dec.

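An informal illustration (not taken from the commit; the Peek package and the key name are invented): the risky pattern is a destructor, fired by the refcount drop inside a delete, that looks back into the hash being modified. With this fix the element is neutralised (set to &PL_sv_placeholder) before the value is freed, so such a destructor should no longer be able to reach the freed SV through the hash.

    my %h;
    sub Peek::DESTROY {
        # Fired from inside the delete below; after the fix the element has
        # already been replaced with a placeholder, so this lookup cannot
        # hand back the SV that is in the middle of being freed.
        print exists $h{k} ? "entry still visible\n" : "entry already gone\n";
    }
    %h = (k => bless([], 'Peek'));
    delete $h{k};    # void context: the value is freed during the delete itself
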
* SVTYPEMASK must be cast to (svtype) when comparing to SvTYPE() (Chip Salzenberg, 2011-08-23; 1 file changed, -1/+1)

* SvREFCNT_dec already checks if the SV is non-NULL (Vincent Pit, 2011-08-02; 1 file changed, -1/+1)

* Rename store/fetch_cop_label as cop_* (Father Chrysostomos, 2011-07-16; 1 file changed, -7/+7)

This makes them consistent with other functions that put the basic datum type first (like hv_*, sv_*, cophh_*). Since fetch_cop_label is marked as experimental (M), this change should be OK.

* Export store_cop_label for the perl compiler (Reini Urban, 2011-07-16; 1 file changed, -0/+18)

* [perl #93454] Free deleted iterator when freeing hash (Father Chrysostomos, 2011-06-24; 1 file changed, -2/+3)

Commit 7d6175e, which did a fix-up after commit e0171a1a3, which introduced hfree_next_entry, did not account for the fact that hfree_next_entry frees the hash iterator before removing and returning the next value. It changed the callers to check the number of keys to determine whether anything else needed to be freed, which meant that hfree_next_entry was called one time less than necessary on hashes whose current iterator had been deleted and which consequently appeared empty. This fixes that.

I don't know how to test it, but the string table warnings were causing test failures on VMS, so maybe that's good enough.

* Completely free hashes containing nulls (Father Chrysostomos, 2011-06-12; 1 file changed, -4/+6)

This fixes a regression introduced since 5.14.0, by commit e0171a1a3. The new Perl_hfree_next_entry function that that commit introduced returns the value of the hash element, or NULL if there are none left. If the value of the hash element is NULL, the two cases are indistinguishable.

Before e0171a1a3, all the hash code took null values into account. mro_package_moved took advantage of that, stealing values out of a hash and leaving it to the freeing code to delete the elements.

The two places that call Perl_hfree_next_entry (there was only one, S_hfreeentries, with commit e0171a1a3, but the following commit, 104d7b699c, made sv_clear call it, too) were not accounting for NULL values' being returned, and could terminate early, resulting in memory leaks.

One could argue that the perl core should not be assigning nulls to HeVAL, but HeVAL is part of the public API and there could be CPAN code assigning NULL to it, too. So the safest approach seems to be to modify Perl_hfree_next_entry's callers to check the number of keys and not to attribute a significance to a returned NULL.

* Allow restricted hashes containing COWs to be cleared (Father Chrysostomos, 2011-06-04; 1 file changed, -1/+2)

* Allow COW values to be deleted from restricted hashes (Father Chrysostomos, 2011-06-04; 1 file changed, -1/+2)

I wonder how many other things a604c75 broke....

* Perl_refcounted_he_inc() needs a dVAR to compile with -DPERL_GLOBAL_STRUCT (Nicholas Clark, 2011-05-22; 1 file changed, -0/+1)

* [perl #85026] Deleting the current iterator in void context (Ton Hospel, 2011-05-19; 1 file changed, -10/+14)

Looking at the delete code I see another strange thing. If the delete of the current iterator is done with the G_DISCARD flag, the corresponding value is not freed and survives until the lazily deleted entry gets removed on the next hash iteration. This is easily demonstrated like this:

    perl -wle '
        sub DESTROY { print "DESTROY" }
        %a = (a => bless []);
        each %a;
        delete $a{a};
        print "END"
    '

This prints:

    END
    DESTROY

Notice the difference with:

    perl -wle '
        sub DESTROY { print "DESTROY" }
        %hash = (a => bless []);
        each %hash;
        $dummy = delete $hash{a};
        $dummy = 0;
        print "END"
    '

This prints:

    DESTROY
    END

This is easily solved by always replacing the deleted entry value with &PL_sv_placeholder. It actually simplifies the code a bit, except that the mro_method_changed handling from hv_free_ent now has to be done in hv_delete.

* [perl #85026] deleting elements in a HASH iterator (Ton Hospel, 2011-05-19; 1 file changed, -2/+10)

Internally a perl HASH is an array of singly linked chains of entries. Deleting an element means removing the correct chain entry by replacing the pointer to the removed entry with a pointer to the next entry, and then freeing the deleted entry.

However, if the deleted element is the current entry, the deleted entry is kept after removing it from the chain and the LAZYDEL flag is set. Only on the next iteration is the element actually removed and the iterator set to the next entry.

However, if you delete the current iterator and then delete the next element in the same chain, the "next" pointer of the iterator is not updated, because the iterator is not on the chain any more. That means that when the next iteration looks up the iterator's next pointer, it will point to the freed memory of the second element.

This patch fixes the places where the delete is done. The drawback is that you must never forget to do the lazy-delete fixup at any place where the entry chain gets shortened.

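A small Perl illustration of the iteration rules involved (not part of the commit; the key names are arbitrary): deleting the key most recently returned by each() is the documented safe case, while deleting other keys mid-iteration, in particular one sharing a bucket chain with a lazily deleted current entry, is the pattern that exercised this bug.

    my %h = map { $_ => 1 } 'a' .. 'f';
    while (my ($k, $v) = each %h) {
        delete $h{$k};    # deleting the key just returned by each() is documented as safe
        # Deleting a *different* key here gives unspecified iteration results,
        # and was exactly the kind of access that could leave the iterator's
        # "next" pointer aimed at freed memory before this fix.
    }
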
* improve api entries for hv_clear() and hv_undef() (David Mitchell, 2011-05-19; 1 file changed, -2/+7)

The =apidoc entries for hv_clear() and hv_undef() were a bit spartan. Make it clearer what the two functions actually do, and the relationship between them.

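For orientation only (this mapping is a simplification, not part of the commit): at the Perl level the two functions correspond roughly to list-assigning an empty list versus undefining the hash.

    my %h = (a => 1, b => 2);
    %h = ();     # roughly hv_clear(): empty the hash but keep it usable
    undef %h;    # roughly hv_undef(): also free the underlying storage
                 # (and, for a stash, forget its name)
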
* ensure hash iterator gets deleted (David Mitchell, 2011-05-19; 1 file changed, -6/+3)

The recent commits to make sv_clear() iterative when freeing a hash introduced a bug. If the hash only has one key, and that becomes the iterator, and is then deleted, then when the hash is freed, the LAZYDEL handling is skipped and the iterated hash value fails to get deleted. The fix is simple: check for LAZYDEL before returning when keys == 0.

* make hv freeing iterative rather than recursive (David Mitchell, 2011-05-19; 1 file changed, -0/+2)

Make sv_clear() iteratively free the elements of a hash, rather than recursing. This stops freeing deeply nested hash-of-hash structures from blowing the stack.

This commit is relatively straightforward, now that a) the infrastructure is already in place in sv_clear to iteratively free AVs, and b) the long sequence of commits leading up to this has provided us with the hfree_next_entry() function, which returns just the next sv in the hash that needs freeing.

When starting to free a new hash, we have to save the old value of iter_sv somewhere; we do this by sticking it in the unused SvSTASH slot of the HV. This slot shouldn't get messed with, since by this time we've already called the destructor on this object, and we should have a refcount of zero, so no destructor should be able to see us to rebless us.

Ideally we'd like to be able to save the old index into HvARRAY when freeing a new HV, but I couldn't think of anywhere to hide it. So we get sub-optimal scanning of the parent's HvARRAY when freeing hashes of hashes.

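An informal way to picture the problem this solves (the depth and key name below are arbitrary, illustrative values):

    my %outer;
    my $p = \%outer;
    $p = $p->{next} = {} for 1 .. 100_000;    # chain 100,000 hashes inside one another
    undef $p;                                 # drop the extra reference to the innermost hash
    undef %outer;                             # recursive freeing of a chain this deep could
                                              # exhaust the C stack; iterative freeing cannot
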
* add hfree_next_entry(), hv_free_ent_ret() (David Mitchell, 2011-05-19; 1 file changed, -54/+86)

Move the body of hfreeentries()' central loop into a new function, hfree_next_entry(), leaving hfreeentries() as a simple loop that calls hfree_next_entry() until there are no entries left. This will in future allow sv_clear() to free a hash iteratively rather than recursively.

Similarly, turn hv_free_ent() into a thin wrapper around a new function, hv_free_ent_ret(), which doesn't free HeVAL(), but rather just returns the SV instead.

* remove 'hfreeentries failed to free hash' panic (David Mitchell, 2011-05-19; 1 file changed, -6/+1)

Currently perl attempts to clear a hash 100 times before panicking. So for example, if a naughty destructor keeps adding things back into the hash, this will eventually panic.

Note that this can usually only occur with %h=() or undef(%h), since when freeing a hash, there's usually no reference to the hash that a destructor can use to mess with the hash.

Remove this limit (so it may potentially loop forever). My reasoning is that (a) if the user wants to keep adding things back into the hash, who are we to stop her? (b) as part of the process of making sv_clear() non-recursive when freeing hashes, I'm trying to reduce the amount of state that must be maintained between each iteration.

Note that arrays currently don't have a limit.

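A minimal sketch of the "naughty destructor" case (the Naughty package and key names are made up): each sweep of the clearing loop frees the blessed value, whose destructor re-adds a plain key, so the hash takes more than one sweep to empty.

    my %h;
    sub Naughty::DESTROY { $h{again} = 1 }    # re-populate the hash while it is being cleared
    %h = (start => bless([], 'Naughty'));
    %h = ();    # previously capped at 100 sweeps; now simply loops until the hash is empty
    print scalar(keys %h), "\n";    # 0 - the re-added key is swept up on a later pass
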
* hv_free_ent: free SV as last action (David Mitchell, 2011-05-19; 1 file changed, -1/+1)

Move the freeing of the SV from near the beginning to the end of hv_free_ent(); i.e. free the HE before the SV. Ideally, this should cause no change in behaviour, but it will make using an iterative HV freeing scheme easier.

* S_hfreeentries: collapse two loops (David Mitchell, 2011-05-19; 1 file changed, -56/+48)

S_hfreeentries() has two nested infinite loops: the inner one does one sweep through all buckets, freeing all entries in each bucket; the outer loop repeats the process if new keys have been added in the meantime. Collapse these two into a single 'while (keys) {}' loop. Should be functionally the same, but simpler.

* hfreeentries zeros HvARRAY now, so skip manual (David Mitchell, 2011-05-19; 1 file changed, -12/+1)

Formerly, hv_clear() and hv_undef() zeroed the contents of HvARRAY after calling hfreeentries(), but that sub now zeroes each element as a by-product of its deleting algorithm. So we can skip the Zero().

* make hash emptying non-atomic (David Mitchell, 2011-05-19; 1 file changed, -167/+41)

Currently, when emptying a hash of its elements (e.g. via undef(%h), or %h=()), the HvARRAY field is temporarily zeroed, so that any destructors called on the freed elements see an empty hash. Change this so that they see any remaining elements. Thus, %h=() becomes more like C<delete $h{$_} for keys %h>.

The atomic behaviour was introduced (by me) in 2003 with commit 2f86008e34264, to fix RT #3096. This concerned element destructors that messed with the hash being undeffed, causing unrefed-var errors and the like. At the time, simply setting HvARRAY to null for the duration seemed like a simple fix. However, it didn't take account of destructors adding new elements to the list, thus re-creating HvARRAY. This was subsequently fixed. Then the HvAUX structure was invented, which meant that optional hash fields were hidden away at the end of HvARRAY. This meant that hfreeentries() acquired a whole bunch of extra code to copy these fields around between the original HvARRAY and the new HvARRAY and then back again, and to temporarily squirrel the backref array away in backref magic rather than in HvAUX.

In short, hfreeentries() became a 200-line sprawling mess. This commit reduces it back to 70, and makes everything conceptually simpler.

It does however change user-level visible behaviour (back to pre-2003), but note that the new behaviour now matches the behaviour that arrays have always had (i.e. destructors see a partially-emptied array).

Note that backref magic for HVs is now always stored in HvAUX.

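A hypothetical demonstration of the visible change (the Watcher package and keys are invented): under the old atomic scheme a destructor fired during %h = () always saw an empty hash; under the new scheme it may see whichever elements have not yet been freed, matching what arrays have always done.

    our %h;
    sub Watcher::DESTROY {
        print "destructor sees ", scalar(keys %h), " remaining key(s)\n";
    }
    %h = (obj => bless([], 'Watcher'), plain1 => 1, plain2 => 2);
    %h = ();    # the count printed may now be non-zero
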
* unify PERL_HASH and PERL_HASH_INTERNAL (David Mitchell, 2011-05-19; 1 file changed, -23/+15)

Those two macros expand into two large, almost identical chunks of code. The only difference between the two is the source of the hash seed. So parameterize this into a new PERL_HASH_INTERNAL_() macro.

Also, there are a couple of places in hv.c that do the rough equivalent of

    if (HvREHASH(hv))
        key = PERL_HASH_INTERNAL(...)
    else
        key = PERL_HASH(...)

which incorporates two complete macro expansions into the code. Reorganise them to be

    key = PERL_HASH_INTERNAL_(..., HvREHASH(hv))

* simplify hv_clear (David Mitchell, 2011-05-19; 1 file changed, -11/+10)

Don't use a goto when an else will do, i.e. replace

    if (..) {
        A
        goto reset;
    }
    B
    reset:

with

    if (..) {
        A
    }
    else {
        B
    }

* Clean: Actually use HvUSEDKEYS() instead of HvKEYS() (Michael Witten, 2011-05-18; 1 file changed, -4/+4)

This:

    commit 8aacddc1ea3837f8f1a911d90c644451fc7cfc86
    Author: Nick Ing-Simmons <nik@tiuk.ti.com>
    Date:   Tue Dec 18 15:55:22 2001 +0000

        Tidied version of Jeffrey Friedl's <jfriedl@yahoo.com> restricted hashes
        - added delete of READONLY value inhibit & test for same
        - re-tabbed
        p4raw-id: //depot/perlio@13760

essentially deprecated HvKEYS() in favor of HvUSEDKEYS(); this is explained in line 144 (now 313) of file `hv.h':

    /*
     * HvKEYS gets the number of keys that actually exist(), and is provided
     * for backwards compatibility with old XS code. The core uses HvUSEDKEYS
     * (keys, excluding placeholdes) and HvTOTALKEYS (including placeholders)
     */

This commit simply puts that into practice, and is equivalent to running the following (at least with a35ef416833511da752c4b5b836b7a8915712aab checked out):

    git grep -l HvKEYS | sed /hv.h/d | xargs sed -i s/HvKEYS/HvUSEDKEYS/

Notice that HvKEYS is currently just an alias for HvUSEDKEYS:

    $ git show a35ef416833511da752c4b5b836b7a8915712aab:hv.h | sed -n 318p
    #define HvKEYS(hv)		HvUSEDKEYS(hv)

According to `make tests': All tests successful.

* Eliminate C variables unused since 4d0fbddde6c5dcb9 refactored HvFILL() (Nicholas Clark, 2011-05-18; 1 file changed, -6/+2)

3 functions had C variables previously used to track the number of hash chains that have any entries. 4d0fbddde6c5dcb9 refactored the hash implementation to calculate this on demand, instead of tracking changes to it on hash updates. That change missed eliminating those variables, as gcc prior to 4.6.0 didn't actually warn that they weren't used, because (technically) they aren't unused - they are assigned to, but never read.

gcc (at least 4.3.2 and 4.6.0) generates identical object code with/without this change, implying that its optimiser correctly eliminates the code. Other optimisers may be similar, in which case there's no runtime saving from this change.

* Followup to 088225f/[perl #88132]: packages ending with : (Father Chrysostomos, 2011-04-15; 1 file changed, -2/+7)

Commit 088225f was not sufficient to fix the regression. It still exists for packages whose names end with a single colon. I discovered this when trying to determine why RDF::Trine was crashing with 5.14-to-be. In trying to write tests for it, I ended up triggering the same crash that RDF::Trine is having, but in a different way. In the end, it was easier to fix about three or four bugs (depending on how you count them) rather than try to fix only the regression that #88132 deals with (isa caches not updating when packages ending with colons are aliased), as they are all intertwined. The changes are as follows:

Concerning the if (!(flags & ~GV_NOADD_MASK)...) statement in gv_stashpvn: Normally, gv_fetchpvn_flags (which it calls and whose retval is assigned to tmpgv) returns NULL if it has not been told to add anything and if the gv requested looks like a stash gv (ends with ::). If the number of colons is odd (foo:::), that code path is bypassed, so gv_stashpvn returns a GV without a hash. So gv_stashpvn tries to use that NULL hash and crashes. It should instead return NULL, to be consistent with the two-colon case.

Blindly assigning a name to a stash does not work if the stash has multiple effective names. A call to mro_package_moved is required as well. So what gv_stashpvn was doing was insufficient.

The parts of the mro code that check for globs or stash elems that contain stashes by looking for :: at the end of the name now take into account that the name might consist of a single : instead.

* Clean: Move old comment to proper location (Michael Witten, 2011-03-19; 1 file changed, -6/+6)

This:

    commit 0298d7b92741692bcf2e34c418a564332bb034e6
    Date:   Tue May 31 10:40:01 2005 +0000

        Avoid updating a variable in a loop. Only calculate the number of links
        in a hash bucket chain if we really need it.
        p4raw-id: //depot/perl@24648

forgot to move a large comment to its new location; this new commit fixes that.

* Fix typos (spelling errors) in Perl sources. (Peter J. Acklam, via RT, 2011-01-07; 1 file changed, -1/+1)

    # New Ticket Created by (Peter J. Acklam)
    # Please include the string: [perl #81904]
    # in the subject line of all future correspondence about this issue.
    # <URL: http://rt.perl.org/rt3/Ticket/Display.html?id=81904 >

Signed-off-by: Abigail <abigail@abigail.be>

* Fix compiler warning in hv.c on MSWin32 (Jerry D. Hedden, 2010-12-01; 1 file changed, -1/+1)

Fixes the following seen in a Steve Hay smoke test:

    Compiler messages(MSWin32):
    ..\hv.c(1646) : warning C4244: 'initializing' : conversion from 'unsigned long' to 'const char', possible loss of data

* Fix memory leak in hfreeentries (Father Chrysostomos, 2010-11-30; 1 file changed, -16/+25)

The change that made hfreeentries keep the name in place when iterating (2d0d1ecc) caused this statement at the end of the loop to be a no-op for named hashes, because the HvARRAY is always present at the end of the loop (it contains the name):

    if (!HvARRAY(hv)) {
        /* Good. No-one added anything this time round. */
        break;
    }

So this line was added (by the same change) before the freeing of the linked lists:

    /* If there are no keys, there is nothing left to free. */
    if (!((XPVHV*) SvANY(hv))->xhv_keys)
        break;

But that means that this, immediately after the freeing of the linked lists and just before the if(!HvARRAY(hv)):

    if (array != orig_array) {
        Safefree(array);
    }

was not being reached, resulting in a memory leak (that Nicholas Clark found).

This is what would happen: On entering hfreeentries, orig_array would be assigned the value in HvARRAY:

    HvARRAY    = original array
    orig_array = original array

Then the main loop would be entered, which would assign HvARRAY to array:

    HvARRAY    = original array
    orig_array = original array
    array      = original array

HvARRAY would be nulled out and assigned a new value by hv_auxinit:

    HvARRAY    = first new array
    orig_array = original array
    array      = original array

Then the loop would repeat:

    HvARRAY    = first new array
    orig_array = original array
    array      = first new array

Then the HvARRAY would once more be nulled and replaced via hv_auxinit:

    HvARRAY    = second new array
    orig_array = original array
    array      = first new array

Then the if(no keys)break; statement would be reached, exiting the loop:

    HvARRAY    = second new array
    orig_array = original array
    <nothing>  = first new array

So the first new array is never freed.

This commit skips the allocation of an extra array at the beginning of the loop if there are no keys. Then it exits early at the same spot.

* Use newSVpvs_flags() instead of sv_2mortal(newSVpvs()) (Nicholas Clark, 2010-11-25; 1 file changed, -1/+1)

And similarly for newSVpvn() for a known length.

* Convert xhv_name in struct xpvhv_aux to be a union of HEK* and HEK** (Nicholas Clark, 2010-11-24; 1 file changed, -36/+47)

This avoids a lot of casting. Nothing outside the perl core code is accessing that member directly.

* Add flags param to hv_ename_* (Father Chrysostomos, 2010-11-20; 1 file changed, -2/+4)

We will need this for making the API UTF8-aware in 5.16 or whenever.

* S_hfreeentries: keep OOK off unless adding something to aux (Father Chrysostomos, 2010-11-20; 1 file changed, -1/+1)

This small optimisation allows hv_undef to skip its if(SvOOK()) block, and all the checks inside it, much of the time.

* Eliminate the newname param from mro_package_moved (Father Chrysostomos, 2010-11-20; 1 file changed, -2/+2)

Nothing is using this any more, as of the previous commit.

* Make hv_undef leave HvENAME alone (Father Chrysostomos, 2010-11-20; 1 file changed, -20/+66)

...unless called from sv_clear. This is necessary as an undeffed stash, though it nominally becomes just a plain hash and is not a stash any more, is still to be found in the symbol table. It may even be in multiple places. HvENAME's raison d'être is to keep track of this. If the effective name is deleted, then things can get out of sync, as the test in the commit demonstrates. This can cause problems if the hash is turned back into a stash.

This does not change the deletion of the HvNAME, which is the only difference between hv_clear and hv_undef on stashes that is visible from Perl. caller still returns (unknown) or __ANON__::....

I tried to make this into several small commits, but each part of it breaks things without the other parts, so this is one big commit. These are the various parts:

• hv_undef no longer calls mro_package_moved directly, as it deletes the effective name of the stash. It must only be called on substashes, so hfreeentries has been modified to do that.

• hv_name_set, which has erased the HvENAME when passed a null arg for the value ever since effective names were added (a special case put in just for hv_undef), now leaves the HvENAME alone, unless the new HV_NAME_SETALL flag (set to 2 to allow for UTF8 in future) is passed.

• hv_undef does not delete the name before the call to hfreeentries during global destruction. That extra name deletion was added when hfreeentries stopped hiding the name, as CVs won't be anonymised properly if they see it. It does not matter where the CVs point if they are to be freed shortly. This is just a speed optimisation, as it allows the name and effective name to be deleted in one fell swoop. Deleting just the name (not the effective name) can require a memory allocation.

• hv_undef calls mro_isa_changed_in as it used to (before it started using mro_package_moved), but now it happens after the entries are freed. Calling it first, as 5.13.6 and earlier versions did, was simply wrong.

• Both names are deleted from PL_stashcache. I inadvertently switched it back and forth between the two names in previous commits. Since it needed to be accounted for, it made no sense to omit it, as that would just complicate things. (I think PL_stashcache is buggy, though I have yet to come up with a test case.)

• sv_clear now calls Perl_hv_undef_flags with the HV_NAME_SETALL flag, which is passed through to the second hv_name_set call, after hfreeentries. That determines whether the effective names are deleted.

• The changes at the end of hv_undef consist of pussyfooting to avoid unnecessary work. They make sure that everything is freed that needs to be and nothing is freed that must not be.

* hv_undef .= _flags (Father Chrysostomos, 2010-11-20; 1 file changed, -1/+1)

Add flags param to hv_undef. There is no mathom, as the changes that this will support are by no means suitable for maint.

* Fix hv_name_set when there is an HvENAME (Father Chrysostomos, 2010-11-20; 1 file changed, -4/+7)

This code was completely wrong and could even crash. This is not currently reached.

* Don’t skip mro_package_moved if the parent stash is renamed (Father Chrysostomos, 2010-11-16; 1 file changed, -22/+10)

This stops S_hv_delete_common from skipping the call to mro_package_moved if the HvNAME of the stash containing the deleted glob is no longer valid, but the stash is still attached to some other part of the symbol table.

* Fix memory leak introduced by 2d0d1eccfcfe (Father Chrysostomos, 2010-11-15; 1 file changed, -2/+2)

If HvENAME was set by a destructor, it needs to be freed as well. hv_name_set(whatever, NULL, ...) does that.

* Keep MRO caches around during hv_clear (Father Chrysostomos, 2010-11-15; 1 file changed, -25/+33)

This allows it to delete PL_isarev entries. mro_isa_changed_in only deletes items mentioned in HvMROMETA(hv)->isa, so it must be present.

* [perl #79208] %stash:: = () anonymises CVs (Father Chrysostomos, 2010-11-15; 1 file changed, -54/+54)

This keeps stash names visible during %foo:: = (). This fixes @ISA assignment inside a DESTROY method triggered by %foo:: = () and also lets existing CVs retain their pointers to the stash.

So

    %foo:: = ()

is now equivalent to

    delete $foo::{$_} for keys %foo::

* Eliminate PL_dirty (Florian Ragwitz, 2010-11-14; 1 file changed, -1/+1)

It now only exists as a compatibility macro for extensions that want to introspect it.

* Fix undef %Foo:: to update subclasses (Father Chrysostomos, 2010-11-13; 1 file changed, -1/+6)

This is something I think I broke with 80ebaca, which made sure that isa linearisations were cached on subclasses after calls to mro_isa_changed_in (so the data could be used to delete isarev entries). The result is that hv_undef, which calls mro_isa_changed_in before deleting everything, was updating the subclasses' isa caches while its own @ISA was still visible.
