author     David Mitchell <davem@iabyn.com>   2011-05-10 17:24:29 +0100
committer  David Mitchell <davem@iabyn.com>   2011-05-19 14:49:43 +0100
commit     ee872193302939c724fd6c2c18071c621bfac6c4 (patch)
tree       9649f1d0a1d70dd68afd4642f6b90f78fb405ff0 /hv.c
parent     272e8453abcb0fceb34b1464670386e03a1f55bb (diff)
download   perl-ee872193302939c724fd6c2c18071c621bfac6c4.tar.gz
remove 'hfreeentries failed to free hash' panic
Currently perl attempts to clear a hash 100 times before panicking. So, for example, if a naughty destructor keeps adding things back into the hash, this will eventually panic. Note that this can usually only occur with %h=() or undef(%h), since when freeing a hash there's usually no reference to the hash left that a destructor can use to mess with it.

Remove this limit (so it may potentially loop forever). My reasoning is that (a) if the user wants to keep adding things back into the hash, who are we to stop her? and (b) as part of the process of making sv_clear() non-recursive when freeing hashes, I'm trying to reduce the amount of state that must be maintained between each iteration.

Note that arrays currently don't have such a limit.
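For illustration only (not part of the patch), here is a minimal Perl sketch of the kind of "naughty destructor" described above: clearing %h frees a value whose DESTROY re-inserts a fresh entry into the same hash. The Naughty package and the $refills counter are invented for this example; the counter caps the re-insertions so the clear eventually finishes, since without it the clearing loop would now simply keep going where the old code panicked after 100 sweeps.

    use strict;
    use warnings;

    package Naughty;
    my $refills = 0;
    sub new     { my ($class, $h) = @_; bless { h => $h }, $class }
    sub DESTROY {
        my $self = shift;
        # Put a brand-new entry back into the hash that is being cleared,
        # but only a limited number of times so this example terminates.
        $self->{h}{ "again" . $refills } = Naughty->new($self->{h})
            if $refills++ < 1000;
    }

    package main;
    my %h;
    $h{start} = Naughty->new(\%h);
    %h = ();    # each freed value's DESTROY adds another entry to clear
    print scalar(keys %h), "\n";    # 0 once the refills stop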
Diffstat (limited to 'hv.c')
-rw-r--r--   hv.c   7
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/hv.c b/hv.c
index e78f84f132..8b186de57d 100644
--- a/hv.c
+++ b/hv.c
@@ -1630,7 +1630,6 @@ S_clear_placeholders(pTHX_ HV *hv, U32 items)
 STATIC void
 S_hfreeentries(pTHX_ HV *hv)
 {
-    int attempts = 100;
     STRLEN i = 0;
     const bool mpm = PL_phase != PERL_PHASE_DESTRUCT && HvENAME(hv);
 
@@ -1689,12 +1688,8 @@ S_hfreeentries(pTHX_ HV *hv)
              * re-allocated, HvMAX changed etc */
             continue;
         }
-        if (i++ >= HvMAX(hv)) {
+        if (i++ >= HvMAX(hv))
             i = 0;
-            if (--attempts == 0) {
-                Perl_die(aTHX_ "panic: hfreeentries failed to free hash - something is repeatedly re-creating entries");
-            }
-        }
     } /* while */
 }