| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
|
|
|
|
| |
In prehistoric times the first arg to many hash functions was called
tb rather than hv, and this was still reflected in some API notes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These provided a non-public API for the hash and array code to donate free
memory directly to the SV head allocation routines, instead of returning it
to the malloc system with free().
I assume that on some older mallocs this could offer significant benefits.
However, my benchmarking on a modern malloc couldn't detect any significant
effect (positive or negative) from removing the code. Its continued presence
does have downsides:
a: slightly more code complexity
b: a slightly larger interpreter structure
c: in the steady state, if net creation of SVs is zero, one chunk of allocated
   but unused memory will persist (per thread)
So I think it best to remove it.
|
|
|
|
|
|
|
| |
Add hinthash_fetch(sv|pv[ns]) as a replacement for refcounted_he_fetch,
which is not part of the API (and should not be). Also add caller_cx, as the
correct XS equivalent to caller(). Lots of modules seem to have copies of
this, so a proper API function will be more maintainable in the future.
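For illustration, a minimal XS-side sketch of the sort of context-walking code
that caller_cx() replaces; my_caller_file() is a hypothetical name, and this
assumes the caller_cx() signature as documented in perlapi:

    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"

    /* Hypothetical helper: report the file of the Perl-level caller,
     * using the public caller_cx() API instead of walking cxstack by hand. */
    static const char *
    my_caller_file(pTHX)
    {
        /* level 0 is the immediately surrounding Perl code, as with caller(0) */
        const PERL_CONTEXT *cx = caller_cx(0, NULL);
        return cx ? CopFILE(cx->blk_oldcop) : NULL;
    }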
|
|
|
|
|
|
|
|
|
|
| |
Based on a suggestion from Ben Morrow.
The first argument used to be struct refcounted_he *, which exposed an
implementation detail - that the COP's labels are (now) stored in this way.
Google Code Search and an unpacked CPAN both fail to find any users of this
API, so the impact should be minimal.
|
|
|
|
|
|
|
| |
Instead pass in a COP, as suggested by Ben Morrow. Also add length and flags
parameters, and remove the comment suggesting this change. The underlying
storage mechanism can honour length and UTF8/not, so there is no harm in
exposing this one level higher.
|
|
|
|
|
|
|
| |
Convert get_arena() to be static, as its only user is now Perl_more_bodies().
Perl_get_arena() was not in the public API, and neither Google Code Search
nor an unpacked CPAN shows anything to be using it.
|
|
|
|
|
| |
Fix up the comments in and above some functions to clarify that backref
magic for HVs may sometimes be moved back to HvAUX.
|
|
|
|
|
|
|
|
|
|
|
| |
Rather than creating an AV and pushing the backref onto it,
store a single backref directly in the mg_obj or xhv_backreferences
slot.
If the backref is an AV, then we skip this optimisation (although I don't
think, at the moment, that an AV would ever be pointed to by some backref
magic). So the test of whether the optimisation is in effect is whether
the thing in the slot is an AV or not.
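A minimal sketch of the shape of that slot handling (illustrative only, not the
literal core code; svp is assumed to point at the mg_obj or xhv_backreferences
slot, and sv is the backref being recorded):

    if (!*svp) {
        *svp = sv;                       /* first backref: store it directly */
    }
    else if (SvTYPE(*svp) != SVt_PVAV) {
        AV *const av = newAV();          /* second backref: upgrade to an AV */
        av_push(av, *svp);
        av_push(av, sv);
        *svp = (SV *)av;
    }
    else {
        av_push((AV *)*svp, sv);         /* already an AV: just append */
    }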
|
|
|
|
| |
(so that I don't get so confused when I revisit this code in 5 years' time)
|
|
|
|
|
|
| |
re-apply some of the small doc fixes and a couple of minor code tweaks that
were part of the reverted commit 044d8c24fa9214cf0fe9c6fc8a44e03f3f5374d7,
but which didn't need reverting
|
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit 044d8c24fa9214cf0fe9c6fc8a44e03f3f5374d7.
Conflicts:
hv.c
That commit tried to simplify the xhv_backreferences processing, but
was totally wrong and broke ordinary weak refs to hashes (see #76716).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Each CV usually has a pointer, CvGV(cv), back to the GV that corresponds
to the CV's name (or to *foo::__ANON__ for anon CVs). This pointer wasn't
reference counted, to avoid loops, which could leave it dangling if the GV
is deleted.
We fix this by:
For named subs, adding backref magic to the GV, so that when the GV is
freed, it can trigger processing of the CV's CvGV field. This processing
consists of: if it looks like the freeing of the GV is about to trigger
freeing of the CV too, set CvGV to NULL; otherwise make it point to
*foo::__ANON__ (and set CvANON(cv)).
For anon subs, making CvGV a strong reference, i.e. incrementing the refcnt
of *foo::__ANON__. This doesn't cause a loop, since in this case the
__ANON__ glob doesn't point to the CV. This also avoids dangling pointers
if someone does an explicit 'delete $foo::{__ANON__}'.
Note that there was already some partial protection for CvGV with
commit f1c32fec87699aee2eeb638f44135f21217d2127. This worked by
anonymising any corresponding CV when freeing a stash or stash entry.
That had two drawbacks. First, it didn't fix CVs that were anonymous or that
weren't currently pointed to by the GV (e.g. after local *foo), and
second, it caused *all* CVs to get anonymised during cleanup, even the
ones that would have been deleted shortly afterwards anyway. This commit
effectively removes that former commit, while reusing a bit of the
actual anonymising code.
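For the anon-sub half, the strong reference amounts to something like this
sketch (illustrative, not the literal core code):

    /* anon CV: keep *foo::__ANON__ alive by owning a real reference to it */
    if (CvANON(cv) && CvGV(cv))
        SvREFCNT_inc_simple_void_NN((SV *)CvGV(cv));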
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Each CV usually has a pointer, CvSTASH, back to the stash that it was
compiled in. This pointer isn't reference counted, to avoid loops, which
can leave it dangling if the stash is deleted.
There is already protection for the similar GvSTASH field in GVs: the
stash has an array of backrefs, xhv_backreferences, pointing to the GVs
whose GvSTASHes point to it, and which is used to zero all the GvSTASH
fields should the stash be deleted.
All this patch does is add CVs with a CvSTASH to that stash's backref
list too.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When deleting a stash, make the algorithm:
    GvSTASH($_) = NULL for (@xhv_backreferences);
    delete xhv_backreferences;
    free each stash entry;
Previously the algorithm was:
    hide xhv_backreferences as ordinary backref magic;
    free each stash entry:
        this may trigger a sv_del_backref() for each GV being freed;
    delete @xhv_backreferences;
The new method is:
* more efficient: one scan through @xhv_backreferences rather than lots of
  calls to sv_del_backref(), removing elements one by one;
* simpler: the 'hide xhv_backreferences as backref magic' hack no longer
  needs to be done;
* removes a bug whereby GVs that had a refcnt > 1 (the usual case) were
  left with a GvSTASH pointing to the freed stash; it's now NULL instead. I
  couldn't think of a test for this.
There is one drawback:
* If the GV gets freed at the same time as the stash, the freeing code
  sees the GV with a GvSTASH of NULL rather than still pointing to the
  stash. As far as I can see, the only difference this currently makes is
  that mro_method_changed_in() is no longer called by sv_clear(), but since
  we're blowing away the whole stash anyway, method resolution doesn't
  really bother us any more.
At some point in the future I might set GvSTASH to %__ANON__ rather than
NULL.
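Roughly, the new first step is shaped like this sketch (illustrative only;
av stands for the xhv_backreferences AV):

    /* clear GvSTASH on every backref before freeing the stash entries */
    if (av) {
        SSize_t i;
        for (i = 0; i <= AvFILLp(av); i++) {
            SV *const sv = AvARRAY(av)[i];
            if (sv && isGV(sv))
                GvSTASH((GV *)sv) = NULL;
        }
    }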
|
|
|
|
|
|
|
|
|
|
|
| |
Change from for() to do ... while() loops. Move variable initialisation to
variable declaration. Avoid needing to use the comma operator to allow
multiple statements in for(). Avoid using a continue statement where it
isn't actually needed to change flow control. Avoid relying on the optimiser
to know that the for loop conditional doesn't need testing on the first pass.
gcc's current optimiser produces identical code despite these changes.
However, for the reasons given, I consider the code to be much clearer.
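The shape of the change on a typical hash-chain walk is roughly as follows
(illustrative; oentry is assumed to point at a bucket chain already known to
be non-empty):

    /* before: for() form, relying on the optimiser to drop the first test */
    {
        HE *entry;
        for (entry = *oentry; entry; entry = HeNEXT(entry)) {
            /* ... process entry ... */
        }
    }

    /* after: do/while form, with initialisation at the declaration and the
     * "chain is known non-empty here" assumption made explicit */
    {
        HE *entry = *oentry;
        do {
            /* ... process entry ... */
        } while ((entry = HeNEXT(entry)));
    }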
|
| |
|
|
|
|
|
| |
Since de0a224a057997a65d38856f1981702fca5d7c18, xhv_keys and xhv_max
are the same type, so no casting is needed
|
| |
|
|
|
|
|
|
| |
The assumption is that most chains of a hash are in use.
Suggestion and initial patch by Ruslan Zakirov.
|
|
|
|
|
| |
Add a function Perl_hv_fill to perform the count. This will save 1 IV per hash,
and on some systems cause struct xpvhv to become cache-aligned.
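A minimal sketch of the count being described (illustrative; the real entry
points are Perl_hv_fill()/HvFILL(), and count_filled_buckets() is a
hypothetical name):

    STRLEN
    count_filled_buckets(HV *const hv)
    {
        STRLEN fill = 0;
        HE **const ents = HvARRAY(hv);
        if (ents) {
            STRLEN i;
            for (i = 0; i <= HvMAX(hv); i++) {
                if (ents[i])
                    fill++;     /* this chain has at least one entry */
            }
        }
        return fill;
    }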
|
| |
|
|
|
|
|
|
| |
Fix location identified by Father Chrysostomos, who also offered a patch, but
this patch is more efficient, as it avoids any allocation. Test code based on
his test example.
|
|
|
|
|
|
| |
Commits c3acb9e0760135dfd888c0ee1b415777d784aabc, 867fa1e2da145229b4db2c6e8d5b51700c15f114
and f0e67a1d29102aa9905aecf2b0f98449697d5af3 added or changed functions that now require a
dVAR declaration to compile with -DPERL_GLOBAL_STRUCT.
|
| |
|
|
|
|
|
| |
Replace ckWARN_d{,2,3,4}() && Perl_warner() with it, which trades reduced code
size for one more function call when warnings are not enabled.
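At a call site the before/after looks roughly like this (assuming the new
helper is the ck_warner_d()-style function documented in perlapi):

    /* before: the "is this warning enabled?" test is inlined at each site */
    if (ckWARN_d(WARN_SYNTAX))
        Perl_warner(aTHX_ packWARN(WARN_SYNTAX), "something dubious");

    /* after: a single call; the enabled test happens inside the helper */
    Perl_ck_warner_d(aTHX_ packWARN(WARN_SYNTAX), "something dubious");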
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently it calls newSVsv() always, which copies the value, but the immortal
SVs are used as much for their addresses as their values. You can't get the
immortals into HVs from Perl-space, except for PL_sv_placeholder, and any hash
with those will take the else block, where the call to Perl_hv_iternext_flags()
won't be returning placeholders anyway. Hence if XS code has gone to the
trouble to get the "impossible" in there, it had a reason for it.
I am assuming that Perl_hv_copy_hints_hv() should stay as-is, as it is
documented that only strings and integers are supported values for %^H.
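The change being described amounts to something like this sketch (illustrative
only; val is the value SV being copied into the new hash):

    SV *new_val;
    if (val == &PL_sv_undef || val == &PL_sv_yes || val == &PL_sv_no
        || val == &PL_sv_placeholder)
        new_val = SvREFCNT_inc(val);   /* keep the immortal's identity (address) */
    else
        new_val = newSVsv(val);        /* ordinary values are still copied */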
|
|
|
|
|
| |
This will cope properly with Unicode package names. It also allows use of more
efficient perl API calls, avoiding any strlen()s.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
(instead of marking the SV as mortal.)
|
| |
|
| |
|
|
|
|
| |
Change its callers to take advantage of this.
|
|
|
|
|
|
| |
key is in canonical form - any key passed encoded in UTF-8 cannot be represented
as bytes, hence the downgrade check can be skipped. Use this internally for
shared hash key scalars, as they are always canonical.
|
|
|
|
|
| |
in UTF-8, would result in storing the wrong hash value in the hash, and hence
failing lookups. I guess not that much XS code precomputes hash values.
|
|
|
|
| |
AV * to HV *.
|
|
|
|
| |
from AV * to SV *.
|
|
|
|
|
| |
are dealing with is data for the current MRO. Instead the direct pointer "owns"
the (reference to the) data, with the hash pointer left as NULL to signal this.
|
|
|
|
|
|
|
| |
method resolution orders.
mro_linear_dfs becomes a hash holding the different MROs' private data.
mro_linear_c3 becomes a shortcut pointer to the current MRO's private data.
|
|
|
|
|
|
| |
Message-ID: <25940.1225611819@chthon>
Date: Sun, 02 Nov 2008 01:43:39 -0600
p4raw-id: //depot/perl@34698
|
|
|
|
|
| |
erroneous const in dump.c.
p4raw-id: //depot/perl@34675
|
|
|
| |
p4raw-id: //depot/perl@34629
|
|
|
| |
p4raw-id: //depot/perl@34618
|
|
|
| |
p4raw-id: //depot/perl@34585
|
|
|
| |
p4raw-id: //depot/perl@34383
|
|
|
|
|
|
|
|
|
|
|
| |
de-duping hash used by S_mro_get_linear_isa_dfs(). Provide a new
function Perl_get_isa_hash() to lazily retrieve this. (Which could
actually be static if S_isa_lookup() and Perl_sv_derived_from()
moved into mro.c.) Make S_isa_lookup() use this lookup hash in place
of a linear walk of the linear isa. This should turn isa lookups from
O(n) to O(1), which should make heavy users of ->isa() faster.
(e.g. PPI, and hence Perl Critic).
p4raw-id: //depot/perl@34354
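For illustration, the O(1) lookup is essentially a hash-key existence test
rather than a walk of the linearised ISA array; a hedged sketch, with isa_hash
standing for the de-duping hash described above and isa_via_hash() a
hypothetical name:

    /* hypothetical helper: is klass present in the (de-duped) linearised ISA? */
    static bool
    isa_via_hash(pTHX_ HV *isa_hash, const char *klass)
    {
        return hv_exists(isa_hash, klass, (I32)strlen(klass)) ? TRUE : FALSE;
    }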
|