| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
| |
I suspect this leak also applies to any large character classes.
An HV created with newHV has a reference count of 1, so doing
newRV_inc on it will cause a leak.
|
|
|
|
|
|
|
|
| |
Under some circumstances it could cause a hash to point to a freed
element. But the hash itself was leaking, so it caused no problems,
as no attempt was made to free its element again.
The next commit will stop the hash from leaking.
|
|
|
|
| |
It is only called from one spot.
|
|
|
|
|
|
| |
Before croaking, we need to free any SVs we might have allocated
temporarily. Also, Simple_vFAIL does not free the regular expression.
For that we need vFAIL.
|
| |
|
|
|
|
|
| |
In the next commit, it will need to access other variables around its
call site.
|
|
|
|
|
| |
Note: this failed to build in smoke-me, e.g.
http://perl.develop-help.com/raw/?id=131750
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
If we have just created an SV, it has a reference count of 1, so using
newRV_inc on it will create a leak. So we need to use newRV_noinc and
do SvREFCNT_inc in those cases where the SV is not new.
This has leaked since v5.17.3-117-g87367d5.
|
|
|
|
| |
They have leaked since v5.17.0-54-g1b08e05.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
local $_[0] puts the current $_[0] on to the savestack and gives the
array a brand new SV. If the array is not marked REAL, it holds no
reference counts on its elements. @_ is surreal by default.
The localisation code was making @_ hold a reference count on its new
element. The restore code was assuming it had a reference count, so
everything worked out if $_[0] was not modified after localisation.
But if the array is surreal, then modifications to it will assume that
it does *not* hold a reference count on $_[0]. So doing shift, or
@_=undef would cause the new element to leak.
Also, taking a reference to the array (\@_) will trigger reification,
making the reference count of all elements increase, likewise leaking
the new element.
Since there is only one REAL flag, which indicates that all elements
of the array are reference-counted, we cannot have some elements
reference-counted and some not (which local $_[0] does) and have
everything behave correctly.
So the only solution is to reify arrays before localising
their elements.
|
| |
|
|
|
|
|
| |
Otherwise *_{ARRAY} returned from a sub will be kenotic, in the
literal sense. (If you don’t understand that, just look at the test.)
|
|
|
|
|
|
| |
These have been supported since *foo{THING} was added in perl 5.005.
If only I had known about these sooner.... I could have been writing
*$AUTOLOAD{NAME} all this time!
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reduce liveness of various variables. srand relies on nothing but a THX,
so do that first and don't calculate a local SP yet. Put the EXTEND near
the SP calculation, since it might refresh SP. MAXARG accesses PL_op. The
POPs balances the stack. Var sv is kept in a volatile; it is out of scope
after the SvNV. Through the SvNV, only 2 autos/regs are carried: my_perl
and SP. dTARGET again uses PL_op, but it and PUSHs and PUTBACK make no
calls. SP is out of scope after the PUTBACK. The next call is in macro
Drand01. XPUSHn uses SvSETMAGIC; this is replaced with sv_setnv_mg since
rand is not hot. Now TARG is out of scope; only my_perl remains in scope.
NORMAL uses PL_op for the 3rd and final time. All x86-32 C stack usage was
for saving nonvolatile regs and 2 auto NVs (asm limitations prevent
keeping them in a normal reg, I guess) (VC2003). Caching PL_op->op_next at
dTARGET to avoid a 3rd PL_op dereference was tried with
SV * targ; (op_next = NORMAL), (GETTARGET), (PUSHs(TARG)), (PUTBACK);
to allow reordering, but it generated a stack auto on Visual C even
though 1 nonvolatile reg (edi) was available. Another untried idea would
be to declare an OP * and set it to PL_op, then do the op_next and op_targ
derefs, but with the above mess, PL_op was derefed once by compiler
optimization yet the stack var was still used. Maintainability also
contributed to scrapping the idea. The 1.0 assigns are reordered by the
compiler so that all of them happen after the SvNV but before the next
call, which is rand(), so they are fine order-of-execution-wise. The
comment about SP or TARG means either dTARGET is done before the SvNV, so
SP is out of scope before the SvNV but SV * targ has to be saved across
it, or SP is saved across the SvNV. A choice was made to save SP, since
the C code would otherwise be more complicated, probably with gotos and
initializing sv to NULL: dTARGET/PUTBACK would have to happen
unconditionally after the conditional MAXARG/TOPs, but SvNV is
conditionally called. The improvements in this commit are not specific to
x86-32 but apply to all platforms and CPUs.
The function dropped from 0xFF to 0xEB bytes for me.
|
|
|
|
| |
SvIsCOW returns a flag which will turn into 0 if truncated to 8 bits.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I changed it to cache the DESTROY method in SvSTASH(stash), instead
of amagic tables, for the sake of speed. But I made no distinction
between ‘no cache’ and ‘no DESTROY method’. So classes with no
DESTROY method became as slow as perl 5.6.
To solve that, I’m using an adjusted pointer (following the example
of warnings.h) to mean ‘intentionally blank’.
I also fixed two instances of the DESTROY cache not being updated,
introduced by that commit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch does the following:
*) Introduces multiple new hash functions to choose from at build
time. This includes Murmur-32, SDBM, DJB2, SipHash, SuperFast, and
One-at-a-time. Currently this is handled by munging hv.h. Configure
support hopefully to follow.
*) Changes the default hash to Murmur hash which is faster than the
old default One-at-a-time.
*) Rips out the old HvREHASH mechanism and replaces it with a
per-process random hash seed.
*) Changes the old PL_hash_seed from an interpreter value to a
global variable. This means it does not have to be copied during
interpreter setup or cloning.
*) Changes the format of the PERL_HASH_SEED variable to a hex
string so that hash seeds longer than what fits in an integer are possible.
*) Changes the return of Hash::Util::hash_seed() from a number to a
string. This is to accommodate hash functions which have more bits
than can fit in an integer.
*) Adds new functions to Hash::Util to improve introspection of hashes
-) hash_value() - returns an integer hash value for a given string.
-) bucket_info() - returns basic hash bucket utilization info
-) bucket_stats() - returns more hash bucket utilization info
-) bucket_array() - which keys are in which buckets in a hash
More details on the new hash functions can be found below:
Murmur Hash: (v3) from google, see
http://code.google.com/p/smhasher/wiki/MurmurHash3
Superfast Hash: From Paul Hsieh.
http://www.azillionmonkeys.com/qed/hash.html
DJB2: a hash function from Daniel Bernstein
http://www.cse.yorku.ca/~oz/hash.html
SDBM: a hash function sdbm.
http://www.cse.yorku.ca/~oz/hash.html
SipHash: by Jean-Philippe Aumasson and Daniel J. Bernstein.
https://www.131002.net/siphash/
They have all been converted into Perl's ugly macro format.
I have not done any rigorous testing to make sure this conversion
is correct. They seem to function as expected however.
All of them use the random hash seed.
You can force the use of a given function by defining one of
PERL_HASH_FUNC_MURMUR
PERL_HASH_FUNC_SUPERFAST
PERL_HASH_FUNC_DJB2
PERL_HASH_FUNC_SDBM
PERL_HASH_FUNC_ONE_AT_A_TIME
Setting the environment variable PERL_HASH_SEED_DEBUG to 1 will make
perl output the current seed (changed to hex) and the hash function
it has been built with.
Setting the environment variable PERL_HASH_SEED to a hex value will
cause that value to be used as the seed. Any missing bits of the seed
will be set to 0. The bits are filled in from left to right, not
the traditional right to left so setting it to FE results in a seed
value of "FE000000" not "000000FE".
Note that we do the hash seed initialization in perl_construct().
Doing it via perl_alloc() (via init_tls) causes problems under
threaded builds as the buffers used for reentrant srand48 functions
are not allocated. See also the p5p mail "Hash improvements blocker:
portable random code that doesnt depend on a functional interpreter",
Message-ID:
<CANgJU+X+wNayjsNOpKRqYHnEy_+B9UH_2irRA5O3ZmcYGAAZFQ@mail.gmail.com>
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
DESTROY has been cached in overload tables since
perl-5.6.0-2080-g32251b2, making it 4 times faster than before
(overload tables are faster than method lookup).
But it slows down symbol lookup on stashes with overload tables,
because overload tables use magic, and SvRMAGICAL results in calls to
mg_find on every hash lookup.
By reusing SvSTASH(stash) to cache the DESTROY method (if the stash
is unblessed, of course, as most stashes are), we can avoid making
all destroyable stashes magical and also speed up DESTROY lookup
slightly more.
The results:
• 10% increase in stash lookup speed after destructors. That was just
testing $Foo::{x}. Other stash lookups will have other overheads
that make the difference less impressive.
• 5% increase in DESTROY lookup speed. I was using an empty DESTROY
method to test this, so, again, real DESTROY methods will have more
overhead and less speedup.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
OpenVMS releases of the last decade or so have had this capability
but it's still not the default. But it's a reasonable default for
Perl, so enable it in our initialization code.
Unfortunately this feature does not work unless extended parse has
been enabled in the process before invoking Perl. Enabling
extended parse in our initialization code doesn't do any good
because DCL has already parsed the arguments before we get there.
So we will be limited to documenting that things work better with
extended parse and that the test suite assumes it's enabled.
|
| |
|
|
|
|
|
| |
Karl checked and it seems it actually works on EBCDIC as well
as on ASCII.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Sayeth Karl:
In the _cp macros, the final test can be simplified:
/*** GENERATED CODE ***/
#define is_VERTWS_cp(cp) \
( ( 0x0A <= cp && cp <= 0x0D ) || ( 0x0D < cp && \
( 0x85 == cp || ( 0x85 < cp && \
( 0x2028 == cp || ( 0x2028 < cp && \
0x2029 == cp ) ) ) ) ) )
That 0x2028 < cp can be omitted and it will still mean the same thing.
And So Be It.
|
|
|
|
|
|
|
|
| |
When cloning stacks (e.g. for fake fork), the stack is cloned
by copying the stack AV pointed to by PL_curstackinfo; but the
AvFILL on that AV may not be up to date, resulting in the top N
items of the stack not being cloned. Fix by saving PL_stack_sp
back into AvFILL(PL_curstack) before cloning.
|
| |
|
|
|
|
|
|
| |
Fixes a possible crash (manifested when running with the page heap enabled)
when running after clearing PATH, which at least one test in op/taint.t
does.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
without usedl, the warnings are like:
Subroutine DynaLoader::dl_error redefined at (eval 1) line 2
... warnings about every other DynaLoader function
Subroutine DynaLoader::dl_error redefined at (eval 2) line 2
With usedl, only dl_error is defined, so the other warnings disappear.
Since the regexp expected two newlines between the dl_error warnings,
the test failed.
The change makes one of the newlines optional.
|
| |
|
|
|
|
|
|
| |
save_freeop and SAVEFREEOP are never used in expressions, only statements.
Using PL_Xpv is never ideal. For me .text section dropped from 0xC1DFF to
0xC1DBF after applying this.
|
| |
|
|
|
|
|
|
| |
When invoking the debugger recursively, pp_dbstate needs to push a new
pad (like pp_entersub) so that DB::DB doesn’t stomp on the lexical
variables belonging to the outer call.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It’s where the subroutine is defined, not the current package,
that matters.
#!perl -l
sub { my $x = 3; foo(); print $x }->();
sub foo { package DB; eval q"$x = 42" }
__END__
3
#!perl -l
sub { my $x = 3; DB::foo(); print $x }->();
package DB;
sub foo { package main; eval q"$x = 42"; }
__END__
42
|
|
|
|
| |
With three times we still get false positives
|
| |
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Refactor the code in this hot function. Chiefly, the if/else chain
was replaced with a single switch statement, and various bits of code
were tidied up, duplicate code eliminated, local vars added to avoid
repeated evaluation of expressions, etc., along with big whitespace
changes to fix up indentation etc. afterwards.
With these changes, these trivial benchmarks run about 7% faster:
$x++ for @a; # @a has 30_000 elements
$x++ for 1..30_000;
while this one stayed about the same, presumably due to the relatively
costly overhead of sv_inc():
$x++ for 'aaa' .. 'zzz';
|
| |
| |
| |
| |
| |
| |
| |
| | |
reindent the LAZYSV block to be consistent with the other two;
move the comments to be on the same line as the case statements,
and fix the indent on the RETPUSHYES.
Only whitespace/moving comment changes; nothing functional
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| | |
only whitespace changes
|