path: root/pp_hot.c
Commit message | Author | Age | Files | Lines
* pp_hot.c: Silence some MS VC warnings (Karl Williamson, 2019-04-12; 1 file, -2/+4)

  These are bogus warnings.
* Avoid leak in multiconcat with overloading. (David Mitchell, 2019-02-05; 1 file, -4/+9)

  RT #133789

  In the path taken through pp_multiconcat() when one or more args have
  side-effects such as tieing or overloading, multiconcat has to decide
  whether to just return the result of all the concatting as-is, or to
  first assign it to an expression or variable if the op includes an
  implicit assign (such as $lex = x.y.z or $a[0] = x.y.z).

  The code was getting this right for those two cases, and was also
  getting it right for the append cases ($lex .= x.y.z and
  $a[0] .= x.y.z), which don't need assigns. But for the bare case
  (x.y.z) it was assigning to the op's targ as well as returning the
  value, hence leaking a reference until destruction of the sub and its
  pad. This commit stops the assign in that last case.
* Eliminate AMGf_set flag (David Mitchell, 2019-02-05; 1 file, -1/+1)

  I added this flag a few years ago when I revamped the overload macros
  tryAMAGICbin() etc. It allowed two different classes of macros to
  share the same functions (Perl_try_amagic_un/Perl_try_amagic_bin) by
  indicating what type of action is required. However, the last few
  commits have made those two functions able to robustly determine for
  themselves whether it's an assign-type action ($x op= $y or
  $lex = $x op $y) or a plain set-result-on-stack operation ($x op $y).
  So eliminate this flag.

  Note that this makes the ops which have the AMGf_set flag hard-coded
  infinitesimally slower, since Perl_try_amagic_bin no longer skips the
  checks for assign-ness. But compared with the overhead of having
  already called the overload method, this is trivial. On the plus
  side, it makes the code smaller and easier to understand.
* Eliminate SvPADMY tests from overload code (David Mitchell, 2019-02-05; 1 file, -3/+3)

  A couple of places in the overload code do SvPADMY(TARG) to decide
  whether this is a normal op like ($x op $y), where the targ will have
  SVs_PADTMP set, or a lexical assignment like $lex = ($x op $y) where
  the assign has been optimised away and the op is expected to directly
  assign to the targ, which it thinks is a PADTMP but is really $lex.

  Since the SVs_PADMY flag was eliminated a while ago, SvPADMY() is
  just defined as !(SvFLAGS(sv) & SVs_PADTMP). Thus the overload code
  is relying on the absence of a PADTMP flag in the target to deduce
  that the OPpTARGET_MY optimisation is in effect. This seems to work
  (at least for the code in the test suite), but can't be regarded as
  robust.

  This commit removes each SvPADMY() test and replaces it with the twin

      if (   (PL_opargs[PL_op->op_type] & OA_TARGLEX)
          && (PL_op->op_private & OPpTARGET_MY))

  tests.
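  In code terms, the change looks roughly like this (a sketch; the
  surrounding pp-function context is elided):

      /* before: infer the optimisation from the absence of PADTMP */
      if (SvPADMY(TARG)) {
          /* assign result directly to the lexical */
      }

      /* after: test for the OPpTARGET_MY optimisation explicitly */
      if (   (PL_opargs[PL_op->op_type] & OA_TARGLEX)
          && (PL_op->op_private & OPpTARGET_MY)) {
          /* assign result directly to the lexical */
      }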
* PERL_OP_PARENT is always defined, stop testing for it (Tony Cook, 2019-01-25; 1 file, -2/+0)

  PERL_OP_PARENT is the new reality; leaving the pre-processor checks
  in place is more confusing than anything else. I left the test in
  perl.c for consistency with the other checks in that code.
* optimize IV -> UV conversions (Tomasz Konojacki, 2018-11-21; 1 file, -2/+2)

  This commit replaces all instances of code that looks like this:

      uv = (iv == IV_MIN) ? (UV)iv : (UV)(-iv)

  with the simpler and more optimal:

      uv = -(UV)iv

  While -iv indeed results in undefined behaviour when iv == IV_MIN,
  -(UV)iv is perfectly well defined and does the right thing. The C
  standard guarantees that the result of (UV)iv (for negative iv) is
  equal to iv + UV_MAX + 1 (see 6.3.1.3, paragraph 2 in C11). It also
  guarantees that the result of -uv is UV_MAX - uv + 1 (6.2.5,
  paragraph 9). That means that the result of -(UV)iv is

      UV_MAX - (iv + UV_MAX + 1) + 1

  which is equal to -iv for *all* possible negative values of iv.

  [perl #133677]
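  The equivalence is easy to demonstrate in a self-contained C sketch
  (the IV/UV typedefs here are stand-ins for Perl's actual ones):

      #include <assert.h>
      #include <stdint.h>

      typedef int64_t  IV;   /* stand-in for Perl's IV */
      typedef uint64_t UV;   /* stand-in for Perl's UV */

      int main(void) {
          for (IV iv = INT64_MIN; iv < 0; iv /= 3) {
              /* old form: special-cases IV_MIN because -iv overflows */
              UV old_uv = (iv == INT64_MIN) ? (UV)iv : (UV)(-iv);
              /* new form: convert first, then negate; always defined */
              UV new_uv = -(UV)iv;
              assert(old_uv == new_uv);
          }
          return 0;
      }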
* fix 'for reverse @array' bug on AIX (David Mitchell, 2018-10-17; 1 file, -2/+2)

  RT #133558

  Due to what appears to be a compiler bug on AIX (or perhaps it's
  undefined behaviour which happens to work on other platforms), this
  line of code in pp_iter():

      inc = 1 - (PL_op->op_private & OPpITER_REVERSED);

  was setting inc to 4294967295 rather than to the expected -1 (inc was
  a 64-bit signed long). Fix it with a couple of judicious (IV) casts
  (which ought to be a NOOP).
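  A sketch of the kind of cast fix described (the exact expression in
  the real commit may differ slightly):

      /* before: the & yields an unsigned value on this compiler, so
         the subtraction is done unsigned and wraps to 4294967295 */
      inc = 1 - (PL_op->op_private & OPpITER_REVERSED);

      /* after: force signed (IV) arithmetic throughout */
      inc = (IV)1 - (IV)(PL_op->op_private & OPpITER_REVERSED);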
* RT#133131: pp_hot.c: deoptimise pp_iter() when non-standard OP_AND op_ppaddr (Aaron Crane, 2018-04-21; 1 file, -7/+23)

  Commit 7c114860c0fa8ade5e00a4b609d2fbd11d5a494c introduced an
  optimisation in pp_iter(). Before the optimisation, pp_iter() pushed
  either &PL_sv_yes or &PL_sv_no onto the stack, and returned the
  op_next in the obvious way. The optimisation takes advantage of the
  fact that the op_next of an OP_ITER always points to an OP_AND node,
  so pp_iter() now directly jumps to either the op_next or the op_other
  of the OP_AND as appropriate.

  The commit message for the optimisation also says this:

      It's possible that some weird optree-munging XS module may break
      this assumption. For now I've just added asserts that the next op
      is OP_AND with an op_ppaddr of Perl_pp_and; if that assertion
      fails, it may be necessary to convert pp_iter()'s asserts into
      conditional statements.

  However, Devel::Cover does change the op_ppaddr of the ops it can
  see, so the assertions on op_ppaddr were being tripped when
  Devel::Cover was run under a -DDEBUGGING Perl. But even if the
  asserts didn't trip, skipping the OP_AND nodes would prevent
  Devel::Cover from determining branch coverage in the way that it
  wants.

  This commit converts the asserts into conditional statements, as
  outlined in the commit message above, and undoes the optimisation
  when the op_ppaddr doesn't match.
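  A simplified sketch of the resulting logic in pp_iter() (illustrative
  rather than the literal code):

      OP * const and_op = PL_op->op_next;

      if (   and_op->op_type   == OP_AND
          && and_op->op_ppaddr == Perl_pp_and) {
          /* fast path: bypass the OP_AND, jumping to its branches */
          return found ? cLOGOPx(and_op)->op_other : and_op->op_next;
      }

      /* deoptimised path (e.g. Devel::Cover replaced op_ppaddr):
         push a boolean and let the OP_AND op run normally */
      PUSHs(found ? &PL_sv_yes : &PL_sv_no);
      return and_op;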
* rmv/de-dup static const char array "strings" (Daniel Dragan, 2018-03-07; 1 file, -3/+1)

  MSVC, due to a bug, doesn't merge identicals between .o'es or discard
  these vars and their contents.

  MEM_WRAP_CHECK_2 has never been used outside of core according to a
  cpan grep. MEM_WRAP_CHECK_2 was removed on the "have PERL_MALLOC_WRAP"
  branch in commit fabdb6c0879 "pre-likely cleanup" without explanation,
  probably because it was unused. But MEM_WRAP_CHECK_2 was still left on
  the "no PERL_MALLOC_WRAP" branch, so remove it from the "no" side for
  tidyness, since it was a mistake to leave it there if it was removed
  from the "yes" side of the #ifdef.

  Add a MEM_WRAP_CHECK_s API; the letter "s" means the argument is a
  string or static. This lets us get rid of the "%s" argument passed to
  Perl_croak_nocontext at a couple of call sites, since we fully control
  the next and only argument and it's guaranteed to be a string literal.
  This allows the linker to merge the two "Out of memory during array
  extend" C strings. Also change the 2 op.h messages into macros which
  become string literals at their call sites, instead of the "read
  char * from a global char **" which was going on before.

  VC 2003 32b perl527.dll section sizes:

      before:  .text  name DE503 virtual size
               .rdata name 4B621 virtual size
      after:   .text  name DE503 virtual size
               .rdata name 4B5D1 virtual size
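  A simplified sketch of what the "_s" variant buys; these are
  hypothetical macro bodies, not the real perl.h definitions:

      /* generic form: a "%s" format plus a message pointer, i.e. an
         extra indirection that MSVC won't merge across objects */
      #define MEM_WRAP_CHECK(n,t) \
          (void)(UNLIKELY((n) > MEM_SIZE_MAX/sizeof(t)) \
              && (Perl_croak_nocontext("%s", PL_memory_wrap), 0))

      /* "_s" form: the caller passes a string literal directly, so
         the croak gets one mergeable literal and no "%s" */
      #define MEM_WRAP_CHECK_s(n,t,a) \
          (void)(UNLIKELY((n) > MEM_SIZE_MAX/sizeof(t)) \
              && (Perl_croak_nocontext("" a ""), 0))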
* pp_multiconcat: correctly honour stringify (David Mitchell, 2018-02-19; 1 file, -8/+44)

  RT #132793, RT #132801

  In something like $x .= "$overloaded", the $overloaded stringify
  method wasn't being called. However, it turns out that the existing
  (pre-multiconcat) behaviour is also buggy and inconsistent. That
  behaviour has been restored as-is. At some future time, these bugs
  might be addressed. Here are some comments from the new tests added
  to overload.t:

      Since 5.000, any OP_STRINGIFY immediately following an OP_CONCAT
      is optimised away, on the assumption that since concat will
      always return a valid string anyway, it doesn't need
      stringifying. So in "$x", the stringify is needed, but on "$x$y"
      it isn't. This assumption is flawed once overloading has been
      introduced, since concat might return an overloaded object which
      still needs stringifying. However, this flawed behaviour is
      apparently needed by at least one module, and is tested for in
      opbasic/concat.t: see RT #124160.

      There is also a wart with the OPpTARGET_MY optimisation:
      specifically, in $lex = "...", if $lex is a lexical var, then a
      chain of 2 or more concats *doesn't* optimise away OP_STRINGIFY:

          $lex = "$x";         # stringifies
          $lex = "$x$y";       # doesn't stringify
          $lex = "$x$y$z...";  # stringifies
* pp_multiconcat: eliminate/rename dsv/dsv_pv vars (David Mitchell, 2018-02-19; 1 file, -40/+33)

  After the previous commit, dsv is always the same as targ, so replace
  all uses of 'dsv' with 'targ', and rename the 'dsv_pv' var to
  'targ_pv'. Also rename a limited-scope var from 'targ_pv' to
  'targ_buf', as that name is now being used for dsv_pv.

  Should be no functional changes.
* redo magic/overload handling in pp_multiconcat (David Mitchell, 2018-02-19; 1 file, -285/+202)

  The way pp_multiconcat handles things like tieing and overloading
  doesn't work very well at the moment. There's a lot of code to handle
  edge cases, and there are still open bugs.

  The basic algorithm in pp_multiconcat is to first stringify (i.e.
  call SvPV() on) *all* args, then use the obtained values to calculate
  the total length and utf8ness required, then do a single SvGROW and
  copy all the bytes from all the args. This ordering is wrong when
  variables with visible side effects, such as tie/overload, are
  encountered. The current approach is to stringify args up until such
  an arg is encountered, concat all args up until that one together via
  the normal fast route, then jump to a special block of code which
  concats any remaining args one by one the "hard" way, handling
  overload etc.

  This is problematic because we sometimes need to go back in time. For
  example in ($undef . $overloaded), we're supposed to call
  $overloaded->concat($undef, reverse=1) so to speak, but by the time
  of the method call, we've already tried to stringify $undef and
  emitted a spurious 'uninit var' warning.

  The new approach taken in this commit is to:

  1) Bail out of the stringify loop under a greater range of
     problematical variable classes - namely we stop when encountering
     *anything* which might cause external effects, so in addition to
     tied and overloaded vars, we now stop for any sort of get magic,
     or any undefined value where warnings are in scope.

  2) If we bail out, we throw away any stringification results so far,
     and concatenate *all* args the slow way, even ones we've already
     stringified. This solves the "going back in time" problem
     mentioned above. It's safe because the only vars that get
     processed twice are ones for which the first stringification could
     have no side effects.

  The slow concat loop now uses S_do_concat(), a new static inline
  function which implements the main body of pp_concat() - so they
  share identical code.

  An intentional side-effect of this commit is to fix three tickets:
  RT #132783, RT #132827 and RT #132595, so tests for them are included
  in this commit.

  One effect of this commit is that string concatenation of magic or
  undefined vars will now be slower than before, e.g.

      "pid=$$"
      "value=$undef"

  but they will probably still be faster than before pp_multiconcat
  was introduced.
* move body of pp_concat() to S_do_concat() (David Mitchell, 2018-02-19; 1 file, -6/+21)

  Create an inline static function which implements the body of
  pp_concat(), then replace pp_concat()'s body with a call to it.
  Shortly, we'll use this function in pp_multiconcat() too.
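  A sketch of the shape of this refactor (the parameter list is
  illustrative; the real S_do_concat() signature may differ):

      /* shared body: concatenate left and right into targ */
      PERL_STATIC_INLINE void
      S_do_concat(pTHX_ SV *left, SV *right, SV *targ)
      {
          /* ... the code formerly inline in pp_concat() ... */
      }

      PP(pp_concat)
      {
          dSP; dTARGET;
          SV * const right = POPs;
          SV * const left  = TOPs;
          S_do_concat(aTHX_ left, right, TARG);
          SETs(TARG);
          RETURN;
      }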
* Fix ary shifting when sparse ary is passed to sub (Father Chrysostomos, 2018-02-18; 1 file, -13/+25)

  This commit fixes #132729 in the specific case where a nonexistent
  element within a sparse array is passed to a subroutine. Prior to
  this commit,

      some_sub($sparse_array[$n])

  where $n <= $#sparse_array and the element does not exist, would
  exhibit erroneous behaviour if some_sub shifted or unshifted the
  original @sparse_array. Any ‘holes’ (nonexistent elements) in the
  array would show up in @_ as deferred element (defelem) scalars,
  magic scalars that remember their index in the array. This index is
  not updated and gets out of sync when the array is shifted.

  This commit fixes the bug for elements within the array by using the
  new ‘nonelem’ magic introduced a few commits ago. It stores within
  the array a magic scalar that is marked as being nonexistent. It also
  reduces the number of scalars that need to be created if such a sub
  call happens repeatedly.
* Fix two bugs when calling &xsub when @_ has holes (Father Chrysostomos, 2018-02-18; 1 file, -1/+1)

  This fixes #132729 in the particular instance where an XSUB is called
  via ampersand syntax when @_ has ‘holes’, or nonexistent elements, as
  in:

      @_ = ();
      $_[1] = 1;
      &xsub;

  This means that if the XSUB or something it calls unshifts @_, the
  first argument passed to the XSUB will now refer to $_[1], not $_[0];
  i.e., as of this commit it is correctly shifted over. Previously, a
  ‘defelem’ was used, which is a magical scalar that remembers its
  index in the array, independent of whether the array was shifted.

  In addition, the old code failed to mortalize the defelem, so this
  commit fixes a memory leak with the new ‘non-elem’ mechanism (a
  specially-marked element stored in the array itself).
* ‘Nonelems’ for pushing sparse array on the stack (Father Chrysostomos, 2018-02-18; 1 file, -2/+2)

  To avoid having to create deferred elements every time a sparse array
  is pushed on to the stack, store a magic scalar in the array itself,
  which av_exists and refto recognise as not existing. This means there
  is only a one-time cost for putting such arrays on the stack.

  It also means that deferred elements that live long enough don’t
  start pointing to the wrong array entry if the array gets shifted (or
  unshifted/spliced) in the meantime. Instead, the scalar is already in
  the array, so it cannot lose its place.

  This fix only applies when the array as a whole is pushed on to the
  stack, but it could be extended in future commits to apply to other
  places where we currently use deferred elements.
* Follow-up to fd77b29b3be4 (Father Chrysostomos, 2018-01-21; 1 file, -2/+0)

  As Zefram pointed out, I left in a piece of code that caused one
  branch to continue to behave as before. The change was ineffective,
  and the tests happened to be written in such a way as to take the
  other branch.
* Don’t vivify elems when putting array on stack (Father Chrysostomos, 2018-01-19; 1 file, -3/+11)

  6661956a2 was a little too powerful, and, in addition to fixing the
  bug that @_ did not properly alias nonexistent elements, also broke
  other uses of nonexistent array elements. (See the tests added.)

  This commit changes it so that putting @a on the stack does not
  vivify all ‘holes’ in @a, but creates defelem (deferred element)
  scalars instead, and only in lvalue context.
* vivify array elements when putting them on stack (Zefram, 2018-01-16; 1 file, -3/+5)

  When the elements of an array are put on the stack by a padav, rv2av,
  or padrange op, null pointers in the AvARRAY were being pushed as
  &PL_sv_undef, which was OK in rvalue contexts but caused misbehaviour
  in lvalue contexts. Change this to vivify these elements.

  There's no attempt here to limit the vivification to lvalue contexts:
  the existing op flags aren't enough to detect \(@a), and attempting
  to catch all cases where a new flag needs to be set would be an
  error-prone process.

  Fixes [perl #8910].
* pp_multiconcat(): fix win32 compiler warning (David Mitchell, 2018-01-02; 1 file, -1/+1)

      pp_hot.c(930) : warning C4146: unary minus operator applied to
      unsigned type, result still unsigned

  Negating an unsigned STRLEN (aka Size_t) string length in this case
  will never encounter a situation where the value is too big to be
  negated, because at that point we have both the string and a grown
  buffer of at least equal size in memory simultaneously, so
  targ_len < Size_t_MAX/2. So just cast away the warning.
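  A sketch of the kind of cast that silences C4146 (variable names
  follow the warning above; the real expression may differ):

      STRLEN targ_len = SvCUR(targ);

      /* p += -targ_len;           warns: unary minus on unsigned */

      /* cast to signed first; safe because targ_len < Size_t_MAX/2 */
      p += -(SSize_t)targ_len;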
* s/// in boolean context: simplify return value (David Mitchell, 2017-12-19; 1 file, -2/+3)

  Normally s/// returns a count of the number of iterations, but as an
  optimisation, in boolean context it returns PL_sv_yes/PL_sv_zero
  instead. But in the places where it decides which immortal var to
  return, the number of iterations is always > 0, so PL_sv_zero never
  gets returned. So skip testing whether iters > 0 and always just
  return PL_sv_yes. (In non-boolean scalar context, it still returns
  the iteration count as before.)
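  In code terms, the simplification amounts to something like this (a
  sketch, not the literal pp_subst() diff):

      /* boolean context: iters is known to be > 0 on this path */

      /* before: */
      PUSHs(iters ? &PL_sv_yes : &PL_sv_zero);

      /* after: */
      PUSHs(&PL_sv_yes);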
* avoid tainting boolean return value of s/// (David Mitchell, 2017-12-19; 1 file, -2/+2)

  RT #132385

  s/// normally returns an integer count, but sometimes for efficiency
  it will return a boolean instead (PL_sv_yes/PL_sv_zero). In these
  cases, don't try to taint the return value, since it will die with
  'Modification of a read-only value'.
* s///: return boolean in not-in-place branch (David Mitchell, 2017-12-19; 1 file, -1/+4)

  A while back, s/// (and other ops) were optimised to return
  PL_sv_yes/PL_sv_zero rather than an iteration count in boolean
  context. This optimisation was missed in one place in pp_subst(): the
  'can modify in place' branch was handled, but the other branch was
  overlooked. This commit fixes that.
* pp_multiconcat() Use faster UTF-8 variant counting (Karl Williamson, 2017-12-13; 1 file, -7/+3)
* stop using &PL_sv_yes as no-op method (Zefram, 2017-12-05; 1 file, -10/+0)

  Method lookup yields a fake method for ->import or ->unimport if
  there's no actual method, for historical reasons, so that "use"
  doesn't barf if there's no import method. This fake method used to be
  &PL_sv_yes being used as a magic placeholder, recognised specially by
  pp_entersub. But &PL_sv_yes is a string, which we'd expect to serve
  as a symbolic CV ref. Change method lookup to yield an actual CV with
  a body in this case, and remove the special case from pp_entersub.
  This fixes the remaining part of [perl #126042].
* $overloaded .= $x: don't stringify $x (David Mitchell, 2017-11-28; 1 file, -0/+13)

  RT #132385

  This is a variant of the ($ref . $overloaded) bug which was fixed
  with v5.27.5-195-gb3ab0375cb. Basically, when the overloaded concat
  method is called, it should pass $x as-is, rather than as "$x".

  This fixes PDL-2.018.
* MULTICONCAT - use distinct TMPS for const overload (David Mitchell, 2017-11-20; 1 file, -13/+2)

  Because OP_MULTICONCAT optimises away any const SVs, they have to be
  recreated if a concat overload method is called. Up until now (for
  efficiency) the same SvTEMP was used to create each const TEMP. This
  caused problems if an overload method saved a ref to the argument.

  This is easily fixed by not reusing the TEMP (and the extra
  inefficiency is small compared to the overall burden of calling out
  to an overloaded method).

  With this patch, the following test code changes from getting "BB" to
  getting "AB":

      my @a;
      use overload '.' => sub { push @a, \$_[1]; $_[0] };
      my $o = bless [];
      my $x = $o . "A" . $o . 'B';
      is "${$a[0]}${$a[2]}", "AB", "RT #132385";
* fix tainting of s/// with overloaded replacement (Zefram, 2017-11-19; 1 file, -3/+1)

  The substitution code was trying to track the taintedness of the
  replacement string itself, but it didn't account for the replacement
  being an untainted object with overloading that returns a tainted
  stringification. It looked at the taintedness of the object value,
  not realising that taint could arise during the string concatenation
  per se.

  Change the taint checks to look at the actual TAINT_get flag after
  string concatenation. This may falsely ascribe to the replacement
  taint that actually came from somewhere else, but the end result is
  the same anyway: there's no visible behaviour that distinguishes
  taint specifically from the replacement. Also remove a related taint
  check that seems to be not needed at all.

  Fixes [perl #115266].
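  A sketch of the shape of the new check (simplified; SUBST_TAINT_REPL
  is pp_subst()'s existing taint bookkeeping flag):

      /* appending the replacement may run an overloaded stringify,
         which can set the interpreter-wide taint flag */
      sv_catsv(dstr, repl);

      /* look at the actual flag after concatenation, rather than at
         the taintedness of the replacement SV itself */
      if (UNLIKELY(TAINT_get))
          rxtainted |= SUBST_TAINT_REPL;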
* change OP_MULTICONCAT nargs from UV to SSize_t (David Mitchell, 2017-11-13; 1 file, -4/+4)

  Change it from unsigned to signed, since that makes the SP-adjusting
  code in pp_multiconcat easier without hitting undefined behaviour
  (RT #132390); and change its size from UV to SSize_t since it
  represents the number of args on the stack.
* rename op_aux field from 'size' to 'ssize' (David Mitchell, 2017-11-13; 1 file, -12/+12)

  This part of the op_aux union was added for OP_MULTICONCAT; it's
  actually of type SSize_t, so rename it to ssize to better reflect
  that it's signed. This should make no functional difference.
* pp_multiconcat: don't stringify LHS overload arg (David Mitchell, 2017-11-04; 1 file, -0/+35)

  RT #132385

  In something like

      $a1 . $a2

  where $a2 is overloaded, the concat overload method was being called
  like

      concat($a2, "$a1", 1);

  (The 1 indicates that the args are reversed.)

  This commit changes it so that it's called as

      concat($a2, $a1, 1);

  i.e. the original arg is passed in rather than a stringified copy of
  it. This is important if, for example, $a1 is a ref.
* multiconcat: use append_utf8_from_native_byte() (David Mitchell, 2017-11-02; 1 file, -8/+3)

  This small inline function does what my code was doing manually in a
  couple of places. Should be no functional difference, just makes the
  code tidier. Suggested by Karl.
* Add OP_MULTICONCAT op (David Mitchell, 2017-10-31; 1 file, -0/+803)

  Allow multiple OP_CONCAT, OP_CONST ops, plus optionally an OP_SASSIGN
  or OP_STRINGIFY, to be combined into a single OP_MULTICONCAT op,
  which can make things a *lot* faster: 4x or more.

  In more detail: it will optimise into a single OP_MULTICONCAT, most
  expressions of the form

      LHS RHS

  where LHS is one of

      (empty)
      my $lexical =
      $lexical    =
      $lexical   .=
      expression  =
      expression .=

  and RHS is one of

      (A . B . C . ...)   where A,B,C etc are expressions and/or
                          string constants

      "aAbBc..."          where a,A,b,B etc are expressions and/or
                          string constants

      sprintf "..%s..%s..", A,B,..
                          where the format is a constant string
                          containing only '%s' and '%%' elements, and
                          A,B, etc are scalar expressions (so only a
                          fixed, compile-time-known number of args: no
                          arrays or list context function calls etc)

  It doesn't optimise other forms, such as

      ($a . $b) . ($c. $d)
      ((($a .= $b) .= $c) .= $d);

  (although sub-parts of those expressions might be converted to an
  OP_MULTICONCAT). This is partly because it would be hard to maintain
  the correct ordering of tie or overload calls.

  The compiler uses heuristics to determine when to convert: in
  general, expressions involving a single OP_CONCAT aren't converted,
  unless some other saving can be made, for example if an OP_CONST can
  be eliminated, or in the presence of 'my $x = .. ', which
  OP_MULTICONCAT can apply OPpTARGET_MY to, but OP_CONST can't.

  The multiconcat op is of type UNOP_AUX, with the op_aux structure
  directly holding a pointer to a single constant char* string plus a
  list of segment lengths. So for

      "a=$a b=$b\n";

  the constant string is "a= b=\n", and the segment lengths are
  (2,3,1). If the constant string has different non-utf8 and utf8
  representations (such as "\x80") then both variants are pre-computed
  and stored in the aux struct, along with two sets of segment lengths.

  For all the above LHS types, any SASSIGN op is optimised away. For a
  LHS of '$lex=', '$lex.=' or 'my $lex=', the PADSV is optimised away
  too.

  For example, where $a and $b are lexical vars, this statement:

      my $c = "a=$a, b=$b\n";

  formerly compiled to

      const[PV "a="] s
      padsv[$a:1,3] s
      concat[t4] sK/2
      const[PV ", b="] s
      concat[t5] sKS/2
      padsv[$b:1,3] s
      concat[t6] sKS/2
      const[PV "\n"] s
      concat[t7] sKS/2
      padsv[$c:2,3] sRM*/LVINTRO
      sassign vKS/2

  and now compiles to:

      padsv[$a:1,3] s
      padsv[$b:1,3] s
      multiconcat("a=, b=\n",2,4,1)[$c:2,3] vK/LVINTRO,TARGMY,STRINGIFY

  In terms of how much faster it is, this code:

      my $a = "the quick brown fox jumps over the lazy dog";
      my $b = "to be, or not to be; sorry, what was the question again?";

      for my $i (1..10_000_000) {
          my $c = "a=$a, b=$b\n";
      }

  runs 2.7 times faster, and if you throw utf8 mixtures in it gets even
  better. This loop runs 4 times faster:

      my $s;
      my $a = "ab\x{100}cde";
      my $b = "fghij";
      my $c = "\x{101}klmn";

      for my $i (1..10_000_000) {
          $s = "\x{100}wxyz";
          $s .= "foo=$a bar=$b baz=$c";
      }

  The main ways in which OP_MULTICONCAT gains its speed are:

  * any OP_CONSTs are eliminated, and the constant bits (already in the
    right encoding) are copied directly from the constant string
    attached to the op's aux structure.

  * It optimises away any SASSIGN op, and possibly a PADSV op on the
    LHS, in all cases; OP_CONCAT only did this in very limited
    circumstances.

  * Because it has a holistic view of the entire concatenation
    expression, it can do the whole thing in one efficient go, rather
    than creating and copying intermediate results. pp_multiconcat()
    goes to considerable efforts to avoid inefficiencies. For example
    it will only SvGROW() the target once, and to the exact size
    needed, no matter what mix of utf8 and non-utf8 appear on the LHS
    and RHS. It never allocates any temporary SVs except possibly in
    the case of tie or overloading.

  * It does all its own appending and utf8 handling rather than calling
    out to functions like sv_catsv().

  * It's very good at handling the LHS appearing on the RHS; for
    example in

        $x = "abcd";
        $x = "-$x-$x-";

    it will do roughly the equivalent of the following (where targ
    is $x):

        SvPV_force(targ);
        SvGROW(targ, 11);
        p = SvPVX(targ);
        Move(p, p+1, 4, char);
        Copy("-", p, 1, char);
        Copy("-", p+5, 1, char);
        Copy(p+1, p+6, 4, char);
        Copy("-", p+10, 1, char);
        SvCUR(targ) = 11;
        p[11] = '\0';

    Formerly, pp_concat would have used multiple PADTMPs or temporary
    SVs to handle situations like that.

  The code is quite big; both S_maybe_multiconcat() and
  pp_multiconcat() (the main compile-time and runtime parts of the
  implementation) are over 700 lines each. It turns out that when you
  combine multiple ops, the number of edge cases grows exponentially
  ;-)
* pp_hot.c: simplify cpp conditionals (Aaron Crane, 2017-10-21; 1 file, -8/+4)
* Make pp_multideref handle local $::{subref} (Father Chrysostomos, 2017-10-08; 1 file, -1/+1)

  Based on a patch by Nicholas R.
* [perl #129916] Allow sub-in-stash outside of main (Father Chrysostomos, 2017-10-08; 1 file, -1/+1)

  The sub-in-stash optimization introduced in 2eaf799e only applied to
  subs in the main stash, not in other stashes, due to a problem with
  the logic in newATTRSUB.

  This comment:

      Also, we may be called from load_module at run time, so
      PL_curstash (which sets CvSTASH) may not point to the stash the
      sub is stored in.

  explains why we need the PL_curstash != CopSTASH(PL_curcop) check.
  (Perl_load_module will fail without it.) But that logic does not work
  properly at compile time (when PL_curcop == &PL_compiling). The value
  of CopSTASH(&PL_compiling) is never actually used. It is always set
  to the main stash. So if we check that
  PL_curstash != CopSTASH(PL_curcop) and forego the optimization in
  that case, we will never optimize subs outside of the main stash.

  What we really need is to check

      IN_PERL_RUNTIME && PL_curstash != CopSTASH(PL_curcop)

  i.e., forego the optimization at run time if the stashes differ. That
  is what this commit implements.

  One observable side effect of this change is that deleting a stash
  element no longer anonymizes the CV if the CV had no GV that it was
  depending on to provide its name. Since the main thing in such
  situations is that we do not get a crash, I think this change
  (arguably an improvement) is acceptable.

  -----------

  A bit of explanation of various other changes:

  gv.c:require_tie_mod needed a bit of help, since it could not handle
  sub refs in stashes.

  To keep localisation of stash elements working the same way,
  local($Stash::{foo}) now upgrades a coderef to a full GV before the
  localisation. (Changes in two pp*.c files and in scope.c:save_gp.)

  t/op/stash.t contains a test that makes sure that perl does not crash
  when a GV with a CV pointing to it gets deleted. This commit tweaks
  the test so that it continues to test that. (There has to be a GV for
  the test to test what it is meant to test.) Similarly with
  t/uni/caller.t and t/uni/stash.t.

  op.c:rv2cv_op_cv with the _MAYBE_NAME_GV flag was returning the
  calling GV in those cases where a GV-less sub is called via a GV.
  E.g., *main = \&Foo::foo; main(). This meant that errors like ‘Not
  enough arguments’ were giving the wrong sub name.

  newATTRSUB was not calling mro_method_changed_in when storing a sub
  as an RV.

  gv_init needs to arrange for the new GV to have the file and line
  number corresponding to the sub in it. These are taken from CvSTART,
  which may be off by a few lines, but is the closest we have to the
  place the sub was declared.
* (perl #131746) avoid undefined behaviour in Copy() etc (Tony Cook, 2017-09-04; 1 file, -1/+2)

  These functions depend on C library functions which have undefined
  behaviour when passed NULL pointers, even when passed a zero 'n'
  value. Some compilers use this information, i.e. they assume the
  pointers are non-NULL when optimizing any following code, so we do
  need to prevent such unguarded calls.

  My initial thought was to add conditionals to each macro to skip the
  call to the library function when n is zero, but this adds a cost to
  every use of these macros, even when the n value is always true. So
  instead I added asserts(), which give us a much more visible
  indicator of such broken code and revealed the pp_caller and Glob.xs
  issues also patched here.
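  A simplified sketch of the guard described; the real perl.h macros
  are more involved:

      #include <assert.h>
      #include <string.h>

      /* memcpy() with a NULL pointer is undefined even when n == 0,
         and optimisers exploit that; assert rather than branch on n */
      #define Copy(s,d,n,t) \
          (assert((d) != NULL), assert((s) != NULL), \
           (void)memcpy((char*)(d), (const char*)(s), (n) * sizeof(t)))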
* add a stack extend check to pp_entersub for XS subs (Tony Cook, 2017-08-31; 1 file, -0/+15)

  This allows us to report the XSUB involved by name (or at least by
  filename if it's anonymous) in the likely case that it was an XSUB
  that failed to extend the stack.
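  A hedged sketch of what such a check might look like; the condition
  and message here are illustrative, not the literal commit:

      /* after the XSUB returns, detect that it pushed past the
         allocated stack without calling EXTEND() */
      CvXSUB(cv)(aTHX_ cv);
      if (PL_stack_sp > PL_stack_max)
          /* hypothetical message; the real one names the XSUB or,
             failing that, its file */
          Perl_croak(aTHX_ "panic: XSUB %s failed to extend the stack",
                     CvFILE(cv) ? CvFILE(cv) : "(unknown)");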
* (perl #128263) handle PL_last_in_gv being &PL_sv_undef (Tony Cook, 2017-08-31; 1 file, -4/+1)

  rv2gv will return &PL_sv_undef when it can't get a GV; previously
  this could cause an assertion failure in mg.c.

  My original fix for this changed each op that deals with GVs for I/O
  to set PL_last_in_gv to NULL if there was no io object in the GV, but
  this changes other behaviour, as noted by FatherC.

  This also partly reverts 84ee769f, which unnecessarily did the same
  for readline(), so now we're consistent.
* make scalar(keys(%lexical)) less slow. (David Mitchell, 2017-07-27; 1 file, -0/+18)

  A recent commit in this branch made OP_PADHV / OP_RV2HV in
  void/scalar context, followed by OP_KEYS, optimise away the OP_KEYS
  op and set the OPpPADHV_ISKEYS or OPpRV2HV_ISKEYS flag on the
  OP_PADHV / OP_RV2HV op.

  However, in scalar but non-boolean context with OP_PADHV, this
  actually makes it slower, because the OP_KEYS op has a target while
  the OP_PADHV op doesn't, so it has to create a new mortal each time
  to return the integer value. This commit fixes that by, in the case
  of scalar padhv, retaining the OP_KEYS node (although still not
  keeping it in the execution path), then at runtime using that op's
  otherwise unused target. This only works on PERL_OP_PARENT builds
  (now the default), as the OP_KEYS is the parent of the OP_PADHV and
  so would be hard to find at runtime otherwise.

  This commit also fixes pp_padhv/pp_rv2hv in void context - formerly
  it was needlessly pushing a scalar-valued count as in scalar context.
* hv_pushkv(): handle keys() and values() too (David Mitchell, 2017-07-27; 1 file, -1/+1)

  The newish function hv_pushkv() currently just pushes all key/value
  pairs on the stack, i.e. it does the equivalent of the perl code
  '() = %h'. Extend it so that it can handle 'keys %h' and 'values %h'
  too. This is basically moving the remaining list-context
  functionality out of do_kv() and into hv_pushkv().

  The rationale for this is that hv_pushkv() is a pure HV-related
  function, while do_kv() is a pp function for several ops including
  OP_KEYS/VALUES, and expects PL_op->op_flags/op_private to be valid.
* S_padhv_rv2hv_common(): reorganise code (David Mitchell, 2017-07-27; 1 file, -28/+29)

  There are three main booleans in play here:

  * whether the hash is tied;
  * whether we're in boolean context;
  * whether we're implementing 'keys %h'.

  Reorganise the if-tree logic for these up-to-8 permutations to make
  the code simpler. In particular, make it so that each of these is
  done in only one place:

  * call HvUSEDKEYS();
  * call magic_scalarpack();
  * push an integer return value, either as TARG or a mortal.

  The functionality should be unchanged, except that now 'scalar(%h)',
  where %h isn't tied, will return an integer value using the targ if
  available rather than always creating a new mortal.
* S_padhv_rv2hv_common(): unroll hv_scalar() calls (David Mitchell, 2017-07-27; 1 file, -4/+9)

  This function makes a couple of calls to hv_scalar(), which does one
  of two things depending on whether the hash is tied or not. Since in
  S_padhv_rv2hv_common() we've already determined whether the hash is
  tied, just include the relevant part(s) of hv_scalar() directly. The
  code will be reorganised shortly.
* simplify keys(%tied_hash) in boolean context. (David Mitchell, 2017-07-27; 1 file, -3/+6)

  Previously something like

      if (keys %tied_hash) { ... }

  would have called FIRSTKEY(), followed by NEXTKEY() x N. Now, it just
  calls SCALAR() once if present, and if not, falls back to calling
  just FIRSTKEY() once, i.e. it only needs to determine whether at
  least one key is present.

  The behaviour of 'keys(%tied)' in boolean context now matches that of
  '(%tied)' in boolean context. See
  http://nntp.perl.org/group/perl.perl5.porters/245463.
* S_pushav(): tail call optimise (David Mitchell, 2017-07-27; 1 file, -11/+11)

  Make it return PL_op->op_next so that (some of) its callers can be
  tail-call optimised, if the compiler supports such a thing.
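  A sketch of the resulting shape (NORMAL is the usual pp macro for
  PL_op->op_next; the real functions take more parameters):

      STATIC OP*
      S_pushav(pTHX_ AV* const av)
      {
          /* ... push the array's elements onto the stack ... */
          return NORMAL;              /* i.e. PL_op->op_next */
      }

      PP(pp_padav)
      {
          AV * const av = MUTABLE_AV(PAD_SV(PL_op->op_targ));
          /* ... flag handling elided ... */
          return S_pushav(aTHX_ av);  /* tail call -> a single jump */
      }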
* pp_padav(): use S_pushav() (David Mitchell, 2017-07-27; 1 file, -19/+4)

  The previous commit harmonised the two functions, so it's OK to use
  S_pushav() now.
* harmonise S_pushav() and pp_padav() (David Mitchell, 2017-07-27; 1 file, -9/+3)

  These two functions contain a similar block of code to push an array
  onto the stack. However, they have some slight differences, which
  this commit removes. This will allow padav() to call S_pushav() in
  the next commit. The two differences are:

  1) S_pushav(), when pushing elements of a magical array, calls
     mg_get() on each element. This is to ensure that e.g. in

         sub f { /..../; @+ }

     the elements of @+ are set *before* the current pattern goes out
     of scope when they are returned. However, since probably
     v5.11.5-132-gfd69380 and v5.13.0-22-g2d961f6, the mg_get is no
     longer required.

  2) S_pushav() uses the SvRMAGICAL() test to decide whether it's
     unsafe to access AvARRAY directly; pp_padav() uses SvMAGICAL().
     The latter seems too severe, so I've changed it to SvRMAGICAL().
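  In sketch form, the test the two functions now share (simplified):

      if (SvRMAGICAL(av)) {
          /* container magic may intercept element fetches, so go
             through av_fetch() for each index */
      }
      else {
          /* no relevant magic: read the element slots directly */
          SV **svp = AvARRAY(av);
          /* ... push svp[0 .. AvFILLp(av)] ... */
      }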
* create Perl_hv_pushkv() function (David Mitchell, 2017-07-27; 1 file, -4/+3)

  ...and make pp_padhv(), pp_rv2hv() use it rather than using
  Perl_do_kv().

  Both pp_padhv() and pp_rv2hv() (via S_padhv_rv2hv_common()) outsource
  to Perl_do_kv() the list-context pushing/flattening of a hash onto
  the stack. Perl_do_kv() is a big function that handles all the
  actions of keys, values etc. Instead, create a new function which
  does just the pushing of a hash onto the stack.

  At the same time, split it out into two loops, one for tied, one for
  normal: the untied one can skip extending the stack on each
  iteration, and use a cheaper HeVAL() instead of calling hv_iterval().
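  A sketch of the untied fast loop described (simplified from the real
  Perl_hv_pushkv(), which also handles the keys/values-only cases):

      dSP;
      HE *he;

      hv_iterinit(hv);
      /* untied: the size is known up front, so extend the stack once
         rather than on every iteration */
      EXTEND(SP, (SSize_t)HvUSEDKEYS(hv) * 2);
      while ((he = hv_iternext(hv))) {
          PUSHs(hv_iterkeysv(he));    /* key as a (mortal) SV */
          PUSHs(HeVAL(he));           /* cheap direct value access */
      }
      PUTBACK;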
* Give OP_RV2HV a targ (David Mitchell, 2017-07-27; 1 file, -4/+14)

  OP_RV2AV already has one; it's not clear why OP_RV2HV didn't. Having
  one means that in scalar context it can return an int value without
  having to create a mortal. Ditto when it's doing 'keys %h' via
  OPpRV2HV_ISKEYS.
* add S_padhv_rv2hv_common() function (David Mitchell, 2017-07-27; 1 file, -76/+64)

  This STATIC INLINE function extracts out a chunk of common code from
  pp_padhv() and pp_rv2hv() (well, from pp_rv2av() actually, since that
  handles OP_RV2HV too). Should be no functional changes, except that
  now, in void context, 'keys %h' doesn't leave any rubbish on the
  stack.