| Commit message (Collapse) | Author | Age | Files | Lines |
| |
This warning was introduced in db54010671d6c27faf667d658073743b14cd9b58
and is about comparing signed and unsigned results. This commit casts
both operands to ptrdiff_t, which is likely the widest signed type
available on the platform. This can fail if one of the operands is
greater than PTRDIFF_MAX, but lots of other things can fail in that
case as well. As the reply from Tomasz Konojacki in the thread starting
with http://nntp.perl.org/group/perl.perl5.porters/251541 points out,
compilers are now assuming that no object is larger than PTRDIFF_MAX,
and if they can assume that, so can we.
|
| |
which evaluates to SSize_t in the unlikely event that there isn't a
ptrdiff_t on the platform. ptrdiff_t is safer than ssize_t, as the
latter need not be able to represent any negative value other than -1.
|
| |
The ANYOF regnode type (generated by bracketed character classes and
\p{}) was allocating too much space because the argument field was
being counted twice.
|
| |
Its tests were modified in commit 1ddd2f5f.
|
| |
(Existing blead customizations are no longer needed.)
|
| |
[DELTA]
2.32 13/09/2018 (CBERRY)
- Fix absolute path handling on VMS
|
| |
Both these have been released to CPAN recently.
|
| |
Since anything with quadmath should be recent enough to have
vsnprintf(), I've fallen back to that.
Calls to PerlIO_debug() in core don't include floating point values,
and I expect that to be unlikely outside of core - if it is needed,
callers will just have to use double formatting/types.
|
| |
With -Dquadmath C++ builds, the calls to log10() and ldexp() would
cause ambiguous overloaded function errors, since all of log10(float),
log10(double) and log10(long double) were candidates for a
log10(__float128) call. Similarly for ldexp().
signbit() had a different problem: two of the tests in ext/POSIX/t/math.t
failed with the default signbit() macro, presumably because the
__float128 was being converted to a long double, since the macro in
math.h didn't special-case __float128.
|
| |
Instead use the end pointer passed in.
|
| |
And, in passing, group upper- and lower-case switches properly.
For: RT 133044. Thanks to contributor KES.
|
| |
Documentation should refer to getgr(), per report from Elizabeth
Mattijsen.
Increment $VERSION.
For: RT 133217
|
| |
Per recommendation by Elizabeth Mattijsen.
For: RT 133217
|
| |
For: RT 133217
|
| |
Spotted by Axel Beckert
|
| |
Commit 1a69c9a77a no longer uses S_invlist_set_len in ext/re/re_comp.c,
but didn't adjust embed.fnc accordingly. This patch moves that function
into the #ifndef PERL_EXT_RE_BUILD block in embed.fnc. It also
includes regenerated embed.h and proto.h files.
|
| |
Constant folding sets PL_warnhook to PERL_WARNHOOK_FATAL, which is
&PL_sv_placeholder, an undef SV.
If warn() is called while constant folding, invoke_exception_hook()
attempts to use the value of a non-NULL PL_warnhook as a CV, causing
an undefined value warning.
invoke_exception_hook() now treats a PL_warnhook of PERL_WARNHOOK_FATAL
the same as NULL, falling back to the normal warning handling, which
throws an exception to abort constant folding.
|
| |
RT #133441
TL;DR:
(($lex = expr1.expr2) .= expr3) was being misinterpreted as
(expr1 . expr2 . expr3) when the ($lex = expr1) subtree had had the
assign op optimised away by the OPpTARGET_MY optimisation.
Full details:
S_maybe_multiconcat() looks for suitable chains of OP_CONCAT to convert
into a single OP_MULTICONCAT.
Part of the code needs to distinguish between (expr . expr) and
(expr .= expr). This didn't use to be easy, as both are just OP_CONCAT
ops, but with OPf_STACKED set on the second one. But...
perl also used to optimise ($a . $b . $c) into ($a . $b) .= $c, to
reuse the padtmp returned by the $a.$b concat. This meant that an
OP_CONCAT could have the OPf_STACKED flag on even when it was a '.'
rather than a '.='.
I disambiguated these cases by seeing whether the top op in the LHS
expression had the OPf_MOD flag set too - if so, it implies '.='.
This fails in the specific case where the LHS expression is a
sub-expression which is assigned to a lexical variable, e.g.
($lex = $a+$b) .= $c.
Initially the top node in the LHS expression above is OP_SASSIGN, with
OPf_MOD set due to the enclosing '.='. Then the OPpTARGET_MY
optimisation kicks in, and the ($lex = $a + $b) part of the optree is
converted from
    sassign sKPRMS
        add[t4] sK
            padsv[$a] s
            padsv[$b] s
        padsv[$lex] s
to
    add[$lex] sK/TARGMY
        padsv[$a] s
        padsv[$b] s
which is all fine and dandy, except that the top node of that optree no
longer has the OPf_MOD flag set, which trips up S_maybe_multiconcat into
no longer spotting that the outer concat is a '.=' rather than a '.'.
Whether the OPpTARGET_MY optimising code should copy the OPf_MOD from
the being-removed sassign op to its successor is an issue I won't
address here. But in the meantime, the good news is that for 5.28.0
I added the OPpCONCAT_NESTED private flag, which is set whenever
($a . $b . $c) is optimised into ($a . $b) .= $c. This means that it's
no longer necessary to inspect the OPf_MOD flag of the first child to
disambiguate the two cases. So the fix is trivial.
|
| |
mkpath can be called in multiple ways, but most aren't supported by very
old versions of File::Path. A prerequisite could be added on a newer
version of the module that supports the new call signature, but this
introduces a circular dependency. While theoretically this dependency
should be resolvable, since the File::Spec prereq listed in File::Path
is version 0, some toolchains (in particular older CPAN.pm) will fail
to do so.
There isn't any particular advantage to using the new call signature, so
a simple solution is to adjust the test to use the older style.
|
| |
There were a few problems:
- the purpose of recur_sv wasn't clear; I believe I understand it
now from looking at where recur_sv was actually being used.
Frankly, the logic of the code itself was hard to follow, apparently
only counting a level if recur_sv was equal to the current SV.
Fixed by adding some documentation to recur_sv in the context
structure. The logic has been re-worked (see below) to hopefully
make it more understandable.
- the conditional checks for incrementing/decrementing recur_depth
at the beginnings and ends of the store_array() and store_hash()
handlers didn't match, since recur_sv was both explicitly modified
by those functions and implicitly modified in their recursive calls
to process elements.
Fixed by storing the starting value of cxt->recur_sv locally and
testing against that instead of against the value that might be
modified recursively.
- the checks in store_ref(), store_array() and store_l?hash() were
over-complex, obscuring their purpose.
Fixed by:
- always count a recursion level in store_ref() and store the
RV in recur_sv
- only count a recursion level in the array/hash handlers if
the SV didn't match.
- skip the check against cxt->entry, if we're in this code
we could be recursing, so we want to detect it.
- (after the other changes) the recursion checks in store_hash()/
store_lhash() only checked the limit if the SV didn't match the
recur_sv, which horribly broke things.
Fixed by:
- making only the depth increment conditional, and always checking
against the limit if one is set.
|
| | |
Improve the code and macros in S_regmatch() to make opening and closing
captures (groups) more consistent and simpler.
Shouldn't make any changes to behaviour apart from improved debugging
output.
|
| | |
There's lots of confusion here, especially about lastparen - some of
the docs are just plain wrong.
|
| | |
(and tweak the debugging output of CLOSE_CAPTURE())
|
| | |
Every use of the CLOSE_CAPTURE() macro is followed by the setting of
lastparen and lastcloseparen, so include these actions in the macro
itself.
|
| | |
This macro includes debugging output, so by using it rather than
setting rex->offs[paren].start/end directly, you get better debugging.
|
| | |
Make its index and start+end values into parameters. This will shortly
allow its use in other places, bringing consistent code and debug logging
to the whole of S_regmatch().
|
| | |
Move this macro to earlier in the file to be with the other functions
and macros which deal with setting and restoring captures.
No changes (functional or textual) apart from the physical moving of the
13 lines.
|
| | |
The (?n) mechanism allows you to 'gosub' to a subpattern delineated by
capture n. For 1-char-width repeats, such as a+, \w*?, (\d)*, the code
currently checks whether it's in a gosub each time it attempts
to start executing the B part of A*B, regardless of whether the A is
in a capture.
This commit moves the GOSUB check to within the capture-only variant
(CURLYN), which then directly just looks for one instance of A and
returns. This moves the check away from more frequently called code
paths.
|
| | |
Specifically, the code path wasn't being exercised where the gosub
goes to a capture which is a 1-char wide *non-greedy* repeat, such as
/ ... (\d)*? ... (?1) ... /
|
| |
There are currently two similar backtracking states for simple
non-greedy pattern repeats:
CURLY_B_min
CURLY_B_min_known
The latter is a variant of the former for when the character that must
follow the repeat is known, e.g. /(...)*?X.../, which allows quickly
skipping to the next viable position.
The code for the two cases:
case CURLY_B_min_fail:
case CURLY_B_min_known_fail:
share a lot of similarities. This commit merges the two states into a
single CURLY_B_min state, with an associated single CURLY_B_min_fail
fail state.
That one code block can handle both types, with a single
if (ST.c1 == CHRTEST_VOID) ...
test to choose between the two variant parts of the code.
This makes the code smaller and more maintainable, at the cost of one
extra test per backtrack.
|
| |
This paragraph can lead to ambiguity because its example uses the `for`
keyword but the text then says: Perl executes a foreach statement more
rapidly than it would the equivalent **for** loop.