| Commit message | Author | Age | Files | Lines |
|
Mostly in comments and docs, but some in diagnostic messages and one
case of 'or die die'.
|
The following code:
sub f ($x,$y) {
study;
}
used to compile as:
a <1> leavesub[1 ref] K/REFC,1 ->(end)
- <@> lineseq KP ->a
1 <;> nextstate(main 5 p:5) v:%,fea=7 ->2
2 <+> argcheck(2,0) v ->3
3 <;> nextstate(main 3 p:5) v:%,fea=7 ->4
4 <+> argelem(0)[$x:3,5] v/SV ->5
5 <;> nextstate(main 4 p:5) v:%,fea=7 ->6
6 <+> argelem(1)[$y:4,5] v/SV ->7
- <;> ex-nextstate(main 5 p:5) v:%,fea=7 ->7
7 <;> nextstate(main 5 p:6) v:%,fea=7 ->8
9 <1> study sK/1 ->a
- <1> ex-rv2sv sK/1 ->9
8 <$> gvsv(*_) s ->9
Following this commit, it compiles as:
a <1> leavesub[1 ref] K/REFC,1 ->(end)
- <@> lineseq KP ->a
- <1> ex-argcheck vK/1 ->7
- <@> lineseq vK ->-
1 <;> nextstate(main 5 p:5) v:%,fea=7 ->2
2 <+> argcheck(2,0) v ->3
3 <;> nextstate(main 3 p:5) v:%,fea=7 ->4
4 <+> argelem(0)[$x:3,5] v/SV ->5
5 <;> nextstate(main 4 p:5) v:%,fea=7 ->6
6 <+> argelem(1)[$y:4,5] v/SV ->7
- <;> ex-nextstate(main 5 p:5) v:%,fea=7 ->-
7 <;> nextstate(main 5 p:6) v:%,fea=7 ->8
9 <1> study sK/1 ->a
- <1> ex-rv2sv sK/1 ->9
8 <#> gvsv[*_] s ->9
All the ops associated with the signature have been put in their own
subtree, with an extra NULL ex-argcheck op "on top". The op on top
serves two purposes: first, it makes it easier for Deparse.pm etc to
spot signature code; secondly, it may at some point in the future be
upgraded to OP_SIGNATURE when signatures get optimised. It's of type
ex-argcheck only because when being created it needs to be an op type
that's in class UNOP_AUX so that the created op will be suitable for
later optimising, and making it an ex-type associated with signatures
helps flag it as such.
There should be no functional changes apart from the shape of the
optree.
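For reference, a dump like either of the above can be reproduced with B::Concise along these lines (a sketch; the file name is made up):
    # sig.pl
    use feature 'signatures';
    no warnings 'experimental::signatures';
    sub f ($x, $y) { study; }
and then:
    perl -MO=Concise,f sig.pl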
|
This op is of class UNOP_AUX. Ops of this class have an op_aux pointer
which typically points to a variable-length malloced array of IVs,
UVs, etc. However in the specific case of OP_ARGCHECK the data stored
in the aux struct is fixed. So this commit casts the aux pointer to a
struct containing the relevant fields (number of parameters etc), rather
than referring to them as aux[0], aux[1] etc. This makes the code more
readable.
Should be no functional changes.
|
This exposes a GRAMSUBSIGNATURE top-level production from perly.y for
toke.c to consume, which parses a subroutine signature, inside the
parens.
This needed a small change to the existing rules, to pull out a rule
that handles all of the insides of the parens but not the parens
themselves.
(h/t to ilmari for the suggestion)
|
No significant changes in the generated code since 3.0.
|
When doing a subparse, the tokenizer opens up the subexpression
with a '(' token, expecting the logic in yylex() to generate a ')'
once it sees the end of the subparsed string.
If the subparsed string includes an unmatched ')', this could confuse
the parser into finishing the expression and effectively exiting the
subparse without cleaning up (including popping the context).
To avoid that, create special bracketing tokens only generated for
a subparse and update the grammar to use those tokens in the cases
they might be used in a subparse.
This is an updated version of the patch that moves the FUNC rule
to where the original rule was and adds a test for non-regexp
subparses.
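For illustration (assuming the replacement part of an /e substitution is one of the constructs that go through this subparse mechanism), a sketch of such sub-parsed code:
    my $s = 'abc';
    $s =~ s/(b)/uc($1)/e;   # the replacement code 'uc($1)' is sub-parsed as Perl code
    print $s, "\n";         # aBc
An unmatched ')' would have to occur inside code like that replacement to trigger the problem described above.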
|
The items pushed onto the parser stack can be one of several types:
ival, opval, pval etc. The only remaining use of pval is when a "label:"
is encountered.
When an error occurs during parsing, ops on the parse stack get
automatically reaped these days as part of the OP slab mechanism;
but bare strings (pvals) still leak.
Convert this one remaining pval into an opval, making the toker return
an OP_CONST with an SV holding the label.
Since newSTATEOP() still expects a raw string for the label, the parser
just grabs the value returned by the toker and makes a copy of the
string from it, then immediately frees the OP_CONST and its associated
SV.
The leak was showing up in ext/XS-APItest/t/stmtasexpr.t, which expects
to parse a statement where labels are banned.
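For reference, the kind of label involved, in ordinary Perl code (a sketch):
    OUTER: for my $i (1 .. 3) {
        for my $j (1 .. 3) {
            next OUTER if $j == 2;   # 'OUTER:' is the bare label the toker now returns as an OP_CONST
        }
    }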
|
RT #132760
A recent commit (v5.27.7-212-g894f226) moved subroutine attributes back
before the subroutine's signature: e.g.
sub foo :prototype($$) ($a, $b) { ... } # 5.18 and 5.28 +
sub foo ($a, $b) :prototype($$) { ... } # 5.20 .. 5.26
This change means that any code still using an attribute following the
signature is going to trigger a syntax error. However, the error, followed
by error recovery and further warnings and errors, is very unfriendly and
gives no indication of the root cause. This commit introduces a new error,
"Subroutine attributes must come before the signature".
For example, List::Lazy, the subject of the ticket, failed to compile
tests, with output like:
Array found where operator expected at blib/lib/List/Lazy.pm line 43,
near "$$@)" (Missing operator before @)?)
"my" variable $step masks earlier declaration in same statement at
blib/lib/List/Lazy.pm line 44.
syntax error at blib/lib/List/Lazy.pm line 36, near ") :"
Global symbol "$generator" requires explicit package name (did you
forget to declare "my $generator"?) at blib/lib/List/Lazy.pm line 38.
Global symbol "$state" requires explicit package name (did you forget
to declare "my $state"?) at blib/lib/List/Lazy.pm line 39.
Global symbol "$min" requires explicit package name (did you forget to
declare "my $min"?) at blib/lib/List/Lazy.pm line 43.
Global symbol "$max" requires explicit package name (did you forget to
declare "my $max"?) at blib/lib/List/Lazy.pm line 43.
Global symbol "$step" requires explicit package name (did you forget
to declare "my $step"?) at blib/lib/List/Lazy.pm line 43.
Invalid separator character '{' in attribute list at
blib/lib/List/Lazy.pm line 44, near "$step : sub "
Global symbol "$step" requires explicit package name (did you forget
to declare "my $step"?) at blib/lib/List/Lazy.pm line 44.
But following this commit, it now just outputs:
Subroutine attributes must come before the signature at
blib/lib/List/Lazy.pm line 36.
Compilation failed in require at t/append.t line 5.
BEGIN failed--compilation aborted at t/append.t line 5.
It works by:
1) adding a boolean flag (sig_seen) to the parser state to indicate that a
signature has been parsed;
2) at the end of parsing a signature, PL_expect is set to XATTRBLOCK
rather than XBLOCK.
Then if something looking like one or more attributes is encountered
by the lexer immediately afterwards, it scans it as if it were an
attribute, but then if sig_seen is true, it croaks.
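A minimal sketch of code that now triggers the new diagnostic (file and line details will of course differ):
    use feature 'signatures';
    no warnings 'experimental::signatures';
    sub f ($x, $y) :lvalue { $x }
    # dies at compile time with:
    # Subroutine attributes must come before the signature at ...
whereas 'sub f :lvalue ($x, $y) { $x }' compiles.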
|
Now that the parser rules have been split into separate rules for subs
compiled under 'use feature "signatures"' and for subs that aren't, refine
the rules to reflect the two different regimes. In particular:
1) no longer include 'proto' in the signature variants: as it happens the
toker would never return a proto THING under signatures anyway, but
removing it from the grammar makes it clearer what's expected and not
expected.
2) Remove 'subsignature' from non-sig rules: what used to happen before
was that outside of 'use feature "signatures"', it might still try to
parse a signature, e.g.
$ perl5279 -we 'sub f :lvalue ($$@) { $x = 1 }'
Illegal character following sigil in a subroutine signature at -e line
1, near "($"
syntax error at -e line 1, near "$$@"
Now it's just a plain syntax error.
|
Currently the toker returns a SUB or ANONSUB token at the beginning
of a sub (or BEGIN etc). Change it so that in the scope of
'use feature "signatures"', it returns a SIGSUB / ANON_SIGSUB token
instead.
Then in perly.y, duplicate the 2 rules containing SUB / ANONSUB
to cope with these two new tokens.
The net effect of this is to use different rules in the parser for
parsing subs when signatures are in scope.
Since the two sets of rules have just been cut and pasted, there should
be no functional changes yet, but that will change shortly.
|
This moves a block of code out from perly.y into its own function,
because it will shortly be needed in more than one place.
Should be no functional changes.
|
RT #132141
Attributes such as :lvalue have to come *before* the signature to ensure
that they're applied to any code block within the signature; e.g.
sub f :lvalue ($a = do { $x = "abc"; return substr($x,0,1)}) {
....
}
So this commit moves sub attributes to come before the signature. This is
how they were originally, but they were swapped with v5.21.7-394-gabcf453.
This commit is essentially a revert of that commit (and its followups
v5.21.7-395-g71917f6, v5.21.7-421-g63ccd0d), plus some extra work for
Deparse, and an extra test.
See:
RT #123069 for why they were originally swapped
RT #132141 for why that broke :lvalue
http://nntp.perl.org/group/perl.perl5.porters/247999
for a general discussion about RT #132141
|
Where an arrow is omitted between subscripts, if a parenthesised
subscript is followed by a braced one, PL_expect was getting set to
XBLOCK due to code intended for "foreach (...) {...}". This broke
bareword autoquotation, and the parsing of operators following the
braced subscript. Alter PL_expect from XBLOCK to XOPERATOR following
a parenthesised subscript. Fixes [perl #8045].
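A sketch of the construct in question (the names are made up):
    my $make = sub { +{ colour => 'red' } };
    my $c = $make->(0){colour};   # arrow omitted between the (...) and {...} subscripts;
                                  # 'colour' should autoquote here, and what follows the }
                                  # should be parsed as an operator
    print $c, "\n";               # red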
|
The pumpking has determined that the CPAN breakage caused by changing
smartmatch [perl #132594] is too great for the smartmatch changes to
stay in for 5.28.
This reverts most of the merge in commit
da4e040f42421764ef069371d77c008e6b801f45. All core behaviour and
documentation is reverted. The removal of use of smartmatch from a couple
of tests (that aren't testing smartmatch) remains. Customisation of
a couple of CPAN modules to make them portable across smartmatch types
remains. A small bugfix in scope.c also remains.
|
"whereis" is like "whereso" except that it performs an implicit
smartmatch.
|
The names of ops, context types, functions, etc., all change in accordance
with the change of keyword.
|
ASSIGNOP includes mutators like += as well as basic assignment
NPD
|
then run ./regen_perly.pl to update perly files
|
Commit f5727a1c71878a34f6255eb1a506c0b21af7d36f tried to make yada-yada
be parsed consistently as a term expression, but actually things are
more complicated than that. The tokeniser didn't accept yada-yada in
the right contexts to make it usable as an expression, and changing
that would require decisions on resolving ambiguities between yada-yada
and flip-flop. It's also documented as being a statement rather than
an expression, though with some incorrect information about ambiguities.
Overall it looks more like the intent was for yada-yada to be a statement.
This commit makes it grammatically treated as such, and also fixes up
the dubious parts of the documentation. [perl #132150]
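For context, the usual statement-style use of yada-yada (a sketch):
    sub todo {
        ...   # a statement: compiles fine, but dies with "Unimplemented" if ever executed
    }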
|
In places where a rule contains multiple code blocks, ensure that
$$ is assigned a valid value at the end of midrule blocks, so that
valgrind ./perl -Dpv ...
doesn't display zillions of
Conditional jump or move depends on uninitialised value
errors, when perl tries to display the parse stack.
I've only done the various newish top-level grammar entries - these all
seemed to have the same defect, while from a quick glance elsewhere in the
file, it seemed like older rules already do this.
|
yytoken is a translated (via lookup table) version of parser->yychar.
So we only need to recalculate it when yychar changes (usually by
assigning the result of yylex() to it). This means when multiple
reductions are done without shifting another token, we skip the extra
overhead each time.
|
A few regen scripts aren't run by "make regen", either because they depend
on an external tool, or they must be run by the Perl just built. So they
must be run manually.
|
When parsing the special ${var{subscript}} syntax, the lexer notes
that the } matching the ${ will be a fake bracket, and should
be ignored.
In the case of ${p{};sub p}() the first syntax error causes tokens to
be popped, such that the } following the sub declaration ends up being
the one treated as a fake bracket and ignored.
The part of the lexer that deals with sub declarations treats a ( following
the sub name as a prototype (which is a single term) if signatures are
disabled, but ignores it and allows the rest of the lexer to treat it as a
parenthesis if signatures are enabled.
Hence, the part of the parser (perly.y) that parses signatures knows that a
parenthesis token can only come after a sub if signatures are enabled, and
asserts as much.
In the case of an error and tokens being discarded, a parenthesis may come
after a sub name as far as the parser is concerned, even though there was a }
in between that got discarded. The sub part of the lexer, of course, did not
see the parenthesis because of the intervening brace, and did not treat it as
a prototype. So we get an assertion failure.
The simplest fix is to loosen up the assertion and allow for anomalies
after errors. It does not hurt to go ahead and parse a signature at
this point, even though the feature is disabled, because there has
been a syntax error already, so the parsed code will never run, and
the parsed sub will not be installed.
|
When I moved subroutine signature processing into perly.y with
v5.25.3-101-gd3d9da4, I added a new lexer PL_expect state, XSIGVAR.
This indicated, when about to parse a variable, that it was a signature
element rather than a my variable; in particular, it makes ($,...)
be toked as the lone sigil '$' rather than the punctuation variable '$,'.
However this is a bit heavy-handed; so instead this commit adds a
new allowed pseudo-keyword value to PL_in_my: as well as KEY_my, KEY_our and
KEY_state, it can now be KEY_sigvar. This is a less intrusive change
to the lexer.
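For example, with signatures enabled the '$,' in the following is the lone sigil '$' (a nameless parameter) followed by a comma, not the punctuation variable (a sketch):
    use feature 'signatures';
    no warnings 'experimental::signatures';
    sub second ($, $x) { $x }
    print second(1, 2), "\n";   # 2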
|
The code snippets in perly.y are #included in a C function that has a
‘parser’ local variable. Local variables require less machine code
(especially under threads).
|
assigning a char from an I32 gives a warning. Add an explicit cast
as we know the int only ever holds a char.
|
During the course of parsing and execution, these values get stored
as ints and UVs, then used as SSize_t.
Standardise on IVs instead. Technically they can never be negative, but
their final use is as indices into AVs, which is SSize_t, so it's
easier to standardise on a signed value throughout.
|
e.g.
a slurpy parameter may not have a default value
=>
A slurpy parameter may not have a default value
Also, split the "Too %s arguments for subroutine" diagnostic into
separate "too few" and "too many" entries in perldiag.
Based on suggestions by Father Chrysostomos.
|
Currently subroutine signature parsing emits many small discrete ops
to implement arg handling. This commit replaces them with a couple of ops
per signature element, plus an initial signature check op.
These new ops are added to the OP tree during parsing, so will be visible
to hooks called up to and including peephole optimisation. It is intended
soon that the peephole optimiser will take these per-element ops, and
replace them with a single OP_SIGNATURE op which handles the whole
signature in a single go. So normally these ops wont actually get executed
much. But adding these intermediate-level ops gives three advantages:
1) it allows the parser to efficiently generate subtrees containing
individual signature elements, which can't be done if only OP_SIGNATURE
or discrete ops are available;
2) prior to optimisation, it provides a simple and straightforward
representation of the signature;
3) hooks can mess with the signature OP subtree in ways that make it
no longer possible to optimise into an OP_SIGNATURE, but which can
still be executed, deparsed etc (if less efficiently).
This code:
use feature "signatures";
sub f($a, $, $b = 1, @c) {$a}
under 'perl -MO=Concise,f' now gives:
d <1> leavesub[1 ref] K/REFC,1 ->(end)
- <@> lineseq KP ->d
1 <;> nextstate(main 84 foo:6) v:%,469762048 ->2
2 <+> argcheck(3,1,@) v ->3
3 <;> nextstate(main 81 foo:6) v:%,469762048 ->4
4 <+> argelem(0)[$a:81,84] v/SV ->5
5 <;> nextstate(main 82 foo:6) v:%,469762048 ->6
8 <+> argelem(2)[$b:82,84] vKS/SV ->9
6 <|> argdefelem(other->7)[2] sK ->8
7 <$> const(IV 1) s ->8
9 <;> nextstate(main 83 foo:6) v:%,469762048 ->a
a <+> argelem(3)[@c:83,84] v/AV ->b
- <;> ex-nextstate(main 84 foo:6) v:%,469762048 ->b
b <;> nextstate(main 84 foo:6) v:%,469762048 ->c
c <0> padsv[$a:81,84] s ->d
The argcheck(3,1,@) op knows the number of positional params (3), the
number of optional params (1), and whether it has an array / hash slurpy
element at the end. This op is responsible for checking that @_ contains
the right number of args.
A simple argelem(0)[$a] op does the equivalent of 'my $a = $_[0]'.
Similarly, argelem(3)[@c] is equivalent to 'my @c = @_[3..$#_]'.
If it has a child, it gets its arg from the stack rather than using $_[N].
Currently the only used child is the logop argdefelem.
argdefelem(other->7)[2] is equivalent to '@_ > 2 ? $_[2] : other'.
[ These ops currently assume that the lexical var being introduced
is undef/empty and non-magical etc. This is an incorrect assumption and
is fixed in a few commits' time ]
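Putting the equivalences above together, the signature in that example behaves roughly like the following hand-written prologue (an informal sketch only, not the code actually generated):
    sub f {
        # argcheck(3,1,@): at least 2 args required; the slurpy @c makes any extra args legal
        die "Too few arguments for subroutine" if @_ < 2;
        my $a = $_[0];                   # argelem(0)
                                         # $_[1] belongs to the nameless '$' parameter
        my $b = @_ > 2 ? $_[2] : 1;      # argdefelem(...)[2] feeding argelem(2)
        my @c = @_[3 .. $#_];            # argelem(3)
        $a;
    }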
|
Currently the signature of a sub (i.e. the '($a, $b = 1)' bit) is parsed
in toke.c using a roll-your-own mini-parser. This commit makes
the signature be part of the general grammar in perly.y instead.
In theory it should still generate the same optree as before, except
that an OP_STUB is no longer appended to each signature optree: it's
unnecessary, and I assume that was a hangover from early development of
the original signature code.
Error messages have changed somewhat: the generic 'Parse error' has
changed to the generic 'syntax error', with the addition of ', near "xyz"'
now appended to each message.
Also, some specific error messages have been added; for example
(@a=1) now says that slurpy params can't have a default value, rather than
just giving 'Parse error'.
It introduces a new lexer expect state, XSIGVAR, since otherwise when
the lexer saw something like '($, ...)' it would see the identifier
'$,' rather than the tokens '$' and ','.
Since it no longer uses parse_termexpr(), it is no longer subject to the
bug (#123010) associated with that; so sub f($x = print, $y) {}
is no longer mis-interpreted as sub f($x = print($_, $y)) {}
|
The enum value "WORD" can apparently clash with a enum value in windows
headers. It used to be worked around in windows by #defining YYTOKENTYPE,
which caused the
enum yytokentype {
...
WORD = ...;
}
in perly.h to be skipped, while still using the
#define WORD ...
which appears later in the same file.
In bison 3.x, the auto-generated perly.h no longer includes the #defines,
so this workaround no longer works.
Instead, change the name of the token from "WORD" to "BAREWORD".
|
This applies to ‘my’, ‘our’, ‘state’ and ‘local’, and both to single
variables and to lists of variables, in all their variations:
my \$a # equivalent to \my $a
my \($a,$b) # equivalent to \my($a, $b)
my (\($a,$b)) # same
my (\$a, $b) # equivalent to (\my $a, $b)
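A brief usage sketch (under the experimental refaliasing feature):
    use feature 'refaliasing';
    no warnings 'experimental::refaliasing';
    our $y = 1;
    my \$a = \$y;      # $a is now another name for $y, not a copy
    $a = 2;
    print $y, "\n";    # 2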
|
In Perl 5.000, the same token, LOCAL, was used for both ‘my’ and
‘local’, with a token value, passed to localize() as a second argument,
to distinguish between them.
perl-5.003_07-9-g55497cf (inseparable changes from patch from
perl5.003_07 to perl5.003_08), for no apparent reason, split them into
two tokens, removing the token values and assigning values in perly.y
via $$ = 0 and $$ = 1. They still ultimately made their way through
the same grammar rule, as there was only one localize() call in
perly.y. The code still made sense.
perl-5.005_02-1816-g09bef84 (sub : attrlist) changed things, such that
the tokens are separate *and* they get separate token values assigned
to them. ‘local’ and ‘my’ no longer follow the same grammar rules
in perly.y, so there are separate localize() calls for the different
token types. Hence, the use of a token value to distinguish them does
not make sense. It just makes this more complicated than necessary.
So this commit removes the token values. Since the two token types
follow different paths through perly.y and have separate localize()
calls, we can hard-code the argument to localize() there, instead of
passing the value through from toke.c as a token value.
This does shrink toke.o slightly (for me it went from 876040 to
876000), and it makes this conceptually clearer.
|
Sorry for the noisy patch. I can’t modify regen_perly.pl without
regenerating stuff, because the checksum changes.
|
Mainly it no longer generates some tables used for debugging.
This commit also adds a new define showing what bison version was used.
|
Previously the assignment was hidden by the not op wrapped around the
condition, but newCONDOP() is sufficiently flexible that the extra not op
isn't needed.
|
There is dead code that used to allow
my $_;
...
given ($foo) {
# lexical $_ aliased to $foo here
}
Now that lexical $_ has been removed, remove the code. I've left the
signatures of the newFOO() functions unchanged; they just expect a target
of 0 to always be passed now.
|
This token’s type is never used. We don’t bother setting the type,
either, in toke.c, so it will be garbage. Removing the type makes
it harder to use the garbage value by mistake in refactoring.
|
These two tokens never use their value, and the value is not even set
in toke.c, which means it will contain a junk value from some previous
token. Removing the types prevents that junk value from being accidentally
used.
|
Yay, the semicolons are back.
|
This moves signatures before attributes in the grammar by
creating separate branches for the prototype and signatures
cases, so that the introduced block and the fact that signatures
do not allow for declarations can be handled properly.
Tests and regen_perly to follow.
|
These two operators were being translated into subst("","") and
tr("","") by the lexer. Then pmruntime in op.c would take apart the
resulting list op. Instead of constructing a list op only to take it
apart again, feed the replacement part to pmruntime separately. We
can achieve this by introducing a new token ('/') that the parser rec-
ognizes as introducing a replacement.
If we had followed this approach to begin with, then bug #123542 would
never have happened.
(Actually, it seems the parser did know about the replacement part to
begin with, but it changed in perl-5.8.0-4047-g131b3ad to fix some
overloading problems.)
|
ck_spair also applies lvalue context to the kid ops, so we just end up
calling op_lvalue twice on the same ops. It's harmless (being idempotent),
but wasteful.
|
Formats need the same logic applied in 34b54951568, to avoid this:
Input:
{
my $x;
format STDOUT =
@
$x
.
}
__END__
$ pbpaste|./perl -Ilib -MO=Deparse
{
my $x;
}
format STDOUT =
@
$x
.
__DATA__
- syntax OK
That $x in the format is now global, not lexical.
Also, we need to make sure the sequence numbers are correct. The
statement after the format stopped having a higher sequence number
than CvOUTSIDE_SEQ(format) in 8635e3c238f, because the ‘pending’
sequence number set aside for the nextstate op created just after
compile-time block exit (which nextstate op comes before the block) was not
actually being used by newFORM (unlike newATTRSUB), so the following
statement was using that number.
|