| Commit message | Author | Age | Files | Lines |
|
|
|
| |
This was failing in smokes on AIX
|
|
|
|
|
|
|
|
|
|
| |
These tests fail on the EN_US.UTF-8 locale.
Some fail due to a bug fixed in later AIX (lib/locale.t,
t/run/locale.t) and the other due to an apparent bug in the locale
itself.
https://perl5.test-smoke.org/report/5034327
|
|
|
|
|
|
|
|
| |
This reverts commit c56d7fa9134de66efe85a2fd70b28069c2629e0d.
Also un-TODO's the new test for this issue.
Fixes #21044
|
|
|
|
| |
Regression test for #21044
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
(aka README.win32)
The fix was in
commit 034a96a9c8546c2e080a802babba5ed9bc6c7798
Author: Elvin Aslanov <rwp.primary@gmail.com>
Date: Wed Apr 19 15:17:19 2023 +0200
Add MS Build Tools links
Add new section on Microsoft Build Tools, improve formatting.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The main exceptions being dist/, ext/, and Configure related
files, which will be updated in a subsequent commit. Files in the cpan/
directory are also omitted as they are not owned by the core.
'#define' has seven characters, so following it with a \t makes it look
like '#define ' when it is not, which then frustrates attempts to find
where a given define is. If you *know*, then you can do a
git grep -P 'define\s+WHATEVER'
but if you don't, or if you forget, you can get very confused trying to
find where a given define is located. This fixes all such cases so they
actually are '#define WHATEVER' instead.
If this patch is getting in your way with blame analysis then view it
with the -w option to blame.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Brace expansion is not available in a POSIX shell, is handled
slightly differently by the shells that do support it, and is
unlikely to work when the underlying implementation for Perl's
glob() function is not a Unix shell. So instead of doing:
{foo,bar,baz}/*.t
just accumulate the results of simpler glob operations:
foo/*.t
bar/*.t
baz/*.t
This also allows us to dispense with the recursive function
_extract_tests() and its fancy dispatch based on reference type;
we would only ever be calling it with a simple string argument,
so we might as well just call glob() directly.
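The replacement strategy can be sketched like this (the directory names are just the ones from the example above):

```perl
# Instead of one brace-expansion glob like {foo,bar,baz}/*.t (which is
# shell dependent), accumulate the results of one simple glob per
# directory.
my @dirs = qw(foo bar baz);    # example directories from the text
my @tests;
push @tests, glob("$_/*.t") for @dirs;
```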
|
|
|
|
|
|
|
|
|
| |
Otherwise the test suite dies with:
Can't read op/hook.DIR/require.t.
The problem never arose until 93f6f9654a81b66c4 added another
directory one level down from those immediately under t/.
|
|
|
|
|
|
|
|
|
| |
ppport.h is pod, but the link to it, removed by this commit, is broken,
resulting in a 404 "Raptor not found" from
https://perldoc.perl.org/Devel::PPPort#SEE-ALSO
This commit changes the mention of the file from a link to a F<>.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Karl complained about some of the wrapping logic we use for expressions.
This tweaks the rules in a number of different ways in an attempt to
produce more legible expressions. For instance if we have a complex
expression with different parenthesized sub expressions, then try to put
each sub expression on its own line. A previous patch ensures that we
put shorter sub expressions first, and this patch builds on that to put
each sub expression on its own line.
We also use different logic to wrap the expressions, with the end result
that each line should have the same number of defined() operations on it
(with the exception of the last). We also try harder to line up
logical operators and defined() functions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch uses a collection of heuristics to skip test files which
would die on a perl compiled with -DNO_TAINT_SUPPORT but without
-DSILENT_NO_TAINT_SUPPORT.
-DNO_TAINT_SUPPORT disables taint support in a "safe" way, such that if
you try to use taint mode with the -t or -T options an exception will be
thrown informing you that the perl you are using does not support taint.
(The related setting -DSILENT_NO_TAINT_SUPPORT disables taint support
but causes the -t and -T options to be silently ignored.)
The error from using -t and -T is thrown very early in the process
startup and there is no way to "gracefully" handle it and convert it
into something else, for instance to skip a test file which contains it.
This patch generally fixes our code to skip these tests.
* Make t/TEST and t/harness check shebang lines and use filename checks
to filter out tests that use -t or -T. Primarily this is the result of
checking their shebang line, but some cpan/ files are excluded by
name, either from a very short list of exclusions, or because their
file name contains the word "taint". Non-cpan test files were fixed
individually as noted below.
* test.pl - make run_multiple_progs() skip test cases based on the
switches that are part of the test definition. This function is
used in a great deal of our internal tests, so it fixes a lot of
tests in one go.
* XS-APITest/t/call.t, t/run/switchDX.t, lib/B/Deparse.t - Skip a small
set of tests in each file.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Our checks on the define info we expose via Internals::V(), especially
the sorted part, did not really work properly as it only checked defines
that are actually exposed in our standard builds. Many of the defines
that are exposed in this list are special cases that would not be
enabled in a normal build we test under CI, and indeed prior to this
patch it was possible for us to produce unsorted output if certain
defines were enabled.
This patch adds checks that read the actual code. It checks that the
define and the string are the same, and it checks that the strings would
be output in sorted order assuming every define was enabled.
There are two historical exceptions where the string we show and the
define used internally are different, but we work around these two cases
with a special-case hash.
|
|
|
|
|
| |
Net-Ping is in dist/ which means we are upstream, so there should
not be any customized files.
|
|
|
|
|
|
|
|
|
|
|
| |
On HPUX `nm globals.o` produces output like this (without the indent):
[5] | 2420| 2|OBJT |GLOB |0| .rodata|PL_Yes
So change the $define qr// to accommodate it.
We also have to TODO some of the tests, as HPUX seems to export
everything.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On HPUX none of the usual methods for doing high precision %g seem to
work:
Checking for an efficient way to convert floats to strings.
Trying sprintf...
sprintf() found.
sprintf length mismatch: Expected 55, got 38
...But sprintf didn't work as I expected.
Trying gconvert...
gconvert NOT found.
Trying gcvt...
gcvt() found.
gcvt length mismatch: Expected 55, got 38
...But gcvt didn't work as I expected.
*** WHOA THERE!!! ***
None of ( sprintf gconvert gcvt) seemed to work properly. I'll use sprintf.
So we can safely TODO these tests for now.
See: https://github.com/Perl/perl5/issues/20953#issuecomment-1478744988
and: https://github.com/Perl/perl5/issues/20953#issuecomment-1483814118
Fixes: #20953
And also some issues in: #20959
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
state file
Currently we only store state if we are running parallel tests, so if
you run the tests in series we do not store data on how long they took,
and we can't use that information in a follow up parallel test run.
We also do not allow the state file to be customized to be outside of
the repo, so git clean -dfx wipes it. This means you can't keep your
test data over time, which can be a bit annoying.
We also currently construct the state object twice during setup,
resulting in two (useless) warnings when the state file is missing,
which also doubles the time to set up the tests because the yaml file
gets read twice, and not very efficiently either.
This patch changes the logic so that we initialize the state object only
once during startup, and we use the state file if we are going to run
tests, parallel or not, provided the user does not explicitly disable it
(see below). The order that tests are run is affected only when the
tests are run in parallel.
It also allows the user to specify where the state file should live,
with the $ENV{PERL_TEST_STATE_FILE} environment variable, which can be
set to 0 or the empty string to disable use of the state file if needed.
We also take care to silence the warning about an empty state file,
except in the case where the user has overridden the file name with the
$ENV{PERL_TEST_STATE_FILE}.
Lastly, this patch disables loading the state data /at all/ when
the dump_tests option is invoked. There is no need, nor point, to
load the state data when we are simply going to dump out the list
of tests we will run.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Several cases that used to be simple assignment ops with lexical
variables have been optimized in some way:
- $foo = undef is now a single OP_UNDEF with special flags
- $foo = ... is now a single OP_PADSV_STORE
- $foo[0] = ... is now a single OP_AELEMFASTLEX_STORE
This is mostly transparent to users, except for "Use of uninitialized
value" warnings, which previously mentioned the name of the undefined
variable, but don't do so anymore in blead.
This commit teaches find_uninit_var() about the new ops, so error
messages for these ops can mention variable names again.
Fixes #20945.
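The behavior being restored can be seen with a general illustration of named uninitialized-value warnings (this works for lexicals on any modern perl; the commit makes it work again for the newly optimized ops):

```perl
use warnings;
my $warning = '';
local $SIG{__WARN__} = sub { $warning = $_[0] };
my $unset;
my $sum = $unset + 1;   # warns: "Use of uninitialized value $unset ..."
```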
|
|
|
|
|
|
|
|
|
|
| |
and t/TEST
Also that we test everything expected in MANIFEST.
Also includes some fixups to t/TEST to deal with the fact
that List/Util is no longer the name of a distribution
even though it is the name of an extension. Same for Cwd.
|
|
|
|
| |
Less cryptic and repetitive code.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On win32 we glob the arguments passed into @ARGV. *However*, this was
done in an unsafe way that could result in @ARGV being empty if 'lib'
was not in @INC prior to execution. Also it was being done in an eval
STRING to avoid loading File::Glob unnecessarily, but with no error
checking of the eval.
In fact this logic was firing much too early, before option parsing, and
before @INC was set up properly.
This patch moves this logic to much later, after any options are parsed
out and after @INC is set up, which should reduce or eliminate the
chance it dies. It also reworks the logic so that if the eval does die,
the entire script dies as well.
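The pattern described (load File::Glob only when needed, die loudly if that fails, keep arguments that match nothing) can be sketched like this; this is a hedged illustration, not the actual win32 code, and the helper name is invented:

```perl
# Expand glob metacharacters in a list of arguments, the way the text
# describes: File::Glob is loaded lazily, a failed load is fatal, and
# patterns with no hits are passed through unchanged.
sub expand_args {
    my @args = @_;
    return @args unless grep { /[*?]/ } @args;
    eval { require File::Glob; 1 }
        or die "Failed to load File::Glob: $@";
    return map {
        my @hits = File::Glob::bsd_glob($_);
        @hits ? @hits : $_;
    } @args;
}
```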
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We were bundling something that claimed to be 3.14 but which was not.
This removes the customization info and syncs with a re-released 3.15,
which is the same as the actual 3.14 but with a version bump to keep
cmp_version.t happy.
This is the change log for 3.15 and 3.14:
3.15 2023-03-20
- Release for updating bleadperl to avoid cmp_version.t trouble. No code
changes.
3.14 2022-05-22
- Remove broken link in Net::FTP manpage. [Mike Blackwell]
- Fix EBCDIC detection. [Karl Williamson, PR#45]
- Fix non-deterministic output in libnet.cfg. [Sergei Trofimovich, PR#44]
- Fix TLS session reuse for dataconn with TLS 1.3 when using passive mode.
[Steffen Ullrich, PR#41]
|
| |
|
| |
|
|
|
|
|
| |
the "when" parameter is expected to be a version string of the form "5.\d+",
with no minor version.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This defines a new magic hash C<%{^HOOK}> which is intended to be used for
hooking keywords. It is similar to %SIG in that the values it contains
are validated on set, and it is not allowed to store something in
C<%{^HOOK}> that isn't supposed to be there. Hooks are expected to be
coderefs (people can use currying if they really want to put an object
in there; the API is deliberately simple.)
The C<%{^HOOK}> hash is documented to have keys of the form
"${keyword}__${phase}" where $phase is either "before" or "after"
and in this initial release two hooks are supported,
"require__before" and "require__after":
The C<require__before> hook is called before require is executed,
including any @INC hooks that might be fired. It is called with the path
of the file being required, just as would be stored in %INC. The hook
may alter the filename by writing to $_[0] and it may return a coderef
to be executed *after* the require has completed, otherwise the return
is ignored. This coderef is also called with the path of the file which
was required, and it will be called regardless as to whether the require
(or its dependencies) die during execution. This mechanism makes it
trivial and safe to share state between the initial hook and the coderef
it returns.
The C<require__after> hook is similar to the C<require__before> hook,
except that it is called after the require completes (successfully or
not), and its return is always ignored.
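On a perl new enough to support C<%{^HOOK}>, the described API can be exercised like this (a sketch using File::Spec as an arbitrary module; on older perls the assignment is inert and the hook simply never fires):

```perl
my @seen;
${^HOOK}{require__before} = sub {
    my ($name) = @_;                  # %INC-style path, e.g. File/Spec.pm
    push @seen, "before:$name";
    # The returned coderef runs after the require completes.
    return sub { push @seen, "after:$_[0]" };
};
require File::Spec;
```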
|
|
|
|
|
| |
I want to use these modules in other tests, so changing the name
makes sense.
|
|
|
|
| |
If we chomp, we return 2177 tests; if we don't, we return 1808.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Historically we used to parse out the tests that we ran in t/harness
from the MANIFEST file. At some point this changed and we started
consulting the disk using globs. However because we do not use a
recursive search over the t/ directory it is quite possible that a new
directory of tests is added which actually never runs.
In https://github.com/Perl/perl5/pull/20637#discussion_r1137878155 Tony
C noticed that I had added a new test file t/op/hook/require.t which is
in a new subdirectory t/op/hook/ which was unknown to t/harness and thus
not actually being run by make test_harness. (This patch does NOT add
t/op/hook to the list of directories to scan; I will do that in the PR.)
I then took the time to add code to detect if any other test files are
not being run, and it turns out that it is also the case for the new
t/class/ directory of tests and it is also the case for the tests for
test.pl itself, found in the t/test_pl directory.
This patch adds logic to detect if this happens and make t/harness die
if it finds a test file in the manifest which will not be detected by
the custom rules for finding test files that are used in t/harness. It
does not die if t/harness finds tests that are not in MANIFEST, that
should be detected by a different test.
The level of complexity in finding and deciding which test files we
should run, and the differences between t/TEST and t/harness, is fairly
high. In the past Nicholas put some effort into unifying the logic, but
it seems since then we have drifted apart. Even though t/harness uses
t/TEST and the _tests_from_manifest() function, for some time now it
has only used it to find which extensions to test, not which test
files to run. I have *NOT* dug into whether t/TEST is also missing
test files that are in manifest. That can happen in a follow up patch.
Long term we should unify all of this logic so that t/TEST and t/harness
run the same test files always, and that we will always detect
discrepancies between the MANIFEST and the tests we are running. We do
not for instance test that they test the same things. :-) :-(
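The essence of the new cross-check can be sketched in a few lines (variable and function names here are invented; the real t/harness code differs):

```perl
# Every t/*.t file listed in MANIFEST must also have been found by the
# harness's own test-discovery rules; return the ones that were not.
sub missing_from_harness {
    my ($manifest, $found) = @_;   # arrayref of paths, hashref of found tests
    return grep { m{^t/.*\.t$} && !$found->{$_} } @$manifest;
}
```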
|
|
|
|
|
|
|
|
|
| |
Somewhere along the way we stopped testing test.pl itself. This fixes
that oversight, and repairs the tests to accommodate some of the changes
that should have been noticed.
This includes hardening the tests for Win32, which does not allow unlinking
a file that is open.
|
|
|
|
|
|
| |
This tests that we can convert a pattern embedded in a m// or s///
into a qr//, which is surprisingly useful! So useful I am a bit
surprised it didn't occur to anyone to expose this earlier.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This exposes the "last successful pattern" as a variable that can be
printed, or used in patterns, or tested for definedness, etc. Many regex
magical variables relate to PL_curpm, which contains the last successful
match. We never exposed the *pattern* directly, although it was
implicitly available via the "empty pattern". With this patch it is
exposed explicitly. This means that if someone embeds a pattern as a
match operator it can then be accessed after the fact much like a qr//
variable would be.
@ether asked if we had this, and I had to say "no", which was a shame as
obviously the code involved isn't very complicated (the docs from this
patch are far larger than the code involved!). At the very least
this can be useful for debugging and probably testing. It can also
be useful to test whether there /is/ a "last successful pattern", by
checking if the var is defined.
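On a perl new enough to expose it, the variable behaves as described (it is simply undef on older perls):

```perl
"foobar" =~ /ob(.)r/ or die "no match";
# Holds the last successful pattern, usable like a qr//.
my $pat = ${^LAST_SUCCESSFUL_PATTERN};
```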
|
|
|
|
| |
This makes debugging easier.
|
|
|
|
|
| |
This way we can avoid pushing every buffer; we only need to push
the nestroot of the ref.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Backrefs to unclosed parens inside of a quantified group were not being
properly handled, which revealed we are not unrolling the paren state properly
on failure and backtracking.
Much of the code assumes that when we execute a "conditional" operation
(where more than one thing could match) we need not concern ourselves
with the paren state unless the conditional operation itself represents
a paren, and that generally opcodes only needed to concern themselves
with parens to their
right. When you exclude backrefs from the equation this is broadly
reasonable (I think), as on failure we typically don't care about the
state of the paren buffers. They either get reset as we find a new,
different accepting pathway, or their state is irrelevant if the overall
match is rejected (i.e., it fails).
However backreferences are different. Consider the following pattern
from the tests
"xa=xaaa" =~ /^(xa|=?\1a){2}\z/
In the first iteration through this the first branch matches, and in
fact, because the \1 is in the second branch, it can't match on the
first iteration at all. After this $1 = "xa". We then perform the second
iteration. "xa" does not match "=xaaa" so we fall to the second branch.
The '=?' matches, but sets up a backtracking action to not match if the
rest of the pattern does not match. \1 matches 'xa', and then the 'a'
matches, leaving an unmatched 'a' in the string. We exit the quantifier
loop with $1 = "=xaa", match \z against the remaining "a" in the string,
and fail.
Here is where things go wrong in the old code: we unwind to the outer
loop, but we do not unwind the paren state. We then unwind further into
the second iteration of the loop, to the '=?', where we then try to
match the tail with the quantifier matching the empty string. We then
match the old $1 (which was not unwound) as "=xaa", and then the "a"
matches, and we are at the end of the string and have incorrectly
accepted this string as matching the pattern.
What should have happened is that when the \1 was resolved the second
time it should have returned the same string as it did when the '=?'
matched '=', which would then have resulted in the tail being matched
again, and so on, eventually unwinding the entire pattern when the
second iteration failed entirely.
This patch is very crude. It simply pushes the state of the parens and creates
an unwind point for every case where we do a transition to a B or _next
operation, and we make the corresponding _next_fail do the appropriate
unwinding. The objective was to achieve correctness and then work towards
making it more efficient. We almost certainly overstore items on the stack.
In the next patch we will keep track of the unclosed parens before the
relevant operators and make sure that they are properly pushed and
unwound at the correct times. In other words this is a first step patch
to make sure things are correct, the next patch will change it so we do
it quickly.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In /((a)(b)|(a))+/ we should not end up with $2 and $4 being set at
the same time. When a branch fails it should reset any capture buffers
that might be touched by its branch.
We change BRANCH and BRANCHJ to store the number of parens before the
branch, and the number of parens after the branch was completed. When
a BRANCH operation fails, we clear the buffers it contains before we
continue on.
It is a bit more complex than it should be because we have BRANCHJ
and BRANCH. (One of these days we should merge them together.)
This is also made somewhat more complex because TRIE nodes are actually
branches, and may need to track capture buffers also, at two levels.
The overall TRIE op, and for jump tries especially where we emulate
the behavior of branches. So we have to do the same clearing logic if
a trie branch fails as well.
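The example from the text can be checked directly; on a perl with this fix, the failed first branch's buffers are cleared, so $2 and $4 are never both set:

```perl
"a" =~ /^((a)(b)|(a))$/ or die "no match";
# On a fixed perl: $1 eq "a", $4 eq "a", and $2 is undef, because the
# first branch ((a)(b)) failed and its buffers were reset.
```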
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was originally a patch which made somewhat drastic changes to how
we represent capture buffers, which Dave M and I and are still
discussing offline and which has a larger impact than is acceptable to
address at the current time. As such I have reverted the controversial
parts of this patch for now, while keeping most of it intact even if in
some cases the changes are unused except for debugging purposes.
This patch still contains valuable changes, for instance teaching CURLYX
and CURLYM about how many parens there are before the curly[1] (which
will be useful in follow-up patches even if, strictly speaking, they are
not directly used yet), tests and other cleanups. Also this patch is
sufficiently large that reverting it out would have a large effect on
the patches that were made on top of it.
Thus keeping most of this patch while eliminating the controversial
parts of it for now seemed the best approach, especially as some of the
changes it introduces and the follow up patches based on it are very
useful in cleaning up the structures we use to represent regops.
[1] Curly is the regexp internals term for quantifiers, named after
x{min,max} "curly brace" quantifiers.
|
|
|
|
|
|
|
|
|
| |
ampersand
Tests with $& always pass in regexp_noamp.t (wrapper around regexp.t),
so when they are TODO tests it looks like a TODO pass when in fact it is
just an artifact of how we handle ampersand tests in this file. For these
cases we simply do not mark them as TODO anymore.
|
| |
|
|
|
|
|
|
|
|
| |
I missed that $+ needs to do the parno_to_logical lookup.
We had tests for $^N, but not $+. This also fixes the code
for $^N to only do the lookup when the paren is not 0.
Fixes #20912
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The pod/perlfilter.pod document says "The original purpose of source
filters was to let you encrypt your program source to prevent casual
piracy.". The likening of copyright infringement to nautical hijacking
is wildly hyperbolic. Perl should not be spreading this line of
tendentious misinformation. Even without the hyperbole, it's misleading
to say that program encryption is aimed at preventing copyright
infringement: it doesn't actually impede copying of the whole file. The
things it really impedes are the understanding and editing of the
program, which are actions that are at most only loosely connected to
copyright infringement.
I suggest that the word "piracy" should be replaced with "reading",
which is both a more neutral term and a more accurate description of what
program encryption impedes. There's also a similar problem with the word
"cracker" later in the document.
The document also understates how fundamental it is that program
encryption can't fully prevent access to the real source code.
This patch fixes all of these problems.
[Note from the committer: this patch was submitted to perl5-porters via
perlbug, this message was extracted and moderately edited (mostly for
tense) for creating this patch. I also added the changes to
customized.dat, although I am not sure why that was necessary. - Yves]
|
|
|
|
|
|
|
|
|
| |
Using class attributes in the unit class syntax was a syntax error. This
change makes the following two lines equivalent:
class B :isa(A) ;
class B :isa(A) { }
Addresses GH issue #20888.
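The fix can be exercised like this (guarded with a string eval, since the 'class' feature needs perl 5.38+ and is experimental; class names here are just the ones from the example):

```perl
my $ok = eval q{
    use feature 'class';
    no warnings 'experimental::class';
    class A { }
    class B :isa(A);    # unit class syntax with an attribute, now valid
    B->new->isa('A');   # the rest of the eval is B's body
};
```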
|
|
|
|
|
|
|
| |
This test looks for a 'Use of freed value in iteration' warning, which
will soon disappear when this branch makes the stack reference counted.
Make the test more modifiable so that it can be made conditional on
build options.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In something like
for $package_var (....) { ... }
or more experimentally,
for \$lvref (....) { ... }
when entering the loop in pp_enteriter, perl would pop the GV/LVREF off
the stack, but didn't bump its refcount. Thus it was possible (if
unlikely) that it could be freed during the loop. In particular this
crashed:
$f = "foo";
for ${*$f} (1,2) {
delete $main::{$f}; # delete the glob *foo
...;
}
This will become more serious when the stack becomes refcounted, as
popping something off the stack will trigger a refcount decrement on it
and thus a possible immediate free of the GV.
This commit future-proofs C<for> loops against this by ensuring that
the refcount of the SV referred to by cx->blk_loop.itervar_u.svp is
appropriately bumped up / down on entering / exiting the loop.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Prior to this patch SSCHECK() took a "needs" parameter, but did not
actually guarantee that the stack would be sufficiently large to
accommodate that many elements after the call. This was quite misleading.
Especially as SSGROW() would not do geometric preallocation, but
SSCHECK() would, so much of the time SSCHECK() would appear to be a
better choice, but not always.
This patch makes it so SSCHECK() is an alias for SSGROW(), and it makes
it so that SSGROW() also geometrically overallocates. The underlying
function that used to implement SSCHECK() savestack_grow() now calls the
savestack_grow_cnt() which has always implemented SSGROW(). Anything
in the internals that used to call SSCHECK() now calls SSGROW() instead.
At the same time the preallocation has been made a little bit more
aggressive, ensuring that we always allocate at least SS_MAXPUSH
elements on top of what was requested as part of the "known" size of the
stack, and additional SS_MAXPUSH elements which are not part of the
"known" size of the stack. This "hidden extra" is used to simplify some of
the other macros which are used a lot internally. (I have beefed up the
comment explaining this as well.)
This fixes GH Issue #20826
|
|
|
|
|
|
| |
We have a bug where we can overflow the save-stack. This tests for it
in a TODO test. The next patch will fix it. Note the test will only fail
on debugging builds, as it requires the assert() to be compiled in.
|