| Commit message | Author | Age | Files | Lines |
|
In order to completely compile Perl, many modules must have been parsed
and compiled, so if there is a full perl, we know that things basically
work. The purpose of bailing out is that if these supposedly very basic
functionality tests don't work, there's no point in continuing.
But over the years, tests of more esoteric functionality have been
added here, and if one of them doesn't work, it could still be that Perl
pretty much does work.
I believe it would be best to move such non-basic tests elsewhere, but
that's work, and this hasn't bitten us much so far; this change lessens
the severity of the biting even more. Where it will really bite is if
things are so bad that a full perl binary can't be compiled, and we are
trying to figure out why using minitest.
|
Note:
Porting/core-cpan-diff refactored to use Archive::Tar
instead of Archive::Extract
|
directory t/opbasic.
For RT #115838
|
The refactoring done in 84650816efdc42d6 was incomplete and left
a couple of VMS-specific instances of RESULT while replacing all
other occurrences with $result.
Spotted by Jim Cromie.
|
Intended for testing 64-bit behaviours
|
If these are set, Parse-CPAN-Meta and other things that depend
on it may fail.
|
This makes the order more consistent with test_harness, and moves the
"interesting" tests earlier; "interesting" in that these are more likely
to spot unexpected problems with the tested changes.
|
Tie::File has not been changed on CPAN since 2003. It has meanwhile been
actively maintained in p5p.
Signed-off-by: Chris 'BinGOs' Williams <chris@bingosnet.co.uk>
|
Running cachegrind leaves lots of intermediate files; delete them at
the end. Killing make test still leaves them around, but this may be
useful for some debugging purposes.
Rework _find_tests($dir) into _find_files($patt, $dir) plus a wrapper,
to support the existing uses and the new one.
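A shell sketch of the cleanup idea described above (not the Perl helper itself; the file names are made up, and `find -delete` is a GNU/BSD extension):

```shell
# Find and delete leftover *.cachegrind intermediate files under a
# directory, leaving the test files themselves alone.
dir=$(mktemp -d)
touch "$dir/cond.t.cachegrind" "$dir/if.t.cachegrind" "$dir/cond.t"
find "$dir" -name '*.cachegrind' -type f -delete
ls "$dir"          # only the test file itself remains
rm -r "$dir"
```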
|
Move the --log-fd=3 option from the unconditional invocation into the
VG_OPTS default value. A future version of perf will understand --log-fd=3,
but other tools probably will not; with this we can accommodate them,
as well as the current version of perf.
Makefile.SH:
Set the VALGRIND var conditionally, to allow command-line override (this is
probably non-portable, and will need review at least).
The perl.valgrind.config target's test of $(VALGRIND) is simplified to use
$(VG_TEST), which defaults to its legacy value: ./perl -e 1
2>/dev/null. Setting it to '--help' is needed for perf, and would
also work to verify that valgrind is runnable, but the current test is
slightly more comprehensive for valgrind, so I've left that for the user
to change in the environment.
t/TEST:
1. --log-fd=3 is in the default, but can be overridden by setting VG_OPTS
2. several variables renamed to clarify their purpose
3. $toolnm renames the output file with a flexible suffix,
i.e.: valgrind, cachegrind, perf-stat
4. add perf to cachegrind as a special case, to avoid culling of valgrind
output files by their content
With the above, and the following environment, make test.valgrind works:
# --log-fd isn't mainline yet.
VALGRIND=/home/jimc/projects/lx/linux-2.6/tools/perf/perf
VG_TEST=--help
VG_OPTS='stat --log-fd=3 -- '
$> make test.valgrind;
PERL_VALGRIND=1 VALGRIND='/home/jimc/projects/lx/linux-2.6/tools/perf/perf' ./runtests choose
t/base/cond....................................................ok
t/base/if......................................................ok
t/base/lex.....................................................ok
...
[jimc@groucho perl]$ cat t/base/*.perf-stat
Performance counter stats for './perl base/cond.t':
5.882071 task-clock # 0.850 CPUs utilized
1 context-switches # 0.000 M/sec
1 CPU-migrations # 0.000 M/sec
483 page-faults # 0.082 M/sec
4,688,843 cycles # 0.797 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
3,368,118 instructions # 0.72 insns per cycle
718,821 branches # 122.205 M/sec
48,053 branch-misses # 6.68% of all branches
0.006920536 seconds time elapsed
This patch will allow you to use a released version of perf;
just drop the --log-fd from VG_OPTS. The tests will fail,
because perf will write to STDOUT and foul the harness.
The following runs cachegrind and creates t/*/*.cachegrind files.
It is much slower than using perf-stat.
$> export VG_OPTS='--tool=cachegrind --log-fd=3 -- '
$> make test.valgrind
==25822== Cachegrind, a cache and branch-prediction profiler
==25822== Copyright (C) 2002-2009, and GNU GPL'd, by Nicholas Nethercote et al.
==25822== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
==25822== Command: ./perl base/cond.t
==25822==
==25822==
==25822== I refs: 1,680,072
==25822== I1 misses: 8,129
==25822== L2i misses: 3,675
==25822== I1 miss rate: 0.48%
==25822== L2i miss rate: 0.21%
==25822==
==25822== D refs: 604,393 (400,033 rd + 204,360 wr)
==25822== D1 misses: 12,599 ( 8,838 rd + 3,761 wr)
==25822== L2d misses: 6,261 ( 2,966 rd + 3,295 wr)
==25822== D1 miss rate: 2.0% ( 2.2% + 1.8% )
==25822== L2d miss rate: 1.0% ( 0.7% + 1.6% )
==25822==
==25822== L2 refs: 20,728 ( 16,967 rd + 3,761 wr)
==25822== L2 misses: 9,936 ( 6,641 rd + 3,295 wr)
==25822== L2 miss rate: 0.4% ( 0.3% + 1.6% )
NB: The following almost works; it runs the 1st test 5 times, and
produces 1 statistics file, but it fails because TEST sees multiple
leaders (FAILED--seen duplicate leader) and exits immediately,
because this happens in t/base. A work-around is easy enough, but adds
yet another knob. TBD.
$> VALGRIND=perf VG_OPTS='stat -r5 --log-fd=3 --' make test.valgrind
Performance counter stats for './perl base/cond.t' (5 runs):
5.568965 task-clock # 0.833 CPUs utilized ( +- 1.82% )
0 context-switches # 0.000 M/sec ( +- 61.24% )
0 CPU-migrations # 0.000 M/sec ( +-100.00% )
478 page-faults # 0.086 M/sec ( +- 0.37% )
4,441,737 cycles # 0.798 GHz ( +- 1.84% )
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
3,183,574 instructions # 0.72 insns per cycle ( +- 2.30% )
669,241 branches # 120.173 M/sec ( +- 2.87% )
41,826 branch-misses # 6.25% of all branches ( +- 3.78% )
0.006688160 seconds time elapsed ( +- 1.49% )
This patch is really a proof-of-concept; the perf tool has far more
capabilities than t/TEST can exploit well, but this is a start,
and makes experimentation with perf easier.
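The default-with-override behaviour of VG_OPTS described above follows the usual shell parameter-default pattern; a minimal sketch (the variable name matches the commit, the default value here is illustrative):

```shell
# VG_OPTS keeps any value set by the user in the environment, else
# falls back to a default that includes --log-fd=3.
VG_OPTS="${VG_OPTS:---tool=memcheck --log-fd=3 --}"
echo "$VG_OPTS"
```

Run with VG_OPTS already exported (e.g. VG_OPTS='stat --log-fd=3 -- ') and the override is printed instead of the default.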
|
In t/TEST, run times() before and after each testfile, and save the diffs
into $timings{$testname}, which currently contains $etms only.
When run as HARNESS_TIMER=../../perl make test (also when HARNESS_TIMER is 2 or more),
harness output now looks like this:
t/base/cond ................................................... ok 7 ms 0 ms 0 ms
t/base/if ..................................................... ok 4 ms 0 ms 0 ms
t/base/lex .................................................... ok 13 ms 0 ms 0 ms
t/base/num .................................................... ok 9 ms 10 ms 0 ms
t/base/pat .................................................... ok 4 ms 0 ms 10 ms
t/base/rs ..................................................... ok 14 ms 10 ms 0 ms
t/base/term ................................................... ok 20 ms 0 ms 10 ms
t/base/while .................................................. ok 8 ms 0 ms 10 ms
t/comp/bproto ................................................. ok 9 ms 10 ms 0 ms
The additional timing data is also written into the Storable file:
'perf' => {
'../cpan/Archive-Extract/t/01_Archive-Extract.t' => [
'3916.87417030334',
'1700',
'2380'
],
'../cpan/Archive-Tar/t/01_use.t' => [
'92.1041965484619',
'70.0000000000003',
'19.9999999999996'
],
...
The numbers are: elapsed time, user time, system time. The latter 2
are children-times from times(); self-times are those of the harness,
which are uninteresting.
They often don't add up (in the naive sense); elapsed time can be greater
than the sum of the others, especially if the process blocks on IO, or can
be less than the others if the process forks and both children are busy.
Also, child times have 10 ms resolution on Linux; other OSes or kernel
build options may vary.
Calling times() in the harness will likely also collect bogus child data
if 2 testfiles are run in parallel.
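The elapsed-time half of this bracketing can be sketched in shell (the child-CPU deltas from times() have no direct shell analogue; `date +%s%N` assumes GNU coreutils, and the command being timed is a stand-in):

```shell
# Bracket a child command with timestamps and report elapsed ms,
# analogous to the $etms bookkeeping t/TEST does per testfile.
start=$(date +%s%N)
sleep 1                         # stand-in for running a testfile
end=$(date +%s%N)
etms=$(( (end - start) / 1000000 ))
echo "elapsed: ${etms} ms"
```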
|
If the HARNESS_TIMER env var is an existing directory, write timings data
and various platform and configuration data to a Storable file.
Given a large collection of such files, the variance of each test can be
determined.
The configuration data should be sufficient to compare different
builds done on the same box. The platform data will hopefully allow
meaningful comparison of tests done on similar boxes, with the same or
another OS, compiler, memory, etc. Both are subject to change, in both
content and format, the latter being less important because of the
normalization possible during analysis, if the data is there.
Harness output still looks the same:
t/porting/cmp_version ......................................... ok 757 ms
t/porting/diag ................................................ ok 1172 ms
t/porting/dual-life ........................................... ok 88 ms
t/porting/exec-bit ............................................ ok 86 ms
t/porting/filenames ........................................... ok 176 ms
t/porting/globvar ............................................. ok 99 ms
t/porting/maintainers ......................................... ok 501 ms
t/porting/manifest ............................................ ok 251 ms
t/porting/podcheck ............................................ ok 15013 ms
t/porting/regen ............................................... ok 1033 ms
t/porting/test_bootstrap ...................................... ok 36 ms
All tests successful.
u=11.67 s=5.07 cu=375.07 cs=84.26 scripts=2045 tests=471995
wrote storable file: ../../perf/2011-9-7-2-45.ttimes
The Storable file data looks like:
$VAR1 = {
'conf' => {
'byacc' => 'byacc',
'cc' => 'cc',
'cccdlflags' => '-fPIC',
'ccdlflags' => '-Wl,-E',
...
},
'host' => 'groucho.jimc.earth',
'perf' => {
'../cpan/Archive-Extract/t/01_Archive-Extract.t' => '3960.50214767456',
'../cpan/Archive-Tar/t/01_use.t' => '94.3360328674316',
'../cpan/Archive-Tar/t/02_methods.t' => '737.880945205688',
'../cpan/Archive-Tar/t/03_file.t' => '118.676900863647',
'../cpan/Archive-Tar/t/04_resolved_issues.t' => '130.842924118042',
...
|
Adding a space between the testfile name and the ...... lets the user
double-click on just the name, instead of getting all the dots too, reducing
the command-line editing needed to resubmit the test manually. A space
before ok/not-ok allows easier parsing with split /\s/, $line. Both make
the output agree more closely with that from Test::*
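With the extra spaces, a harness line splits cleanly on whitespace; an awk sketch of the parsing this enables (the sample line follows the output format above):

```shell
# With spaces around the dots, whitespace-splitting yields the test
# name in field 1, the dots in field 2, and ok/not-ok in field 3.
line='t/base/cond ................................................... ok 7 ms'
printf '%s\n' "$line" | awk '{ print $1, $3 }'
```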
|
- Correct XS parameter list, and therefore prototype, for
unimplemented-on-this-platform version of clock_nanosleep()
[rt.cpan.org #68700].
- Declare package variables with "our" rather than "use vars".
- Corresponding to "our" usage, check for minimum Perl version
5.006.
- Declare module dependencies.
- Remove $ENV{PERL_CORE} logic from test suite, which is no
longer desired in the core.
- Convert test suite to use Test::More.
- Factor out watchdog code from test suite.
- In test suite, be consistent about using fully-qualified form
of function names.
- Divide test suite into feature-specific scripts.
- Make ualarm timing test less vulnerable to delay-induced false
failure, from Dave Mitchell.
|
* Declare package variables with "our" rather than "use vars".
* Corresponding to "our" usage, check for minimum Perl version
5.006.
* Remove $ENV{PERL_CORE} logic from test suite, which is no
longer desired in the core.
* In test suite, remove obsolete and now-incomplete handling of
unavailability of Test::More.
* Declare module dependencies.
|
Set $ENV{PERL_CORE_MINITEST} based on defined &DynaLoader::boot_DynaLoader,
instead of relying on a -minitest parameter. &DynaLoader::boot_DynaLoader is
undefined in miniperl and defined in perl, for both -Dusedl and -Uusedl.
|
This ensures (reasonable) consistency with tests in cpan/, dist/ and ext/,
which set this to qw(../../lib ../../t), but are not run from t/, and hence
don't have t/ implicitly in @INC as '.'
|
Randy Kobes passed away recently, so let's have p5p maintain it for now.
|
Randy Kobes passed away recently, so let's have p5p maintain it for now.
|
For modules that are not built, exclude tests in sub-directories under
t/. For example: cpan/Module-Build/t/actions/installdeps.t
|
We can now set PERL_CORE again when running its tests.
|
Lots of unnecessary test boilerplate has been removed, allowing us to remove the
dist from both %abs and %temp_no_core in t/TEST.
|
This was only needed for testing in the core, when the core's tests all ran
in the top level t/ directory. Without this getting in the way, we don't need
t/TEST and t/harness to run the tests with absolute paths in @INC. Testing in
the CPAN distribution is unaffected.
|
This was only needed for testing in the core, when the core's tests all ran in
the top level t/ directory. Without this getting in the way, we don't need
t/TEST and t/harness to run the tests with absolute paths in @INC. Testing in
the CPAN distribution is unaffected.
|
This was forgotten in the move from cpan/ to dist/ in commit
c510e33d30368bc5440f1651f6b31f73d2354eba.
|
Both Ken and David agree with this.
|
In fact, as t/harness requires t/TEST, simply get t/TEST to do it for
t/harness too.
|
Signed-off-by: H.Merijn Brand <h.m.brand@xs4all.nl>
|
Given that t/TEST already had code to add -I../lib when testing UTF-8 with
-utf8, do likewise for testing UTF-16 with -utf16.
|
Explicitly turn paths absolute for the 33 extensions in cpan/ that fail tests
with relative paths.
|
Also, as only tests in cpan/ are using %no_abs and %temp_no_core, only consult
these look-up hashes for tests in cpan/
|
{} could be misparsed; ++ has a lot of internal implementation "magic" that
we don't need, but don't want tripping us up if it isn't working; and op=
isn't necessary when we already rely on the more general $a = $b op $c working.