Some tests benchmarked the code '$i++', which is so lightweight that it
could trigger the die "Timing is consistently zero in estimation loop"
in Benchmark.pm.
So make the code slightly more heavyweight.
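For illustration, a minimal sketch of the idea, assuming Benchmark's countit()
(the heavier replacement snippet here is made up, not the one committed):

    use Benchmark qw(countit);

    # Too cheap: a bare increment can finish so fast that Benchmark.pm's
    # estimation loop measures ~0 CPU and dies with
    # "Timing is consistently zero in estimation loop".
    # my $t = countit(1, '$i++');

    # Slightly heavier (but still trivial) code keeps each iteration measurable.
    my $t = countit(1, 'my $x = 0; $x += $_ for 1..100');
    printf "%d iterations in %.2f CPU seconds\n", $t->iters, $t->cpu_p;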
| |
Benchmark.t has been randomly failing test 15 in smokes for ages.
This is the test that checks that a loop run 3*N times burns approximately
three times as much CPU as when it is run just N times.
For the last month the test file has also included a calibration loop and
test, which do much the same thing but without using any code from
Benchmark.pm. That calibration has failed just as often, which confirms that
this is an issue with the smoke host (a variable-speed CPU, for example)
rather than any flaw in the logic of the Benchmark.pm library.
So just remove the calibration loop and the dodgy test.
| |
As a temporary measure, make a calibration failure not only a skip but a
failed test too, so I can see whether the real test fails more often in
smokes than the calibration.
| |
When we compare the number of iterations done in 3 seconds with
3 * (the number of iterations done in 1 second), the comparison's
effective tolerance depended on which of the two values was larger:
if a > b, it tested for a/b <= 1.666; if b > a, it tested for b/a < 1.4.
Make it consistently 1.4 (or, to be precise, 1 + $DELTA).
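A sketch of the now-symmetric check (the workload string and test description
are illustrative, not the literal code in Benchmark.t):

    use Test::More tests => 1;
    use Benchmark qw(countit);

    my $DELTA = 0.4;                               # so 1 + $DELTA == 1.4
    my $code  = 'my $x = 0; $x += $_ for 1..100';  # hypothetical workload
    my $n3 = countit(3, $code)->iters;             # iterations in ~3 CPU seconds
    my $n1 = 3 * countit(1, $code)->iters;         # 3 * iterations in ~1 CPU second

    # Whichever count is larger, the larger/smaller ratio must stay within
    # 1 + $DELTA -- the same tolerance in both directions.
    my $ratio = $n3 > $n1 ? $n3 / $n1 : $n1 / $n3;
    cmp_ok($ratio, '<=', 1 + $DELTA, '3s and 3 * 1s counts agree within tolerance');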
| |
When the notorious test 15 fails, show the ratio of our earlier 3-second
and 1-second calibration runs in the diag() output.
| |
Test 15 has been failing intermittently in smokes for ages now.
It does countit(3, ...) and countit(1, ...) and checks that the first run
counts approximately three times as many iterations as the second.
This commit adds, near the beginning of the file, a crude timing loop that
uses times() directly rather than anything in the Benchmark module and that
runs loops for approximately 1s and 3s; if the results aren't consistent, it
sets a global flag, $INCONSISTENT_CLOCK, which causes timing-sensitive tests
to be skipped.
For now it only skips test 15. If this is successful, I'll look to
expand it to other failing tests, such as 128/129.
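The calibration code itself isn't quoted here; a rough sketch of the approach,
using only times() and an illustrative 1.4 tolerance (spin_for() is a made-up
helper name, not the code added to Benchmark.t):

    our $INCONSISTENT_CLOCK = 0;

    # Busy-loop for roughly $seconds of user CPU, measured with times()
    # alone, and report how many inner iterations completed.
    sub spin_for {
        my ($seconds) = @_;
        my $count = 0;
        my $start = (times)[0];
        while ((times)[0] - $start < $seconds) {
            $count++ for 1 .. 1000;
        }
        return $count;
    }

    my $n1 = spin_for(1);
    my $n3 = spin_for(3);

    # If ~3s of spinning doesn't count roughly three times what ~1s counts,
    # this host's CPU timing can't be trusted for the real tests.
    my $ratio = $n3 / (3 * $n1);
    $INCONSISTENT_CLOCK = 1 if $ratio > 1.4 or $ratio < 1 / 1.4;

A timing-sensitive test can then consult $INCONSISTENT_CLOCK and skip itself
instead of running the comparison.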
| |
Currently we work out the CPU burned by a loop by summing user and sys
CPU. On the grounds that the work we're interested in should all have been
done in user CPU, and that any sys CPU is therefore a red herring, ignore
sys CPU for the infamous test 15.
Yes, this is clutching at straws. Still, the diagnostic output may show
in future whether I was right!
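For reference, the distinction being drawn, in terms of times() (the variable
names are illustrative):

    # times() reports user and system CPU separately; the argument above is
    # that the benchmark loop's work should land almost entirely in the user
    # column, so sys CPU is just noise for this comparison.
    my ($user, $sys) = (times)[0, 1];
    my $cpu_summed    = $user + $sys;   # what the test used before
    my $cpu_user_only = $user;          # what test 15 compares now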
| |
Commit bb6c6e4b8d10f2e460a7fe48e677d3d998a7f77d, which added
improved diagnostics, also broke the count estimate.
| |
Use diag() to show absolutely all the relevant variables when the
notorious test 15 fails. Maybe this will help nail it one day...
| |
This function is called 6 times, and each call puts out about 15 tests
with the same set of descriptions, so output a note at the start of each
call showing where we were called from.
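A sketch of the idea using Test::More's note() and caller() (the helper name
here is invented, not the actual function in Benchmark.t):

    use Test::More;

    # Shared helper that emits ~15 similarly-described tests; announcing the
    # caller's line makes the repeated blocks distinguishable in the output.
    sub check_times_object {
        my ($t) = @_;
        my (undef, undef, $line) = caller;
        note("checks called from line $line");
        # ... the ~15 ok()/cmp_ok() assertions on $t would follow here ...
    }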
| |
The test currently does a 3-second run, then a 1-second run, and checks
that the count from the first run is approximately three times that from
the second run.
However, the countit() function doesn't guarantee that it will run for
exactly n seconds, so as well as returning how many iterations it did,
it also returns how much CPU time was actually used.
Make the test use that returned time value to scale the iteration counts
before comparing them.
Hopefully this will reduce the number of spurious test 13 failures in
smokes (although after this commit it is now test 15).
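A sketch of the scaling, using the Benchmark object's documented iters() and
cpu_p() accessors (the workload and tolerance are illustrative):

    use Test::More tests => 1;
    use Benchmark qw(countit);

    my $code = 'my $x = 0; $x += $_ for 1..100';   # hypothetical workload
    my $t3 = countit(3, $code);
    my $t1 = countit(1, $code);

    # countit() may run a little more or less than the requested time, so
    # compare iterations per CPU second rather than the raw counts.
    my $rate3 = $t3->iters / $t3->cpu_p;
    my $rate1 = $t1->iters / $t1->cpu_p;

    my $ratio = $rate3 > $rate1 ? $rate3 / $rate1 : $rate1 / $rate3;
    cmp_ok($ratio, '<=', 1.4, 'per-CPU-second rates from the 3s and 1s runs agree');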
| |
# New Ticket Created by (Peter J. Acklam)
# Please include the string: [perl #81890]
# in the subject line of all future correspondence about this issue.
# <URL: http://rt.perl.org/rt3/Ticket/Display.html?id=81890 >
Signed-off-by: Abigail <abigail@abigail.be>
| |
been failing smoke randomly.
Fix 1: the original code tests for 'less than' but not 'equal to'. I think
that if the two values are the same, the test should pass; I don't know the
code well enough to be 100% sure.
Fix 2: convert ok() to cmp_ok() so that smoke-test failures produce better
diagnostic messages.
Fix 3: convert print to diag() so the output passes properly through the
harness.
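A sketch of fixes 1-3 combined ($got and $limit are placeholders, not the
actual variables in the test):

    use Test::More tests => 1;

    my ($got, $limit) = (42, 42);   # placeholder values for illustration

    # Before: boolean ok() with a strict 'less than', so equal values fail
    # and a failure reports nothing about the values involved.
    # ok($got < $limit, 'count stays under the limit');

    # After: allow equality, and let cmp_ok() show both values on failure.
    cmp_ok($got, '<=', $limit, 'count stays within the limit');

    # Fix 3: report extra context with diag() rather than a bare print.
    diag("got $got, limit $limit") if $got > $limit;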
| |
From: Anno Siegel (via RT) <perlbug-followup@perl.org>
Message-ID: <rt-3.0.11-32327-99325.8.9408996026507@perl.org>
p4raw-id: //depot/perl@23473
| |
p4raw-id: //depot/perl@20556
| |
From: Radu Greab <rgreab@fx.ro>
Date: Thu, 07 Aug 2003 16:18:25 +0300 (EEST)
Message-Id: <20030807.161825.106541372.radu@yx.primIT.ro>
Subject: Re: [PATCH 5.8.1] Benchmark problem
From: Rafael Garcia-Suarez <rgarciasuarez@free.fr>
Date: Thu, 7 Aug 2003 15:48:38 +0200
Message-Id: <20030807154838.5d240dbb.rgarciasuarez@free.fr>
p4raw-id: //depot/perl@20546
| |
Message-ID: <20030803231235.GJ24350@windhund.schwern.org>
p4raw-id: //depot/perl@20463
| |
p4raw-id: //depot/perl@20016
| |
be a negative zero, -0).
p4raw-id: //depot/perl@19191
| |
Message-ID: <20020822041039.A2089@ucan.foad.org>
p4raw-id: //depot/perl@17774
| |
/export/home/nwc10/Even-Smoke/Smoke]
Message-ID: <20020513204738.GD310@Bagpuss.unfortu.net>
p4raw-id: //depot/perl@16583
| |
Message-ID: <20020413204303.GB12835@Bagpuss.unfortu.net>
p4raw-id: //depot/perl@15897
| |
Message-Id: <200203130025.TAA20113@mailhub1.stratus.com>
p4raw-id: //depot/perl@15226
| |
p4raw-id: //depot/perl@14592
| |
Message-ID: <20020113155833.C314@Bagpuss.unfortu.net>
p4raw-id: //depot/perl@14237
| |
Message-ID: <20011218225124.N21702@plum.flirble.org>
p4raw-id: //depot/perl@13767
| |
Message-ID: <20011218055818.GC4362@blackrider>
p4raw-id: //depot/perl@13754
| |
p4raw-id: //depot/perl@13693
| |
(We are missing Benchmark tests, then.)
p4raw-id: //depot/perl@12955
|
|
No doubt I made some mistakes, such as missing some files or misnaming
others. The naming rules were more or less:
(1) if the module is from CPAN, follow its ways, be it t/*.t or test.pl;
(2) otherwise, if there are multiple tests for a module, put them in a t/
    directory;
(3) otherwise, if there is only one test, put it in Module.t;
(4) helper files go to module/ (locale, strict, warnings);
(5) use longer filenames now that we can (but e.g. the compat-0.6.t and
    the Text::Balanced test files were still renamed to be more civil
    towards the 8.3 people).
installperl was updated appropriately so as not to install the *.t files
or the helper files from under lib.
TODO: some helper files still remain under t/ that could follow their
'masters'. UPDATE: on second thoughts, why should they? They can continue
to live under t/lib, and in fact the locale/strict/warnings helpers that
were moved could be moved back. This way the amount of non-installable
stuff under lib/ stays smaller.
p4raw-id: //depot/perl@10676
|