RT #131124
In a couple of places in shared.xs, sv_newmortal() is called with
a perl context different from the one currently set by PERL_SET_CONTEXT().
If sv_newmortal() happens to trigger the malloc of a new SV HEAD arena,
then under PERL_TRACK_MEMPOOL this will cause panics when the arena is
freed or realloced.
See: https://rt.cpan.org/Ticket/Display.html?id=119529
Committer: Update version number in module's POD. Add perldelta entry.
Since v5.23.4-26-g0b057af made static a bunch of functions not used
outside their own source files, gcc has been complaining:
    shared.xs:1172:1: warning: ‘Perl_sharedsv_unlock’ defined but not used [-Wunused-function]
    Perl_sharedsv_unlock(pTHX_ SV *ssv)
So "delete" this function using '#if 0'.
None of these symbols are exported on Win32 (they are listed in
Makefile.PL with EUMM's FUNCLIST), so they shouldn't be exported on Linux
either. Making them static saves space in the shared objects by removing
symbol name strings and runtime PLT/GOT indirection.
Extracted from patch submitted by Lajos Veres in RT #123693.
This commit applies those patches to files under dist/ *other than* those
pertaining to Tie-File.
Update $VERSION in Dumper.pm and Storable.pm after re-applying the
patches from RT.
This reverts commit 5bf4b3bf13bc4055684a48448b05920845ef7764.
On p5p-list, Steve Hay wrote on 2015-01-29:
"... these and other changes to Tie-File could break backwards compatibility.
The keys of %opt are passed in from user code, so we can't change the expected
key from "autodefer_threshhold" to "autodefer_threshold" without also asking
users to change their code, which is probably more hassle than it's worth."
Parts of the reverted commit will be re-committed from a new patch.
Extracted from patch submitted by Lajos Veres in RT #123693.
Follow-up to commits a8c717cfeb and 7105b7e7a5, and perl #123549. This
should be C++ compatible even though it leaves some symbols non-static.
This reverts commit 7105b7e7a5e49caa06b8d7ef71008838ec902227.
This makes threads::shared have no non-NULL-initialized RW static data.
Uninitialized and NULL-filled RW data, such as PL_sharedsv_space and
prev_signal_hook, remain, but on some OSes/compilers (Win32 with special
tweaks) this means that the RW data section in the threads::shared shared
library now has no on-disk representation. Make the remaining RW vars
static to trim the symbol table on non-Win32.
When shrinking a shared array by setting $#shared = N,
any freed elements should trigger destructors if they are objects,
but they weren't.
This commit extends the work done by 7d585d2f3001 (which created
temporary proxies when abandoning elements of arrays and hashes) to the
STORESIZE method, which is what is triggered by $#a assignment (and
indirectly by undef @a).
RT #122950
    my @a : shared;
    $#a = 3; # actually set it to 4
There was a simple off-by-one error in the XS code that handled the
STORESIZE tie method (it confused the array size and fill, which differ
by 1).
Amazingly, there was no test for it, and no-one had noticed until now.
Note that this commit causes three tests in object2.t to fail: this
is because fixing the $#shared bug exposed another bug that was being
masked by this one. They will be fixed in the next commit.
Perl_sharedsv_init() - which is called from the threads::shared BOOT
code - creates a new shared interpreter, then tries to undo the ENTER
done as the last step of the perl_construct(PL_sharedsv_space) step with
a LEAVE. But the LEAVE was being done in the context of the caller
interpreter rather than the shared one.
See the thread beginning <52D528FE.20701@havurah-software.org>.
The Scalar::Util documentation has changed, so the links are broken.
But we cannot just update the link targets, as threads::shared is
living a double life and may be installed along with an older
Scalar::Util.
Make the file name stored in the lock on debugging builds
be const char* rather than char*, so that passing __FILE__ doesn't give
lots of:
    shared.xs:1287:323: warning: deprecated conversion from string
    constant to 'char *' [-Wwrite-strings]
In commit fd013656aac I moved the documentation for the two warnings
that threads::shared produces from perldiag.pod to shared.pm, without
rewording anything.
These two warnings had almost identical descriptions, and--I found
later--basically repeat some of the documentation of the cond_signal
function, so point to that description in the warnings section and
avoid repeating things.
following the precedent set by threads.pm.
fakethr.h and FAKE_THREADS were for a "green" threads implementation of
5005threads. 5005threads itself is long gone, and it's not clear that
-DFAKE_THREADS *ever* built correctly. Certainly it did not work for the
5.005 release, and it did not work at the time of the commits for the initial
checkin. The closest that it seems to have been to working is around commit
c6ee37c52f2ca9e5 (Dec 1997), where the headers no longer contained errors,
but perl.c failed to compile.
To sync with the forthcoming CPAN release.
Looping 500,000 times takes between 0.025s and 1s depending on hardware and
optimisation levels on machines I have access to. For a fixed iteration
count, on a particularly slow machine the timeout can fire before all
threads have had a realistic chance to complete, but dropping the iteration
count will cause fast machines to finish each thread too quickly.
So use an initial single-threaded busy loop to estimate a suitable
iteration count for the per-thread test loop.
This reverts commit 34bd199a87daedeaeadd8e9ef48032c8307eaa94.
In the previous commit, I added duplicate code to make it obvious what
was going on.
Jerry wrote:
> threads::shared objects stored inside other
> threads::shared structures are not properly destroyed.
> When a threads::shared object is 'removed' from a
> threads::shared structure (e.g., a hash), the object's
> DESTROY method is not called.
Later, he said:
> When PL_destroyhook and Perl_shared_object_destroy were
> added, the problem they were overcoming was that the
> destruction of each threads::shared proxy was causing the
> underlying shared object's DESTROY method to be called. The
> fix provided a refcount check on the shared object so that
> the DESTROY method was only called when the shared object
> was no longer in use.
>
> The above works fine when delete() and pop() are used,
> because a proxy is created for the stored shared object that
> is being deleted (i.e., the value returned by the delete()
> call), and when the proxy is destroyed, the object's DESTROY
> method is called.
>
> However, when the stored shared object is 'removed' in some
> other manner (e.g., setting the storage location to
> 'undef'), there is no proxy involved, and hence DESTROY does
> not get called for the object.
This commit fixes that by modifying sharedsv_scalar_store,
sharedsv_scalar_mg_free and sharedsv_array_mg_CLEAR.
Each of those functions now checks whether the current item being
freed has sub-items with reference counts of 1. If so, that means the
sub-item will be freed as a result of the outer SV’s being freed. It
also means that there are no proxy objects and that destructors will
hence not be called. So it pushes a new proxy on to the calling
context’s mortals stack. If there are multiple levels of nested
objects, then, when the proxy on the mortals stack is freed, it
triggers sharedsv_scalar_mg_free, which goes through the process again.
This does not fix the problem for shared objects that still exist
(without proxies) at global destruction time. I cannot make that
work, as circularities will cause new proxies to be created
continuously and pushed on to the mortals stack. Also, the proxies
may end up being created too late during global destruction, after the
mortals stack has been emptied, and when there is not enough of the
runtime environment left for destructors to run. That will happen if
the shared object is referenced by a shared SV that is not an object.
The calling context doesn’t know about the object, so it won’t fire
the destructor at the object-destroying stage of global destruction.
Detecting circularities is also problematic: we would have to keep
a hash of ‘seen’ objects in the shared space, but then how would we
know when to free that? Letting it leak would affect embedded
environments.
So this whole trick of creating mortal proxy objects is skipped during
global destruction.
Every function that calls S_get_RV needs this same incantation:
    S_get_RV(aTHX_ sv, ssv);
    /* Look ahead for refs of refs */
    if (SvROK(SvRV(ssv))) {
        SvROK_on(SvRV(sv));
        S_get_RV(aTHX_ SvRV(sv), SvRV(ssv));
    }
Also, S_get_RV keeps repeating SvRV(ssv), even though it assigns it to
sobj at the top.
Also, an upcoming commit will need the ability to pass the referent to
S_get_RV.
So this patch changes S_get_RV to accept a referent instead
(eliminating its multiple uses of SvRV) and adds a get_RV macro to take
care of the standard calling rite.
Since these numbers have already been used for development releases,
they need to be changed again. I also added a note to make sure they
no longer get out of sync with the pod.
Under conditions of high load (e.g. parallel testing),
some of the tests in threads-shared/t/waithires.t can fail.
My previous attempt at fixing this,
bb09c94c3bb1638714998511ecf5d337a708535a,
was mostly wrong. In particular, the new sub cond_timedwaitN()
didn't actually do what it advertised, since it didn't increment
the timeout, which was an absolute clock time. Instead, its main
effect was to mostly guarantee (within a 10 second window) that
a wait succeeded (and thus that the whole test file didn't hang),
although as it happens, after the first failure it was no longer
actively testing a timed wait.
Formalise this by renaming cond_timedwaitN() to do_cond_timedwait(), and
just doing an untimed cond_wait() if the initial cond_timedwait() times
out.
In addition, the changes to avoid false positives are:
Increase the wait periods from 0.1s and 0.05s to 0.4s, to give a bigger
window.
Add a new lock, $ready, that ensures that the child is fully started and
ready before the parent starts the cond_timedwait(), which reduces the
window of time in which the wait might time out.
Make the scope of the lock as small as possible, so that the parent
cond_timedwait() isn't still trying to re-acquire the lock while the
child prints out 'ok' messages etc.
And most importantly, don't automatically treat a cond_timedwait()
timeout as a failure. Instead, measure the time the parent spends in
cond_timedwait() and the time the child spends between locking and
signalling; if both of these are greater than the timeout, then we
know we timed out because the system was loaded, rather than because
something was wrong with cond_timedwait().
A backport from downstream at Debian:
perl (5.14.0-1) debian/m68k_thread_stress.diff
Subject: Disable some threads tests on m68k for now due to missing TLS.
Closes: #495826, #517938
This:
commit 8aacddc1ea3837f8f1a911d90c644451fc7cfc86
Author: Nick Ing-Simmons <nik@tiuk.ti.com>
Date: Tue Dec 18 15:55:22 2001 +0000
Tidied version of Jeffrey Friedl's <jfriedl@yahoo.com> restricted hashes
- added delete of READONLY value inhibit & test for same
- re-tabbed
p4raw-id: //depot/perlio@13760
essentially deprecated HvKEYS() in favor of HvUSEDKEYS(); this is
explained in line 144 (now 313) of file `hv.h':
/*
* HvKEYS gets the number of keys that actually exist(), and is provided
* for backwards compatibility with old XS code. The core uses HvUSEDKEYS
* (keys, excluding placeholders) and HvTOTALKEYS (including placeholders)
*/
This commit simply puts that into practice, and is equivalent to running
the following (at least with a35ef416833511da752c4b5b836b7a8915712aab
checked out):
git grep -l HvKEYS | sed /hv.h/d | xargs sed -i s/HvKEYS/HvUSEDKEYS/
Notice that HvKEYS is currently just an alias for HvUSEDKEYS:
$ git show a35ef416833511da752c4b5b836b7a8915712aab:hv.h | sed -n 318p
#define HvKEYS(hv) HvUSEDKEYS(hv)
According to `make tests':
All tests successful.
In threads::shared, the waithires.t test checks cond_timedwait() with
sub-second timeouts. If the newly-minted child doesn't manage to grab the
lock within 0.05s, the cond_timedwait() will timeout, and the child and
the test will hang, until eventually killed off by the watchdog. This can
easily happen on a slow/loaded system.
We fix this by putting the cond_timedwait() in a retry loop, only giving
up after 10 seconds of repeated timeouts.
Curiously, it was the only test in threads::shared that used Test.
# New Ticket Created by (Peter J. Acklam)
# Please include the string: [perl #81888]
# in the subject line of all future correspondence about this issue.
# <URL: http://rt.perl.org/rt3/Ticket/Display.html?id=81888 >
Signed-off-by: Abigail <abigail@abigail.be>