| Commit message | Author | Age | Files | Lines |
mfa() is predefined as {atom(), atom(), byte()}
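The note above refers to the type mfa() that Erlang/dialyzer predefines as a module/function/arity triple, with the arity constrained to byte(), so specs can refer to it directly. A minimal sketch (module and function names are hypothetical):

```erlang
%% mfa() is a built-in type, roughly {atom(), atom(), byte()},
%% i.e. {Module, Function, Arity}, so it needs no local definition.
-module(mfa_demo).
-export([format/1]).

-spec format(mfa()) -> string().
format({M, F, A}) ->
    %% render e.g. {lists, map, 2} as "lists:map/2"
    lists:flatten(io_lib:format("~p:~p/~p", [M, F, A])).
```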
include rpm-specific files as sources instead of a patch
instead of directly changing the source tarball
Cleaned up the rpm Makefile.
If the target queue died normally we don't care, and if it died
abnormally the reason is logged by the queue supervisor. In both cases
we treat the message as unrouted.
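The behaviour described above can be sketched as follows; the message protocol and function names are hypothetical, but the pattern (monitor the target queue, treat any 'DOWN' as unrouted) is standard Erlang:

```erlang
%% Hypothetical sketch: deliver to a queue process, and if it has
%% died, for whatever reason, treat the message as unrouted rather
%% than propagating the error.
deliver(QPid, Message) ->
    MRef = erlang:monitor(process, QPid),
    QPid ! {deliver, self(), Message},
    receive
        {delivered, QPid} ->
            erlang:demonitor(MRef, [flush]),
            routed;
        {'DOWN', MRef, process, QPid, _Reason} ->
            %% Normal death: we don't care. Abnormal death: the queue
            %% supervisor has already logged the reason. Either way,
            %% the message counts as unrouted.
            unrouted
    end.
```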
With this in place I am finally unable to make rabbit grind to a halt
due to garbage being held by idle processes.
My tests got stuck after about an hour, and the cause was
buffering_proxies holding on to memory.
In my experiments I encountered situations where rabbit would not
recover from a high memory alert even though all messages had been
drained from it. By inspecting the running processes I determined that
queue and channel processes sometimes hung on to garbage. Erlang's gc
is per-process and triggered by process reduction counts, which means
an idle process will never perform a gc. This explains the behaviour -
the publisher channel goes idle when channel flow control is activated
and the queue process goes idle once all messages have been drained
from it.

Hibernating idle processes forces a gc, as well as generally reducing
memory consumption. Currently only channel and queue processes are
hibernating, since these are the only two that seemed to be causing
problems in my tests. We may want to extend hibernation to other
processes in the future.
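A minimal sketch of the hibernation technique, assuming a plain OTP gen_server (do_work is a placeholder): returning hibernate as the third element of the reply tuple forces a garbage collection and puts the idle process to sleep until the next message arrives.

```erlang
handle_cast(Msg, State) ->
    NewState = do_work(Msg, State),
    %% 'hibernate' instead of a timeout: the process garbage-collects,
    %% compacts its stack, and sleeps until the next message, so an
    %% idle process cannot sit on garbage indefinitely.
    {noreply, NewState, hibernate}.
```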
The default 80% is just too low for many systems - I have less than
that on tanto most of the time.
It remains to be seen whether the new figure works ok for most users.
The former triggered errors in the latter
The buffering_proxy:mainloop was unconditionally requesting new
messages from the proxy. It should only do that when it has just
finished handling the messages given to it by the proxy in response to
a previous request, and not after handling a direct message.
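A hypothetical reconstruction of the corrected loop; the message shapes and helper names are invented, but they illustrate the fix: request the next batch only after finishing a batch sent by the proxy, never after handling a direct message.

```erlang
%% handle_message/2 is a placeholder for the real message handler.
mainloop(ProxyPid, State) ->
    receive
        {proxied_messages, Msgs} ->
            State1 = lists:foldl(fun handle_message/2, State, Msgs),
            %% we have just finished a batch from the proxy, so now
            %% (and only now) request the next one
            ProxyPid ! {next, self()},
            mainloop(ProxyPid, State1);
        DirectMsg ->
            %% a direct message does not complete a proxy batch,
            %% so do not request more messages here
            mainloop(ProxyPid, handle_message(DirectMsg, State))
    end.
```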
while building on Debian systems. Unfortunately
.spec doesn't have 'not' logic.
When we fire off lots of gen_server:calls in parallel, we may create
enough work for the VM to cause the calls to time out - since the
amount of work that can actually be done in parallel is finite.
The fix is to adjust the timeout based on the total workload.
Alternatively we could not have any timeout at all, but
that is bad Erlang style since a small error somewhere could result in
stuck processes.

I moved the parallelisation - and hence timeout modulation - from the
channel into the amqqueue module, changing the API in the process -
commit, rollback and notify_down now all operate on lists of
QPids (and I've renamed the functions to make that clear). The
alternative would have been to add Timeout params to these
three functions, but I reckon the API is cleaner this way,
particularly considering that rollback doesn't actually do a call - it
does a cast and hence doesn't require a timeout - so in the
alternative API we'd either have to expose that fact indirectly by not
having a Timeout param, or have a bogus Timeout param, neither of
which is particularly appealing.

I considered making the functions take sets instead of lists, since
that's what the channel code produces, plus sets have a more efficient
length operation. However, API-wise I reckon lists are nicer, plus it
means I can give a more precise type to dialyzer - sets would be
opaque and non-polymorphic.
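The timeout modulation can be sketched as below. The function name and base timeout are illustrative, and the calls are shown sequentially for brevity (the commit issues them in parallel), but the scaling idea is the same:

```erlang
call_all(QPids, Request) ->
    BaseTimeout = 5000,
    %% allow BaseTimeout per queue rather than in total, so a large
    %% workload does not cause spurious timeouts when the VM cannot
    %% service all calls at once
    Timeout = BaseTimeout * max(1, length(QPids)),
    [gen_server:call(QPid, Request, Timeout) || QPid <- QPids].
```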