Commit message | Author | Age | Files | Lines
with test cases to validate functionality
and handle the case where both props are available in both binary and
decoded form.
|
|\ \
| |/ |
|
| |\ |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
to make it simpler and shorter
without having a msg to receive. We have no idea if this is possible. Nevertheless, in the previous code, it would block waiting for a msg (mirroring OTP behaviour). Now it'll work, but we have to avoid a potential crash elsewhere.
20980). However, we should definitely make sure we receive at least 1 msg when coming out of hibernate, and in loop we don't care too much. Also, use now() to seed the RNG, as Erlang doesn't do it for you (RNG state is implicitly per process).
Introduced drain explicitly because to do otherwise would have made life even harder. Everything addressed as per bug and IM. Test once for functions being exported and cache
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
backoff as discussed with Matthias over IM. Documented in comments, reproduced here:
%% 5) init can return a 4th arg, {backoff, InitialTimeout,
%% MinimumTimeout, DesiredHibernatePeriod} (all in
%% milliseconds). Then, on all callbacks which can return a timeout
%% (including init), timeout can be 'hibernate'. When this is the
%% case, the current timeout value will be used (initially, the
%% InitialTimeout supplied from init). After this timeout has
%% occurred, handle_pre_hibernate/1 will be called. If that returns
%% {hibernate, State} then the process will be hibernated. Upon
%% awaking, a new current timeout value will be calculated, and then
%% handle_post_hibernate/1 will be called. The purpose is that the
%% gen_server2 takes care of adjusting the current timeout value such
%% that the process will increase the timeout value repeatedly if it
%% is unable to sleep for the DesiredHibernatePeriod. If it is able to
%% sleep for the DesiredHibernatePeriod it will decrease the current
%% timeout down to the MinimumTimeout, so that the process is put to
%% sleep sooner (and hopefully for longer). In short, should a process
%% using this receive a burst of messages, it should not hibernate
%% between those messages, but as the messages become less frequent,
%% the process will not only hibernate, it will do so sooner after
%% each message.
%%
%% Normal timeout values (i.e. not 'hibernate') can still be used, and
%% if they are used then the handle_info(timeout, State) will be
%% called as normal. In this case, returning 'hibernate' from
%% handle_info(timeout, State) will not hibernate the process
%% immediately, as it would if backoff wasn't being used. Instead
%% it'll wait for the current timeout as described above, before
%% calling handle_pre_hibernate(State).
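As a minimal sketch, the return shape described above could be used from a callback module like the following. The module name, state, and timeout values are hypothetical illustrations, not taken from the commits themselves:

    %% Hypothetical gen_server2 callback module illustrating the
    %% {backoff, ...} 4th init return element described above.
    %% The numeric values are illustrative only.
    -module(my_server).
    -behaviour(gen_server2).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             handle_pre_hibernate/1, handle_post_hibernate/1,
             terminate/2, code_change/3]).

    init([]) ->
        %% {backoff, InitialTimeout, MinimumTimeout,
        %%  DesiredHibernatePeriod} - all in milliseconds.
        {ok, no_state, hibernate, {backoff, 1000, 1000, 10000}}.

    %% Returning 'hibernate' as the timeout uses the current
    %% (dynamically adjusted) timeout value before hibernating.
    handle_call(_Msg, _From, State) -> {reply, ok, State, hibernate}.
    handle_cast(_Msg, State)        -> {noreply, State, hibernate}.
    handle_info(_Msg, State)        -> {noreply, State, hibernate}.

    handle_pre_hibernate(State)  -> {hibernate, State}.
    handle_post_hibernate(State) -> {noreply, State}.

    terminate(_Reason, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.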
ordinary timeouts are not shown as part of the status, so it doesn't
make sense to show the special ones. Also, the code was incorrect.
gen_server2 to R13B1 - this was a change originally made by Matthias to ensure that messages cast to remote nodes are sent in order
b) Add guards and slightly relax name/1 so that it works in R11B5. All tests pass in R11B5, and manual testing of the binary backoff hibernation shows that it too works.
Adjusted amqqueue_process to use it. Added documentation. Tested thoroughly with explicit test module (not added), and full test suite, which all passed. Existing tests further up in this bug similarly pass and demonstrate code is functioning correctly.
|
| | |
| | |
| | |
| | | |
slip badly behind the shipped version.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
reply and noreply functions. This means that the now() value there includes computation relating to the last message in. This is maybe not desirable, but the alternative is to wrap all of handle_cast, handle_call and handle_info. Nevertheless, testing shows this works:
in the erlang client:
Conn = amqp_connection:start("guest", "guest", "localhost"),
Chan = lib_amqp:start_channel(Conn),
[begin Q = list_to_binary(integer_to_list(R)), Q = lib_amqp:declare_queue(Chan, Q) end || R <- lists:seq(1,1000)],
Props = (amqp_util:basic_properties()).
[begin Q = list_to_binary(integer_to_list(R)), ok = lib_amqp:publish(Chan, <<"">>, Q, <<0:(8*1024)>>, Props) end || _ <- lists:seq(1,1500), R <- lists:seq(1,1000)].
Then, after that lot's gone in, in a shell do:
watch -n 2 "time ./scripts/rabbitmqctl list_queues | tail"
The times for me start off at about 2.3 seconds, then drop rapidly to 1.4 and then 0.2 seconds and stay there.
RABBITMQ_SERVER_START_ARGS= line being in the Makefile but it should be dealt with in a different bug.
|
| | | | | |
|
| | |\ \ \
| |_|/ / /
|/| | | | |
|
| | | | |
| | | | |
| | | | |
| | | | | |
more sane
msg gets processed shows this is beneficial:
In the erlang client, hammer in a few million messages, with no consumer. This causes the channel mailbox to get pretty big. Without the higher priority, a delay of over a second before the conserve_message gets processed can be demonstrated. With it, the delay is a mere fraction of that.
EPEL now includes the erlang-R12B-5.6 package, which has a
/usr/bin/escript symlink.