| Commit message (Collapse) | Author | Age | Files | Lines |
to make it simpler and shorter
without having a message to receive. We do not know whether this is possible. Nevertheless, the previous code would block waiting for a message (mirroring OTP behaviour). Now it will work, but we have to avoid a potential crash elsewhere.
20980). However, we should definitely make sure we receive at least one message when coming out of hibernation; in the main loop we don't care too much. Also, use now() to seed the RNG, as Erlang doesn't do it for you (RNG state is implicitly per process).
Introduced drain explicitly, because doing otherwise would have made life even harder. Everything addressed as per the bug and IM. Test once for functions being exported, and cache the result.
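The "drain" introduced above amounts to pulling every message already waiting in the mailbox without ever blocking. A minimal sketch of that idea, using Python's `queue` module as a stand-in for an Erlang mailbox (the `drain` helper name is mine, not gen_server2's):

```python
import queue

def drain(mailbox: queue.Queue) -> list:
    """Collect every message currently queued, without ever blocking."""
    messages = []
    while True:
        try:
            messages.append(mailbox.get_nowait())
        except queue.Empty:
            return messages

mailbox = queue.Queue()
for msg in ("a", "b", "c"):
    mailbox.put(msg)
print(drain(mailbox))  # -> ['a', 'b', 'c']
print(drain(mailbox))  # -> []
```

The point of draining rather than receiving one message at a time is that the process can handle a burst atomically and then decide whether to hibernate, instead of blocking on an empty mailbox.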
backoff as discussed with Matthias over IM. Documented in comments, reproduced here:
%% 5) init can return a 4th arg, {backoff, InitialTimeout,
%% MinimumTimeout, DesiredHibernatePeriod} (all in
%% milliseconds). Then, on all callbacks which can return a timeout
%% (including init), timeout can be 'hibernate'. When this is the
%% case, the current timeout value will be used (initially, the
%% InitialTimeout supplied from init). After this timeout has
%% occurred, handle_pre_hibernate/1 will be called. If that returns
%% {hibernate, State} then the process will be hibernated. Upon
%% awaking, a new current timeout value will be calculated, and then
%% handle_post_hibernate/1 will be called. The purpose is that the
%% gen_server2 takes care of adjusting the current timeout value such
%% that the process will increase the timeout value repeatedly if it
%% is unable to sleep for the DesiredHibernatePeriod. If it is able to
%% sleep for the DesiredHibernatePeriod it will decrease the current
%% timeout down to the MinimumTimeout, so that the process is put to
%% sleep sooner (and hopefully for longer). In short, should a process
%% using this receive a burst of messages, it should not hibernate
%% between those messages, but as the messages become less frequent,
%% the process will not only hibernate, it will do so sooner after
%% each message.
%%
%% Normal timeout values (i.e. not 'hibernate') can still be used, and
%% if they are used then the handle_info(timeout, State) will be
%% called as normal. In this case, returning 'hibernate' from
%% handle_info(timeout, State) will not hibernate the process
%% immediately, as it would if backoff wasn't being used. Instead
%% it'll wait for the current timeout as described above, before
%% calling handle_pre_hibernate(State).
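The adjustment rule documented above can be rendered as a short sketch. This is an illustrative Python version of the idea only; the function name, the doubling on a short sleep, and the halving on a long sleep are my assumptions, not gen_server2's actual internals:

```python
def adjust_timeout(current_ms: int, slept_ms: int,
                   minimum_ms: int, desired_hibernate_ms: int) -> int:
    """Recompute the hibernate timeout after the process wakes up.

    If the process could not sleep for the desired period, back off by
    growing the timeout; if it slept long enough, shrink the timeout
    toward the minimum so it hibernates sooner next time.
    (Doubling/halving is an assumption for illustration.)
    """
    if slept_ms < desired_hibernate_ms:
        return current_ms * 2                    # burst: hibernate later
    return max(minimum_ms, current_ms // 2)      # quiet: hibernate sooner

# A burst of messages keeps pushing the timeout up...
t = 50
for _ in range(3):
    t = adjust_timeout(t, slept_ms=10, minimum_ms=50,
                       desired_hibernate_ms=10_000)
print(t)  # -> 400
# ...and sustained quiet brings it back down to the minimum.
for _ in range(5):
    t = adjust_timeout(t, slept_ms=20_000, minimum_ms=50,
                       desired_hibernate_ms=10_000)
print(t)  # -> 50
```

This matches the behaviour described in the comments: frequent messages drive the timeout up so the process stops hibernating between them, and as messages become less frequent the timeout decays toward MinimumTimeout so hibernation happens sooner after each message.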
ordinary timeouts are not shown as part of the status, so it doesn't
make sense to show the special ones. Also, the code was incorrect.
gen_server2 to R13B1 - this was a change originally made by Matthias to ensure that messages cast to remote nodes are sent in order.
b) Add guards and slightly relax name/1 so that it works in R11B5. All tests pass in R11B5, and manual testing of the binary backoff hibernation shows that it works too.
Adjusted amqqueue_process to use it. Added documentation. Tested thoroughly with an explicit test module (not added) and the full test suite, all of which passed. The existing tests further up in this bug similarly pass and demonstrate that the code is functioning correctly.
slip badly behind the shipped version.
reply and noreply functions. This means that the now() value there includes computation relating to the last message in. This is perhaps not desirable, but the alternative would be to wrap all of handle_cast, handle_call and handle_info. Nevertheless, testing shows this works:
in the erlang client:
Conn = amqp_connection:start("guest", "guest", "localhost"),
Chan = lib_amqp:start_channel(Conn),
[begin Q = list_to_binary(integer_to_list(R)), Q = lib_amqp:declare_queue(Chan, Q) end || R <- lists:seq(1,1000)],
Props = (amqp_util:basic_properties()).
[begin Q = list_to_binary(integer_to_list(R)), ok = lib_amqp:publish(Chan, <<"">>, Q, <<0:(8*1024)>>, Props) end || _ <- lists:seq(1,1500), R <- lists:seq(1,1000)].
Then, after that lot's gone in, in a shell do:
watch -n 2 "time ./scripts/rabbitmqctl list_queues | tail"
The times for me start off at about 2.3 seconds, then drop rapidly to 1.4 and then 0.2 seconds and stay there.
Further QA is still required.
This involved some substantial changes to the queue internal data
structures - mostly by choice; the new design is cleaner:
- We no longer keep a list of consumers in the channel
records. Now the channel records just contain a consumer count
instead, and that's only there for efficiency so we can more
easily tell when we need to register/unregister with the limiter.
- We now keep *two* consumer queues - one of active consumers
(that's the one we've always had) and one of blocked consumers.
We round-robin on the first one as before, and move things between the
two queues when blocking/unblocking channels. When doing so, the
relative order of a channel's consumers is preserved, so the effects
of any round-robining of the active consumers are carried through to
the blocked consumers when they are blocked, and back to the active
consumers when they are unblocked.
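The active/blocked split described above can be sketched with two deques. This is an illustrative Python rendering, not RabbitMQ's actual code; the helper names are mine, and consumers are modelled as (channel, tag) pairs. It shows round-robin delivery on the active queue and order-preserving moves when a channel blocks or unblocks:

```python
from collections import deque

active = deque()    # consumers eligible for delivery, round-robin order
blocked = deque()   # consumers whose channel is blocked, order preserved

def deliver_next():
    """Round-robin: take the head consumer, deliver, requeue at the tail."""
    consumer = active.popleft()
    active.append(consumer)
    return consumer

def block_channel(ch):
    """Move ch's consumers to the blocked queue, preserving relative order."""
    global active
    moving = [c for c in active if c[0] == ch]
    active = deque(c for c in active if c[0] != ch)
    blocked.extend(moving)

def unblock_channel(ch):
    """Move ch's consumers back, keeping the order they had while blocked."""
    global blocked
    moving = [c for c in blocked if c[0] == ch]
    blocked = deque(c for c in blocked if c[0] != ch)
    active.extend(moving)

active.extend([("ch1", "a"), ("ch2", "b"), ("ch1", "c")])
deliver_next()       # round-robins ("ch1", "a") to the tail
block_channel("ch1")
print(list(active))  # -> [('ch2', 'b')]
print(list(blocked)) # -> [('ch1', 'c'), ('ch1', 'a')]
unblock_channel("ch1")
print(list(active))  # -> [('ch2', 'b'), ('ch1', 'c'), ('ch1', 'a')]
```

Note how the round-robin that happened before blocking ("a" moved behind "c") survives the trip through the blocked queue, which is exactly the order-preservation property the commit message describes.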
We point to the MacPorts files of the default branch from our web site,
and they got broken with the merge of bug 20333. This hopefully fixes
that, but further QA is required.