| Commit message | Author | Age | Files | Lines |
distributed node
tell us little.
There are two things that may break network-wise when starting rabbit: epmd
may fail to start for some reason, or net_kernel may fail to start for some
reason. We're now starting epmd manually, because net_kernel needs it; also,
there doesn't seem to be a way to start it from Erlang, so we have to start it
from the shell scripts. Of course, running it in daemon mode completely hides
any errors it may encounter; i.e. there's no way to tell whether "epmd
-daemon" actually started the daemon.

There isn't any documentation for what errors net_kernel:start/1 may return,
so we print a vague error message and exit in case of an error. There's also
a bit of special handling for the case in which epmd didn't start (detected
because something deep down in Erlang fails to start).
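Since "epmd -daemon" reports nothing, the only way to know it worked is to probe it afterwards. A minimal sketch of that probe follows — in Python for illustration only, since the real startup code is in shell scripts; the `epmd` flags used are the standard Erlang/OTP ones, and the `start_epmd_checked` name is made up here:

```python
import shutil
import subprocess

def start_epmd_checked():
    """Start epmd the way the startup scripts must (net_kernel needs it),
    then probe whether a daemon is actually reachable, since
    "epmd -daemon" swallows any startup errors."""
    if shutil.which("epmd") is None:
        return False  # no Erlang installation on this machine
    # Safe to run repeatedly: a second "epmd -daemon" is a no-op.
    subprocess.run(["epmd", "-daemon"], check=False)
    # "epmd -names" exits non-zero when it cannot reach a daemon.
    probe = subprocess.run(["epmd", "-names"], capture_output=True)
    return probe.returncode == 0
```

The probe-after-start shape is the point: daemonisation discards the child's exit status, so success can only be observed from the outside.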
And always start the new node on the current hostname; otherwise, it gets
started on 'nohost'.
Also, move print_error/2 to rabbit_misc, since it's now used by control,
plugins and prelaunch.

We need to start epmd manually now (because net_kernel:start/1 fails
otherwise). Running "epmd -daemon" repeatedly seems to have no adverse
effects.
performance along the way)
Previously we effectively had a credit_spec of {100,1}, i.e. the queue
would send up to 100 messages to a consumer channel/writer, and the
writer would 'ack' them individually. That is horrendously
inefficient:

- when draining a queue, after the queue had sent 100 messages it
  would block the consumer, unblock when the notify_sent 'ack' came in,
  send another message to the channel/writer, block again. So a vast
  amount of work per message.
- in a cluster, the notify_sent 'acks' effectively doubled the
  cross-cluster traffic.

We now use a scheme much like credit_flow. Except we cannot *actually*
use credit_flow because:

- rather than wanting to know whether a sender is lacking credit for
  *any* receiver, as indicated by credit_flow:blocked(), we need to know
  *which* receiver we are lacking credit for.
- (lack of) credit from a receiver should *not* propagate to senders,
  i.e. sender and receiver credits are completely decoupled. Instead the
  queue should, er, queue messages when receivers cannot keep up.

While we could modify credit_flow to accommodate the above, the changes
would be quite unpleasant and would not actually reduce the amount of
code vs implementing a more specialised scheme.

The downside is that the contract for using
rabbit_amqqueue:notify_sent becomes somewhat mysterious. In
particular it sets up a monitor for queues in the caller, and expects
the caller to invoke rabbit_amqqueue:notify_sent_queue_down when a
'DOWN' message is received.
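The per-receiver bookkeeping described above can be sketched as a toy model — in Python rather than the actual Erlang, and with made-up {200, 50} credit values, since the commit message does not state the new spec:

```python
# Toy model of per-receiver consumer credit, in the spirit of an
# {InitialCredit, MoreCreditAfter} scheme. Unlike credit_flow, credit
# is tracked per receiver, and running out of credit for one receiver
# does not block sends to any other.

INITIAL_CREDIT = 200      # assumed value, for illustration only
MORE_CREDIT_AFTER = 50    # assumed value, for illustration only

class ConsumerCredit:
    def __init__(self):
        self.credit = {}  # receiver -> remaining credit

    def is_blocked(self, receiver):
        # We need to know *which* receiver lacks credit, not merely
        # whether any receiver does (cf. credit_flow:blocked()).
        return self.credit.get(receiver, INITIAL_CREDIT) <= 0

    def record_send(self, receiver):
        # The queue spends one unit of this receiver's credit per message.
        self.credit[receiver] = \
            self.credit.get(receiver, INITIAL_CREDIT) - 1

    def record_ack(self, receiver):
        # One 'ack' now grants a batch of credit, instead of the old
        # one-ack-per-message {100,1} behaviour.
        self.credit[receiver] = \
            self.credit.get(receiver, INITIAL_CREDIT) + MORE_CREDIT_AFTER
```

With {100,1} the queue paid one notify_sent round trip per message; here a receiver that falls behind blocks only its own deliveries, and each 'ack' restores a whole batch of credit.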
...so there is just one process_channel_frame call site.
Also, ensure control_throttle isn't called twice, which would happen
when processing a 'channel.close_ok' frame. No harm in it, really, but
unnecessary.