| Commit message | Author | Age | Files | Lines |
That way we don't leave garbage - transactionally published, but
uncommitted messages - in the message store.
Also, we can get rid of the pending_commits state wart in
disk_queue. That is possible because both tx commits and queue
deletions are issued by the queue process and tx commits are
synchronous, so there is never a chance of there being a pending commit
when doing a deletion.
The API to the msg_store has changed: now instead of asking whether a
sync is needed for a set of msg ids, and subsequently requesting a
sync, we request a sync for a set of msg ids and supply a callback
that is invoked when that sync is done. That way the msg_store can
make its own decisions on when to sync, and less logic is required by
callers.
During queue deletion we must remove *all* queue messages from the
store, including those that are part of committed transactions for
which the disk_queue has not yet received the sync callback. To do
that we keep a record of these messages in a dict in the state. The
dict also ensures that we do not act on a sync callback involving a
queue which has since been deleted and perhaps recreated.
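The flow above can be sketched in Python (hypothetical names throughout; the actual code is Erlang and its API differs):

```python
class MsgStore:
    """Toy store: sync() takes a callback and the store decides when
    to run it (here, only when flush() is called)."""
    def __init__(self):
        self.messages = {}
        self._sync_requests = []

    def write(self, msg_id, msg):
        self.messages[msg_id] = msg

    def remove(self, msg_id):
        self.messages.pop(msg_id, None)

    def sync(self, msg_ids, on_synced):
        # Callers no longer poll "is a sync needed?"; they hand over a
        # callback which the store invokes once the data is on disk.
        self._sync_requests.append((msg_ids, on_synced))

    def flush(self):
        requests, self._sync_requests = self._sync_requests, []
        for msg_ids, on_synced in requests:
            on_synced(msg_ids)


class DiskQueue:
    """Tracks committed-but-not-yet-synced messages so that deletion
    can remove *all* of the queue's messages from the store."""
    def __init__(self, store):
        self.store = store
        self.pending = {}    # msg_id -> msg, awaiting the sync callback
        self.deleted = False

    def tx_commit(self, msgs):
        for msg_id, msg in msgs.items():
            self.store.write(msg_id, msg)
            self.pending[msg_id] = msg
        self.store.sync(list(msgs), self._on_synced)

    def _on_synced(self, msg_ids):
        if self.deleted:     # queue deleted (maybe recreated): ignore
            return
        for msg_id in msg_ids:
            self.pending.pop(msg_id, None)

    def delete(self):
        self.deleted = True
        # Remove committed-but-unsynced messages from the store too.
        for msg_id in list(self.pending):
            self.store.remove(msg_id)
        self.pending.clear()
```

The pending dict plays both roles described above: it names the messages deletion must remove, and the deleted flag guards against a stale sync callback arriving for a queue that no longer exists.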
This is a step on the path to getting rid of message attributes in msg_store.
msg_store:attrs was only being used in disk_queue:prune, to detect
when the store contained a non-persistent message and remove that
message from the store and the rabbit_disk_queue table.
Now rabbit_disk_queue records contain an IsPersistent flag. By making
the msg count delta generator pay attention to that flag we trim
non-persistent messages from the store during its initialisation, so
disk_queue:prune no longer needs to remove messages from the store; it
just needs to remove all messages from the rabbit_disk_queue table
which are no longer referenced by the store - hence the new
msg_store:contains function.
Keeping the IsPersistent flag in the rabbit_disk_queue table is
sub-optimal since it means we store it once per message reference
rather than just once per message. That's a small price to pay though
for the cleaner interaction between the disk_queue and msg_store, and
the opportunity to remove the notion of message attributes from
msg_store altogether.
Populating the new field in rabbit_disk_queue is straightforward in
most places except disk_queue:tx_commit. That used to just be given
{MsgId, IsDelivered} tuples, so I had to change the API to {MsgId,
IsDelivered, IsPersistent} tuples.
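A toy Python model of that pruning logic, with made-up names standing in for the mnesia table rows and the msg_store:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DiskQueueRow:
    msg_id: str
    is_delivered: bool
    is_persistent: bool   # the new field, now supplied at tx_commit


def trim_store(store_msg_ids, rows):
    """Msg count delta generation: drop any message that no row marks
    as persistent, so non-persistent messages do not survive restart."""
    persistent = {r.msg_id for r in rows if r.is_persistent}
    return store_msg_ids & persistent


def prune(rows, store_msg_ids):
    """disk_queue:prune analogue: keep only rows whose message the
    store still contains (cf. the new msg_store:contains function)."""
    return [r for r in rows if r.msg_id in store_msg_ids]
```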
associated minor refactorings
the guid to be the empty binary
since that's what it does
plus some minor cosmetic changes
This can actually fail, e.g. when a message got ack'ed but the
corresponding mnesia delete in dq hasn't been flushed yet.
load_messages and the pruning in dq take care of this situation.
limiting the size of the message cache is pointless. Firstly, the ets
memory info does not include binaries. Secondly, the cache is only
holding onto messages which current queue processes are holding onto,
so we are not actually leaking any memory, and the only cost is the
cost of the cache entries themselves, which should be small.
It turns out that we don't actually need the 'persistent'
attribute. So this saves us a potentially expensive interaction with
the msg_store.
and all the mode switching and memory management logic that goes with it.
The 2GB limitation of dets makes the disk_only mode not worthwhile.
In the process I refactored the msg_location access in msg_store
such that it shouldn't be much effort to plug in a different index store in
the future.
Also some minor tweaks and tidying up here and there.
This is more flexible than the previous ref_count function, allowing
the ref counts to be obtained without consuming any memory at the
supplying end in a variety of scenarios.
We use the dq_msg_loc ets/dets table to store the ref counts. That
table is later updated with the full details of the messages (their
file and position, etc). At the end we prune any entries that have a
ref count but no associated file - i.e. the referenced message
couldn't be found on disk.
This change should also fix the "All replicas on diskfull nodes are
not active yet" error observed in bug 21530 since we no longer need
the indices on the rabbit_disk_queue mnesia table which we identified
as the most likely cause of that error.
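Roughly, the load-and-prune sequence amounts to the following Python sketch (hypothetical names; the real code streams through an ets/dets table, dq_msg_loc, rather than dicts):

```python
def load_index(ref_counts, on_disk):
    """ref_counts: msg_id -> count, streamed from the delta generator;
    on_disk: msg_id -> (file, offset) for messages found in the files.
    Entries with a ref count but no file location are pruned."""
    # Seed the table with the ref counts first...
    index = {msg_id: {"ref_count": n} for msg_id, n in ref_counts.items()}
    # ...then fill in the full details (file, position) as the message
    # files are scanned, ignoring on-disk messages nobody references.
    for msg_id, location in on_disk.items():
        if msg_id in index:
            index[msg_id]["location"] = location
    # Prune: referenced, but the message couldn't be found on disk.
    return {m: e for m, e in index.items() if "location" in e}
```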
This makes matching faster and keeps record sizes smaller. It also
means we can get rid of one bit of state.
lists are more pleasant than sets in APIs, plus in our use we have a
list to start with.
The main motivation is to reduce the memory and on-disk footprint of
the guid from ~34 bytes to 16. But it turns out that this actually
results in a speed improvement of a few percent as well, even for
non-persistent messaging, presumably due to the memory management
effects and the fact that 16 byte binaries are easier to copy between
processes than the deep(ish) original guid structure.
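For illustration only: one way to derive a fixed 16-byte binary from a structured guid term is to hash its serialised form. This Python sketch shows the size reduction; it is not necessarily how the real code produces the binary:

```python
import hashlib
import pickle


def binary_guid(structured_guid):
    """Collapse a deep(ish) guid term into a flat 16-byte binary.
    MD5 is used here purely because it yields 16 bytes; the actual
    derivation in the Erlang code may differ."""
    return hashlib.md5(pickle.dumps(structured_guid)).digest()


# A nested tuple standing in for the original guid structure.
g = binary_guid((("node@host", 1234567890), 42))
```

A flat 16-byte binary is cheap to copy between processes and compare, which is consistent with the speed improvement noted above.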
since we prune more than just mnesia
giving it a name that describes what it does, and extracting recursion
vestiges into caller.
This is just a renaming exercise, but it turns rabbit_msg_store into
a general-purpose message store.
rabbit_mixed_queue and rabbit_disk_queue see guids, not non_neg_integers.
thus further generalising rabbit_msg_file
it's not used anywhere and was cluttering the api
Also, make type sigs more meaningful and do not include rabbit.hrl,
thus underlining the general nature of this module.
The msg_store knows nothing about queues, or message structure.
to match what we call the containing table
to match what we call the containing table
vaporise was wiping out the disk_only data (both as a file, and when
it was in mnesia). The result was that if the dq was in disk_only
mode before being vaporised, it would refuse to start up again. Thus
vaporise now pushes the queue back to ram_disk mode if necessary,
after wiping out the contents of the mnesia table. Finally, all tests
pass again.
heavily loaded - up the limit to 5 seconds. However, I suspect
something like 60 seconds is more likely to be a realistic value.