| Commit message | Author | Age | Files | Lines |
consistent.
...in the channel, queue and msg_store
My general rule for spelling these things was: if it's followed by
node, it should be disc; otherwise, it should be disk. So,
is_disc_node vs OnDisk. I managed to miss those two, though.
Moving to another branch... but I need the diff.
I don't want to risk breaking something with that right now. I'll
test and file another bug if it's safe.
A bit of trouble performing local upgrades on ram nodes: the upgrade
process takes down Mnesia, which causes it to lose its schema and its
recorded table info, which in turn causes the ensuing schema check to
fail. Solution: after doing the secondary upgrades, re-cluster the ram
nodes and wait for them to sync up again.
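The re-clustering step described above could look roughly like this (a sketch only; the helper name and the 30s timeout are assumptions, not the actual rabbit code):

```erlang
%% Sketch: re-attach a ram node to its disc nodes after the secondary
%% upgrades, then wait for its tables to sync. recluster_ram_node/1 and
%% the timeout are hypothetical.
recluster_ram_node(DiscNodes) ->
    {ok, _} = mnesia:change_config(extra_db_nodes, DiscNodes),
    ok = mnesia:wait_for_tables(mnesia:system_info(tables), 30000).
```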
Always backup and reset during clustering if the node type has
changed. Worst case, this ensures that we actually have an empty
mnesia dir with new ram nodes.
Also, preemptively leave a cluster before joining it. Suppose we have
a two-node cluster: the first node goes down, the second hard-resets,
the first node comes back up, and the second node tries to rejoin the
cluster with a different type. Since it hard-reset, the second node
doesn't know that it used to be part of the cluster, and the other
node is unaware that our node is supposed to have left. So, when
clustering, we always try to leave a cluster before joining it.
In leave_cluster/2, I added {aborted, {node_not_running, _}} to the
"not error" returns, because it looks similar to {badrpc, nodedown},
which was already there. This may be wrong.
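A minimal sketch of the leave-before-join behaviour, with the two "not error" returns treated as success (everything except the mnesia/rpc calls is hypothetical, not the real leave_cluster/2):

```erlang
%% Sketch: always try to leave the cluster before joining it. A node is
%% removed from an mnesia cluster by deleting its schema copy via any
%% running cluster node; returns that look like "the other node is down"
%% are tolerated, mirroring the "not error" cases mentioned above.
leave_then_join(ClusterNodes, JoinFun) ->
    case rpc:call(hd(ClusterNodes), mnesia, del_table_copy,
                  [schema, node()]) of
        {atomic, ok}                     -> ok;
        {aborted, {node_not_running, _}} -> ok;  %% possibly not an error
        {badrpc, nodedown}               -> ok;  %% cluster node is down
        Err                              -> throw({error, Err})
    end,
    JoinFun(ClusterNodes).
```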
Matthias points out that the only way for mnesia:change_config to
start creating tables is for the node to re-join the cluster (and get
its old table definitions from other nodes). Instead of fixing the
tables after that, we now reset inside the cluster command. Note that
in very old rabbit versions, we'd just prevent this by having the user
reset manually.
We don't need them anymore.
BTW, when generating the backup db directory name, we used to just
append the date (including minutes, seconds) to the current directory
name. But since we do an extra move_db/0 now (when converting from
disc to ram), some of the tests were fast enough to try to backup the
database twice in the same second. Needless to say, this would cause
an error. So, now we generate the new name as before, but if it's
already used, we wait 1s and try again.
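The retry scheme for the backup directory name could be sketched like this (the name format and function name are illustrative, not the exact ones used):

```erlang
%% Sketch: derive the backup dir name from the current time and retry
%% after 1s if two backups land in the same second and the name is
%% already taken. backup_dir_name/1 and the format are hypothetical.
backup_dir_name(MnesiaDir) ->
    {{Y, Mo, D}, {H, Mi, S}} = erlang:localtime(),
    Name = lists:flatten(
             io_lib:format("~s_~4..0w~2..0w~2..0w~2..0w~2..0w~2..0w",
                           [MnesiaDir, Y, Mo, D, H, Mi, S])),
    case filelib:is_dir(Name) of
        false -> Name;
        true  -> timer:sleep(1000),
                 backup_dir_name(MnesiaDir)
    end.
```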
The tests used to be slightly wrong. They'd expect:
    rabbitmqctl cluster
to create a cluster with a single ram node. It didn't; instead, it
created a cluster with a single disc node. Since we weren't actually
checking the resulting node type, everything went along fine.
assert_tables_copy_type is complicated by all the cases it has to
handle, namely converting disc -> ram and ram -> disc. For disc ->
ram, we first need to convert all the tables to ram, and only then
convert the schema (converting the schema first fails with "Disc
resident tables"). For ram -> disc, we first need to convert the
schema; otherwise, all the table conversions will fail with
'has_no_disc'.
Regarding an earlier commit, using mnesia:system_info(use_dir) to
check whether we have a disc node is wrong, because we create the
directory for the message_store anyway.
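The ordering constraint above can be sketched as follows (helper names are hypothetical; only the mnesia calls are real):

```erlang
%% Sketch of the conversion order described above.
%% disc -> ram: convert the tables first, the schema last.
convert_to_ram(Tables) ->
    [{atomic, ok} = mnesia:change_table_copy_type(T, node(), ram_copies)
        || T <- Tables],
    {atomic, ok} = mnesia:change_table_copy_type(schema, node(), ram_copies),
    ok.

%% ram -> disc: convert the schema first, the tables after.
convert_to_disc(Tables) ->
    {atomic, ok} = mnesia:change_table_copy_type(schema, node(), disc_copies),
    [{atomic, ok} = mnesia:change_table_copy_type(T, node(), disc_copies)
        || T <- Tables],
    ok.
```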
The main trouble with this is
mnesia:change_config(extra_db_nodes,...). If the current node doesn't
have a schema with all the tables created, it will get it from one of
the nodes it connects to. This is a problem if we're a disc node and
they're a ram node or vice versa. So, after we get the tables from
the other node and wait_for_remote_tables, we assert that their
storage location is correct and change it if not. If the other node
is disc and we are ram, this unfortunately means that the tables are
first written to disc and then converted to ram, but since this is a
one-off operation during startup, it shouldn't be a problem.
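The assert-and-fix step could look roughly like this (a sketch; the helper name is an assumption):

```erlang
%% Sketch: after mnesia:change_config(extra_db_nodes, ...) has pulled
%% the table definitions from the cluster, check each table's local
%% storage type and convert it if it doesn't match what this node
%% wants. ensure_copy_type/2 is hypothetical.
ensure_copy_type(Tables, Desired) ->
    [case mnesia:table_info(T, storage_type) of
         Desired -> ok;
         _Other  -> {atomic, ok} =
                        mnesia:change_table_copy_type(T, node(), Desired)
     end || T <- Tables],
    ok.
```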
I've disabled disc-to-ram conversions for now.
mnesia:change_table_copy_type(schema, node(), ram_copies) half-fails
silently if mnesia isn't running: it removes the stuff in the db dir,
but that gets recreated on the next mnesia:start/0.
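One way to avoid the silent half-failure described above is to refuse the schema conversion unless mnesia is actually running (a sketch; the guard function is hypothetical):

```erlang
%% Sketch: only attempt the schema conversion when mnesia reports
%% itself as running, side-stepping the half-failure noted above.
schema_to_ram() ->
    case mnesia:system_info(is_running) of
        yes -> mnesia:change_table_copy_type(schema, node(), ram_copies);
        _   -> {error, mnesia_not_running}
    end.
```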
Gzips don't normally print anything, so they're fine.