| Commit message (Collapse) | Author | Age | Files | Lines |
The format string is hardcoded as "~s~n" in
[ERTS](https://github.com/erlang/otp/blame/OTP-26.0/erts/emulator/beam/utils.c#L878).
Using the Unicode modifier resulted in a mismatch.
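A minimal sketch of how the two modifiers disagree (hypothetical demo module, not the actual ERTS code; `io_lib:format/2` is standard OTP):

```erlang
-module(fmt_mismatch).
-export([demo/0]).

%% ERTS prints via the byte-oriented "~s~n". A caller formatting the
%% same UTF-8 binary with the Unicode modifier ("~ts") produces a
%% different character list, hence the mismatch.
demo() ->
    S = <<"héllo"/utf8>>,
    Bytes = lists:flatten(io_lib:format("~s", [S])),  %% latin-1 bytes
    Chars = lists:flatten(io_lib:format("~ts", [S])), %% decoded UTF-8
    {length(Bytes), length(Chars)}.
```

Here `demo/0` returns `{6, 5}`: the byte-oriented form yields two code points for the two UTF-8 bytes of "é", the Unicode-aware form one.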
Github Actions pipeline to compare build systems nightly
To catch any drift between the builds
as it categorises more nicely if there will be a future
"message_interceptors.outgoing.*" key.
We leave the advanced config file key because simple single-value
settings should not require using the advanced config file.
As reported in https://groups.google.com/g/rabbitmq-users/c/x8ACs4dBlkI/
plugins that implement rabbit_channel_interceptor break with
Native MQTT in 3.12 because Native MQTT does not use rabbit_channel anymore.
Specifically, these plugins don't work anymore in 3.12 when sending a message
from an MQTT publisher to an AMQP 0.9.1 consumer.
Two of these plugins are
https://github.com/rabbitmq/rabbitmq-message-timestamp
and
https://github.com/rabbitmq/rabbitmq-routing-node-stamp
This commit moves both plugins into rabbitmq-server.
Therefore, these plugins are deprecated starting in 3.12.
Instead of using these plugins, the user gets the same behaviour by
configuring rabbitmq.conf as follows:
```
incoming_message_interceptors.set_header_timestamp.overwrite = false
incoming_message_interceptors.set_header_routing_node.overwrite = false
```
While the two plugins were incompatible with each other, this commit
allows setting both headers.
We name the top level configuration key `incoming_message_interceptors`
because only incoming messages are intercepted.
Currently, only `set_header_timestamp` and `set_header_routing_node` are
supported. (We might support more in the future.)
Both can set `overwrite` to `false` or `true`.
The meaning of `overwrite` is the same as documented in
https://github.com/rabbitmq/rabbitmq-message-timestamp#always-overwrite-timestamps
i.e. whether headers should be overwritten if they are already present
in the message.
Both `set_header_timestamp` and `set_header_routing_node` behave exactly
like the `rabbitmq-message-timestamp` and `rabbitmq-routing-node-stamp` plugins,
respectively.
Upon node boot, the configuration is stored in persistent_term so that it
causes no performance penalty in the default case where these settings
are disabled.
The channel and MQTT connection process will intercept incoming messages
and - if configured - add the desired AMQP 0.9.1 headers.
For now, this allows using Native MQTT in 3.12 with the old plugins'
behaviour.
In the future, once "message containers" are implemented,
we can think about more generic message interceptors where plugins can be
written to modify arbitrary headers or message contents for various protocols.
Likewise, in the future, once MQTT 5.0 is implemented, we can think
about an MQTT connection interceptor, which could function similarly to a
`rabbit_channel_interceptor` and allow any MQTT packet to be modified.
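The persistent_term mechanism described above can be sketched as follows (the module name, the persistent_term key, and the header names are hypothetical, and a message is reduced to its headers proplist for brevity):

```erlang
-module(msg_interceptors_sketch).
-export([init/1, intercept/1]).

-define(KEY, incoming_message_interceptors_sketch).

%% At boot, store the configured interceptors once. persistent_term
%% reads are constant-time and copy-free, so the default case with no
%% interceptors adds no per-message cost.
init(Interceptors) ->
    persistent_term:put(?KEY, Interceptors).

%% Per message: look up the configuration; an empty list means no work.
intercept(Headers) ->
    case persistent_term:get(?KEY, []) of
        []           -> Headers;
        Interceptors -> lists:foldl(fun apply_one/2, Headers, Interceptors)
    end.

apply_one({set_header_timestamp, Overwrite}, Headers) ->
    set_header(<<"timestamp_in_ms">>, os:system_time(millisecond),
               Overwrite, Headers);
apply_one({set_header_routing_node, Overwrite}, Headers) ->
    set_header(<<"x-routed-by">>, atom_to_binary(node()), Overwrite, Headers).

%% Only overwrite an existing header when Overwrite is true.
set_header(Name, Value, Overwrite, Headers) ->
    case lists:keymember(Name, 1, Headers) of
        true when not Overwrite -> Headers;
        _ -> lists:keystore(Name, 1, Headers, {Name, Value})
    end.
```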
Left as it was, a failure enabling the feature flags leaves the
cluster in an inconsistent state where the joined nodes think
the joining node is already a member, but the joining node
believes it is a standalone node. Thus, later join_cluster commands
fail with an inconsistent cluster error.
[Why]
The Feature flags registry is implemented as a module called
`rabbit_ff_registry` recompiled and reloaded at runtime.
There is a copy on disk which is a stub responsible for triggering the
first initialization of the real registry and for pleasing Dialyzer. Once the
initialization is done, this stub calls `rabbit_ff_registry` again to
get an actual return value. This is kind of recursive: the on-disk
`rabbit_ff_registry` copy calls the `rabbit_ff_registry` copy generated
at runtime.
Early during RabbitMQ startup, there could be multiple processes
indirectly calling `rabbit_ff_registry` and possibly triggering that
first initialization concurrently. Unfortunately, there is a slight
chance of a race condition and a deadlock:
0. No `rabbit_ff_registry` is loaded yet.
1. Both process A and B call `rabbit_ff_registry:something()` indirectly
which triggers two initializations in parallel.
2. Process A acquires the lock first and finishes the initialization. A
new registry is loaded and the old `rabbit_ff_registry` module copy
is marked as "old". At this point, process B still references that
old copy because `rabbit_ff_registry:something()` is up above in its
call stack.
3. Process B acquires the lock, prepares the new registry and tries to
soft-purge the old `rabbit_ff_registry` copy before loading the new
one.
This is where the deadlock happens: process B requests the Code server
to purge the old copy, but the Code server waits for process B to stop
using it.
[How]
With this commit, process B calls `erlang:check_process_code/2` before
asking for a soft purge. If it is using an old copy, it skips the purge
because it will deadlock anyway.
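The fix described under [How] can be sketched as follows (a simplified shape with a hypothetical module and function name, not the actual feature-flags code; `erlang:check_process_code/2` and `code:soft_purge/1` are standard OTP):

```erlang
-module(ff_purge_sketch).
-export([maybe_purge_old_registry/1]).

%% Ask the code server to purge the old copy of Mod only if this
%% process is not itself executing that old copy; otherwise the code
%% server waits for us while we wait for it, i.e. a deadlock.
maybe_purge_old_registry(Mod) ->
    case erlang:check_process_code(self(), Mod) of
        true  -> skip;                 %% old copy is on our call stack
        false -> code:soft_purge(Mod)  %% safe; returns true on success
    end.
```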
bazel run gazelle
In OTP 26, our custom type tuple(A,B) starts interfering
with the built-in type tuple().
Therefore rename tuple(A,B) to optimised_tuple(A,B).
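A minimal illustration of the rename (hypothetical module; the real definition lives elsewhere in the code base):

```erlang
-module(opt_tuple_sketch).
-export([pair/2]).
-export_type([optimised_tuple/2]).

%% In OTP 26, a user-defined parameterised type named tuple/2
%% interferes with the built-in tuple() type, so it is renamed.
-type optimised_tuple(A, B) :: {A, B}.

-spec pair(A, B) -> optimised_tuple(A, B).
pair(A, B) -> {A, B}.
```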
classic.
stop_clear is deprecated
CQv1: Don't limit messages in memory based on consume rate
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The v1 index is not optimised for reading messages except when
the entire segment is read. So we always do that.
This change was made because, when reads are inefficient and
TTL is used, the queue can become unresponsive while expired
messages are being dropped. In that case the queue may drop messages
more slowly than they expire and, as a result, will not process any
Erlang messages until it has dropped all messages in the queue.
Close #5437
As this is preferred in rules_erlang 3.9.14
since this broke erlang_ls;
requires rules_erlang 3.9.13
Adjust `-include(...` in some tests to work with both bazel and make
Ensure monitor is started when dequeuing
Fixes #4976
* Use reachable nodes to start stream coordinator Ra cluster
rabbit_nodes:list_running/0 was previously used, but it returns
only nodes with the rabbit application running. The application
may not be running when streams are created, e.g. when importing
definitions on startup. The function would then return an empty
list, which makes the Ra cluster startup fail.
rabbit_nodes:list_reachable/0 returns cluster nodes, regardless
of the status of the rabbit application, which is enough in this case.
* Return existing queue record on queue creation, if any,
and do not expect the record to be exactly
equal to the passed-in queue parameter: it is not
for streams (spotted by trying to import stream definitions
on startup several times).
* Add an assertion
---------
Co-authored-by: Michal Kuratczyk <mkuratczyk@vmware.com>
Recovery terms: use ram_file on start, but not on shutdown
Use gazelle for some maintenance of bazel BUILD files
Bazel build files are now maintained primarily with `bazel run
gazelle`. This will analyze and merge changes into the build files as
necessitated by certain code changes (e.g. the introduction of new
modules).
In some cases there are hints to gazelle in the build files, such as `#
gazelle:erlang...` or `# keep` comments. xref checks on plugins that
depend on the cli are a good example.
Demonitor when timing out waiting for quorum queue leader to be gone.
This commit fixes the failing test:
```
make -C deps/rabbit ct-publisher_confirms_parallel t=quorum_queue:confirm_minority
```
which caused the channel to crash:
```
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> ** Reason for termination ==
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> ** {function_clause,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [{rabbit_channel,handle_info,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [{'DOWN',#Ref<0.1252370609.3538681857.123766>,process,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {'%2F_quorum_queue_confirm_minority',
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> 'rmq-ct-mnesia_store-1-21000@localhost'},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> shutdown},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {ch,{conf,running,rabbit_framing_amqp_0_9_1,58,<0.1240.0>,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> <0.2389.0>,<0.1240.0>,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> <<"127.0.0.1:42312 -> 127.0.0.1:21000">>,undefined,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {user,<<"guest">>,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [administrator],
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [{rabbit_auth_backend_internal,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> #Fun<rabbit_auth_backend_internal.3.61791021>}]},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> <<"/">>,<<>>,<0.1241.0>,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [{<<"publisher_confirms">>,bool,true},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {<<"exchange_exchange_bindings">>,bool,true},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {<<"basic.nack">>,bool,true},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {<<"consumer_cancel_notify">>,bool,true},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {<<"connection.blocked">>,bool,true},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {<<"authentication_failure_close">>,bool,true}],
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> none,0,134217728,1800000,#{},1000000000},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {lstate,<0.2390.0>,false},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> none,1,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {0,[],[]},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {state,#{},erlang},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> #{},#{},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {state,none,5000,undefined},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> false,1,
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {rabbit_confirms,undefined,#{}},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [],[],none,flow,[],
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {rabbit_queue_type,#{}},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> #Ref<0.1252370609.3538681858.121367>,false}],
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> [{file,"rabbit_channel.erl"},{line,767}]},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {gen_server2,handle_msg,2,[{file,"gen_server2.erl"},{line,1067}]},
2023-04-18 12:03:28.476525+00:00 [error] <0.2391.0> {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}
2023-04-18 12:03:
```
Make it possible to configure per-node runtime parameter limits
Introduce 'ctl update_vhost_metadata'
that can be used to update the description, tags or default queue type of
any existing virtual host.
Closes #7912, #7857.
#7912 will need an HTTP API counterpart change.
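A usage sketch of the new command (the flag names are assumed to mirror the `add_vhost` metadata options; verify against `rabbitmqctl help update_vhost_metadata` on a running node):

```shell
# Update the metadata of an existing virtual host; each flag is optional.
rabbitmqctl update_vhost_metadata "vhost-1" \
    --description "Team A's virtual host" \
    --tags "production,team-a" \
    --default-queue-type "quorum"
```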