| Commit message | Author | Age | Files | Lines |
(take 2)
[Why]
The background is much the same as the one explained in the previous
version of this fix; see commit
e0a2f1027278bd0901bdaa9af6065abc676ff14f.
This time, the order of events that led to a similar deadlock is the
following:
0. No `rabbit_ff_registry` is loaded yet.
1. Processes A, B and C call `rabbit_ff_registry:something()` indirectly
which triggers two initializations in parallel.
* Process A did it from an explicit call to
`rabbit_ff_registry_factory:initialize_factory()` during RabbitMQ
boot.
* Processes B and C called it indirectly because they checked whether
a feature flag was enabled.
2. Process B acquires the lock first and finishes the initialization. A
new registry is loaded and the old `rabbit_ff_registry` module copy
is marked as "old". At this point, processes A and C still reference
that old copy because `rabbit_ff_registry:something()` is up above in
its call stack.
3. Process A acquires the lock, prepares the new registry and tries to
soft-purge the old `rabbit_ff_registry` copy before loading the new
one.
This is where the deadlock happens: process A requests the Code server
to purge the old copy, but the Code server waits for process C to stop
using it.
The difference from the steps described in the first bug fix attempt's
commit is that the process lingering on the deleted
`rabbit_ff_registry` (process C above) isn't the one that acquired the
lock; process A holds it.
That's why the first bug fix isn't effective in this case: it relied on
the assumption that the process lingering on the deleted
`rabbit_ff_registry` is the one attempting to purge the module.
[How]
In this commit, we go with a more drastic change. This time, we put a
wrapper in front of `rabbit_ff_registry` called
`rabbit_ff_registry_wrapper`. This wrapper is responsible for doing the
automatic initialization if the loaded registry is the stub module. The
`rabbit_ff_registry` stub now always returns `init_required` instead of
performing the initialization and calling itself recursively.
This way, processes linger on `rabbit_ff_registry_wrapper`, not on
`rabbit_ff_registry`. Thanks to this, the Code server can proceed with
the purge.
See #8112.
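A minimal sketch of the wrapper arrangement described above (function and factory names are assumptions inferred from this message, not the actual rabbitmq-server source):

```erlang
%% rabbit_ff_registry_wrapper.erl -- sketch only; is_enabled/1 and
%% initialize_factory/0 are assumed names based on this commit message.
-module(rabbit_ff_registry_wrapper).
-export([is_enabled/1]).

%% The on-disk rabbit_ff_registry stub now simply returns init_required;
%% this wrapper performs the initialization and retries. Callers linger
%% on this module, never on rabbit_ff_registry itself, so the Code
%% server can always purge old rabbit_ff_registry copies.
is_enabled(FeatureName) ->
    case rabbit_ff_registry:is_enabled(FeatureName) of
        init_required ->
            ok = rabbit_ff_registry_factory:initialize_factory(),
            is_enabled(FeatureName);
        Ret ->
            Ret
    end.
```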
Github Actions pipeline to compare build systems nightly
To catch any drift between the builds
Move plugin rabbitmq-message-timestamp to the core
as it categorises more nicely if there is ever a future
"message_interceptors.outgoing.*" key.
We leave the advanced config file key because simple single-value
settings should not require using the advanced config file.
As reported in https://groups.google.com/g/rabbitmq-users/c/x8ACs4dBlkI/
plugins that implement rabbit_channel_interceptor break with
Native MQTT in 3.12 because Native MQTT no longer uses rabbit_channel.
Specifically, these plugins stop working in 3.12 when a message is sent
from an MQTT publisher to an AMQP 0.9.1 consumer.
Two of these plugins are
https://github.com/rabbitmq/rabbitmq-message-timestamp
and
https://github.com/rabbitmq/rabbitmq-routing-node-stamp
This commit moves both plugins into rabbitmq-server.
Therefore, these plugins are deprecated starting in 3.12.
Instead of using these plugins, the user gets the same behaviour by
configuring rabbitmq.conf as follows:
```
incoming_message_interceptors.set_header_timestamp.overwrite = false
incoming_message_interceptors.set_header_routing_node.overwrite = false
```
While the two plugins could not be used together, this commit
allows setting both headers.
We name the top level configuration key `incoming_message_interceptors`
because only incoming messages are intercepted.
Currently, only `set_header_timestamp` and `set_header_routing_node` are
supported. (We might support more in the future.)
Both can set `overwrite` to `false` or `true`.
The meaning of `overwrite` is the same as documented in
https://github.com/rabbitmq/rabbitmq-message-timestamp#always-overwrite-timestamps
i.e. whether headers should be overwritten if they are already present
in the message.
Both `set_header_timestamp` and `set_header_routing_node` behave exactly
like plugins `rabbitmq-message-timestamp` and `rabbitmq-routing-node-stamp`,
respectively.
Upon node boot, the configuration is stored in persistent_term so as not
to cause any performance penalty in the default case where these settings
are disabled.
The channel and MQTT connection process will intercept incoming messages
and - if configured - add the desired AMQP 0.9.1 headers.
For now, this allows using Native MQTT in 3.12 with the old plugins
behaviour.
In the future, once "message containers" are implemented,
we can think about more generic message interceptors where plugins can be
written to modify arbitrary headers or message contents for various protocols.
Likewise, in the future, once MQTT 5.0 is implemented, we can think
about an MQTT connection interceptor which could function similarly to a
`rabbit_channel_interceptor`, allowing any MQTT packet to be modified.
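The persistent_term approach described above might look roughly like this (a sketch with assumed module, key, and function names; the actual rabbitmq-server layout is not shown in this log):

```erlang
%% interceptor_lookup_sketch.erl -- illustrative only; module, key and
%% function names are assumptions, not the actual rabbitmq-server code.
-module(interceptor_lookup_sketch).
-export([store/1, interceptors/0]).

%% Called once at node boot: stash the parsed interceptor configuration,
%% e.g. [{set_header_timestamp, false}, {set_header_routing_node, false}].
store(Interceptors) when is_list(Interceptors) ->
    persistent_term:put({?MODULE, interceptors}, Interceptors).

%% Called on the hot path for every incoming message. persistent_term
%% reads are constant-time and copy-free, so the default (empty) case
%% costs almost nothing, matching the "no penalty when disabled" goal.
interceptors() ->
    persistent_term:get({?MODULE, interceptors}, []).
```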
Also exclude the .erlang.mk directory in gazelle
Exclude nested deps fetched by make from gazelle
UnsubscribeResponse in stream protocol doc
Adopt otp 25.2.3
Adopt otp 25.1.2.1
Adopt otp 25.0.4
Adopt otp 25.3.2
Adopt otp
3.11.16 release notes
bazel run gazelle-update-repos for Ra 2.6
rabbitmq_cli dialyze enhancements
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- eex
- elixir
- ex_unit
- iex
- logger
- mix
So that apps (like rabbitmq_cli) can dialyze against the extra
components.
So that they are no longer reported as unknown in
//deps/rabbitmq_cli:dialyze
This provides an Elixir/Erlang-agnostic way of providing them to other
Erlang rules.
Update 3.12.0 release notes
Pin Ra to 2.6.1
Synchronise feature flags before any changes to Mnesia membership
Left as it was, a failure enabling the feature flags leaves the
cluster in an inconsistent state where the joined nodes think
the joining node is already a member, but the joining node
believes it is a standalone node. Thus, later join_cluster commands
fail with an inconsistent cluster error.
Correctly use AMQP URI query parameter `password`
Fixes #8129
The query parameter `password` in an AMQP URI should only be used to set
a certificate password, *not* the login password. The login password is
set via the `amqp_authority` section as defined here -
https://www.rabbitmq.com/uri-spec.html
* Add a test that demonstrates the issue in #8129
* Modify `amqp_uri` so that the test passes
rabbitmq/fix-feature-flags-init-vs-code_server-deadlock
rabbit_feature_flags: Fix possible deadlock when calling the Code server
[Why]
The Feature flags registry is implemented as a module called
`rabbit_ff_registry` recompiled and reloaded at runtime.
There is a copy on disk: a stub responsible for triggering the
first initialization of the real registry and for pleasing Dialyzer.
Once the initialization is done, this stub calls `rabbit_ff_registry`
again to get an actual return value. This is somewhat recursive: the
on-disk `rabbit_ff_registry` copy calls the `rabbit_ff_registry` copy
generated at runtime.
Early during RabbitMQ startup, there could be multiple processes
indirectly calling `rabbit_ff_registry` and possibly triggering that
first initialization concurrently. Unfortunately, there is a slight
chance of race condition and deadlock:
0. No `rabbit_ff_registry` is loaded yet.
1. Both processes A and B call `rabbit_ff_registry:something()` indirectly
which triggers two initializations in parallel.
2. Process A acquires the lock first and finishes the initialization. A
new registry is loaded and the old `rabbit_ff_registry` module copy
is marked as "old". At this point, process B still references that
old copy because `rabbit_ff_registry:something()` is up above in its
call stack.
3. Process B acquires the lock, prepares the new registry and tries to
soft-purge the old `rabbit_ff_registry` copy before loading the new
one.
This is where the deadlock happens: process B requests the Code server
to purge the old copy, but the Code server waits for process B to stop
using it.
[How]
With this commit, process B calls `erlang:check_process_code/2` before
asking for a soft purge. If it is using an old copy, it skips the purge
because it would deadlock anyway.
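The guard described in [How] can be sketched as follows (the function name is an assumption based on this commit message; the surrounding initialization code is not shown in this log):

```erlang
%% Sketch only: skip the purge when this very process still runs old
%% rabbit_ff_registry code, since the Code server would otherwise wait
%% for us while we wait for it.
maybe_soft_purge_old_registry() ->
    case erlang:check_process_code(self(), rabbit_ff_registry) of
        false ->
            %% This process does not execute old rabbit_ff_registry
            %% code, so purging cannot deadlock with the Code server.
            _ = code:soft_purge(rabbit_ff_registry),
            ok;
        true ->
            %% We still execute an old copy ourselves; asking the Code
            %% server to purge it would deadlock, so skip the purge.
            ok
    end.
```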
Debugging a setup-beam issue:
https://github.com/erlef/setup-beam/issues/189
Peer discovery: shrink QQ replicas on forced node removal
the node is being removed