Commit message | Author | Age | Files | Lines
The [notification] batch_size parameter has had no effect since
workload partitioning was removed during the Stein cycle[1]. This change
fixes the ignored parameter and ensures it is used to enable
batch processing of notifications.
[1] 9d90ce8d37c0020077e4429f41c1ea937c1b3c1e
Change-Id: Id46679933cf96ecaca864aeae271052386b51815
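The batching behaviour this message describes can be sketched in a few lines (a minimal illustration only, not the oslo.messaging listener; `BatchBuffer`, its parameters, and the callback are hypothetical names used to show size/timeout-based flushing):

```python
import time


class BatchBuffer:
    """Hypothetical sketch: accumulate incoming notifications and hand
    them to a callback as a list once batch_size is reached or
    batch_timeout has elapsed since the first buffered message."""

    def __init__(self, batch_size, batch_timeout, process):
        self.batch_size = batch_size
        self.batch_timeout = batch_timeout
        self.process = process          # callback taking a list of messages
        self._buf = []
        self._first_at = None

    def add(self, message, now=None):
        # 'now' is injectable for testing; defaults to a monotonic clock
        now = time.monotonic() if now is None else now
        if not self._buf:
            self._first_at = now
        self._buf.append(message)
        if (len(self._buf) >= self.batch_size
                or now - self._first_at >= self.batch_timeout):
            self.flush()

    def flush(self):
        # deliver whatever is buffered as one batch
        if self._buf:
            self.process(self._buf)
            self._buf = []
            self._first_at = None
```

A consumer would register `process` as the endpoint that handles a whole batch at once instead of one notification per call.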
Change-Id: Ie882ff258d5a7b88caf29e64cd13f01cb5261326
Change-Id: I6ef8a3f7cd3ac2e824bd2a64a64953d13413f0f7
Change-Id: If004104d8920f33ab89c7584e685c67208c59675
Workload partitioning has been quite fragile and poorly performing, so
its use is not advised. It was useful for transformers; since transformers
are going away too, let's simplify the code base and remove it.
Change-Id: Ief2f0e00d3c091f978084da153b0c76377772f28
These features don't work well: rate-of-change metrics can still be
computed incorrectly even with pipeline partitioning enabled. Also,
backends like Gnocchi offer a better alternative for computing them.
This deprecates these two features so they can be removed in a couple
of releases.
Change-Id: I52362c69b7d500bfe6dba76f78403a9d376deb80
When `ack_on_event_error` is set to False, the notification is acked
anyway. The reason is the configuration of the batch notification
listener, which doesn't allow requeues.
Change-Id: I5511272c4dc2d5759cab8b9e695bbd9ed6a1bf6a
Closes-Bug: #1720329
Change-Id: I9da63dcf30c11b58298c6db89090fe9e27a8065a
currently we create a queue per pipeline, which is not necessary. it
uses more memory and doesn't necessarily distribute work more
effectively. this change hashes data to queues based on the manager,
but internally the data is still routed to a specific pipeline based
on event_type. this minimises queue usage while keeping the internal
code path the same.
Change-Id: I0ccd51f13457f208fe2ccedb6e680c91e132f78f
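The hashing idea could be sketched like this (a hypothetical helper, not ceilometer's actual code; the function name, queue-name format, and the pool size of 4 are all assumptions):

```python
import hashlib


def queue_for(manager, data_key, num_queues=4):
    """Hypothetical sketch: pick one of a fixed pool of per-manager
    queues by hashing a stable key (e.g. the event_type), instead of
    creating one queue per pipeline.

    The same key always maps to the same queue, so ordering within a
    key is preserved while the total number of queues stays bounded."""
    digest = hashlib.md5(data_key.encode()).hexdigest()
    index = int(digest, 16) % num_queues
    return '%s-queue-%d' % (manager, index)
```

Because routing is by key rather than by pipeline, the internal code path that dispatches to a specific pipeline based on event_type stays unchanged.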
event, meter (and any other custom pipeline) can be enabled/disabled
by setting the `pipelines` option under the [notification] section
Change-Id: Ia21256d0308457d077836e27b45d2acb8bb697e4
Closes-Bug: #1720021
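If the option works as the message describes, a deployment that only wants meter processing might disable the event pipeline like so (a hedged sketch based solely on the option named above; the value format is an assumption):

```ini
[notification]
# load only the meter pipeline; event (and any custom) pipelines stay off
pipelines = meter
```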
notification agent now just asks for pipeline managers and gets the
endpoints it should broadcast to from there. it only sets up a
listener for the main queue and a listener for the internal queue
(if applicable)
- pass the publishing/processing context into endpoints instead of the
manager. the context depends on whether partitioning is enabled
- move all endpoint/notifier setup to the respective pipeline managers
- change interim broadcast filtering to use event_type rather than
publisher_id so all filtering uses event_type.
- add a namespace to load supported pipeline managers
- remove some notification tests as they are redundant and only
differ in that they mock things other tests don't mock
- change the relevant_endpoint test to verify endpoints cover all pipelines
Related-Bug: #1720021
Change-Id: I9f9073e3b15c4e3a502976c2e3e0306bc99282d9
- they are essentially only used for testing.
- clean up stray pipeline references in polling tests
- remove stray mocks that aren't actually mocking anything
Change-Id: I5881c0926dde2247c4606fed26e60bc5e197cf48
- move sample/event specific pipeline models to their own module
- make grouping key computation part of the pipeline
- remove pipeline mocks from polling tests
Change-Id: I20349e48751090210f8a0074c4a735f1b7e74bc1
just let the periodic job decide if it needs to refresh.
Change-Id: I300967d926ea4b8b415aac4744fc7bd183b4cca4
Closes-Bug: #1730849
processing endpoints shouldn't dictate what targets are being listened
to. they should just process what they are given based on their filter.
move this logic to the notification agent so every processing endpoint
isn't defining the same set of targets to listen to.
ensure duplicate targets aren't created
Change-Id: I9ffe28b6406dcef88ef6861eb8a81e1a3ad786d2
make samples and events use a common endpoint class
Change-Id: I1d15783721f91ee90adfbac88cef2a44e0b23868
this broke when we switched to the tooz partitioner
- ensure we trigger a refresh if the group changes
- ensure we have a heartbeat or else members will just die.
- remove the retain_common_targets tests because they no longer make
sense. they were originally designed for when we had a listener per
pipeline, but that was changed in 726b2d4d67ada3df07f36ecfd81b0cf72881e159
- remove testing of the workload partitioning path in standard
notification agent tests
- correct the test_unique test to properly validate a single target
rather than the number of listeners we have.
- add a test to ensure group_state is updated when a member joins
- add a test to verify that the listener is assigned topics based on
the hashring
- add test to ensure group_state is updated when a member joins
- add test to verify that listener assigned topics based on hashring
Closes-Bug: #1729617
Change-Id: I5039c93e6845a148c24094f755a78870d49ec19f
Change-Id: I2102d0d90f3f39c255b621e2f8436818ec250362
Change-Id: I7720d20eab345a7835d57fac573332eca0e7d11e
this should've been removed with Id0c976b7e7e57fe9fd908376edc2c85dd1aa2abf
Change-Id: Icd524e778e91747761f182cf4a95b6d64d48913a
If the notification agent is stopped before the startup_delay,
terminate() throws an error about missing attributes.
This change fixes that.
Change-Id: I43a84ec82916a21df017d2222fea35339b9e13b0
This replaces the custom-made partitioning system based on a hashring
with the one provided by tooz.
Change-Id: I2321c92315accc5e5972138e7673d3a665df891e
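The consistent-hashing idea behind a hashring can be sketched in a few lines (an illustration only; tooz's real HashRing differs in API and in details such as replica points and node weights):

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring sketch. Each node is placed on the
    ring at many pseudo-random points ("replicas"); a key is assigned
    to the first node point found clockwise from the key's own hash."""

    def __init__(self, nodes, replicas=100):
        self._ring = []                       # sorted (hash, node) points
        for node in nodes:
            for i in range(replicas):
                h = self._hash('%s-%d' % (node, i))
                self._ring.append((h, node))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_node(self, key):
        # walk clockwise from the key's hash, wrapping around the ring
        h = self._hash(key)
        i = bisect.bisect(self._keys, h) % len(self._ring)
        return self._ring[i][1]
```

The property that matters for workload partitioning is that adding or removing one member only remaps the keys adjacent to its points, instead of reshuffling everything.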
This overrides $executor_thread_pool_size with a global option that is
also used to set the number of parallel requests that can be made to
Gnocchi.
Change-Id: Iaa7e3d0739a63d571dd2afc262d191dffe5a0eef
This also switches the coordination to zake:// in tests so the code
actually works like it would with a production-ready coordinator.
Change-Id: I38f6a3389f70bed6b45fa7526a13d0484bfc9c3f
Change-Id: Ie3d478f1983d8c8d562e163ef452d16e261b8322
- jitter workers at startup so they don't all register with the
coordinator and queues at the exact same time.
Change-Id: Ie28c0ca77ffd7f981a33684d379dba5badca0688
Change-Id: Id41a33180bce50959350c8f851b80e00e25fc383
The pipeline dynamic refresh code has been removed. Ceilometer has
relied on the cotyledon library for a few releases, which provides
reload functionality by sending SIGHUP to the process. This achieves
the same feature while making sure the reload is explicit once the
file has been correctly and entirely written to disk, avoiding
failures from loading half-written files.
Change-Id: I1aca9cf4c3634e7c70eea651e1dd5c2f3ebfecc6
Allow users to add metrics to meters.yaml themselves. Reusing
http_control_exchanges makes it possible to extend to additional
exchanges.
Closes-Bug: #1656873
Change-Id: I196f8fb0e2aee8498309bb0cb1b3ec2b2e21e211
Some messages are missing white space or have extra white space; this
goes through errors of the same type and fixes them.
Change-Id: Id6b684a5c7538a8cbb5268501084fc53a56e05d8
we don't support large numbers of volume=1 meters anymore. the ones
that remain are intentional and should be disabled via the pipeline.
Change-Id: Ie571555449353f464412e71cd229a66544f9ae45
This removes the pollsters option from the config file sample
due to a duplicate option registration. This will be fixed later.
The exchange_control group in the config sample doesn't exist;
in reality the exchange options are in the DEFAULT group.
This removes remaining usage of cfg.CONF everywhere.
This adds all missing OPTS to the sample file.
Change-Id: I48c11ee7e1aae65847958b98532b3bdb48a3ceb5
Change-Id: Ieedc84ebe87dfb45941332db25e1fbb0739da455
Change-Id: I4e804973ec25dc7e50da956268306ba12ea0f7d2
Change-Id: Ic33ae353215b5ef66bdab2fdef6bce8abd5e9921
Change-Id: I97840aa9d1249deeba91dcdb6a5d23eca2fecdf1
Change-Id: I81fde04a671a39f73a315f3c600e288aa25885d9
this patch does multiple things:
- it ignores batching when pulling from the main queue to maintain as
much ordering as possible. this avoids related messages being split
across multiple batches and one batch starting much farther along
than another
- it sets pipeline processing listeners to a single thread. at this
stage the pipeline queue contains fewer messages, so it is very likely
that thread1 and thread2 would grab related messages and race to set
the cache
- adds sorting to the pipeline queue so if batching is enabled, we can
further ensure that messages are in order before processing.
- enables batching by default. (one thread per listener grabbing one
message at a time will be slow.)
- offers better batching for the direct-to-db dispatcher as resources
are grouped
Change-Id: Iac5b552bae1e73f93cbfc830b1e83510b1aceb9e
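The sorting and resource-grouping parts of this message could look roughly like the following (a hypothetical sketch; the field names `timestamp` and `resource_id` are assumptions about the sample payload, and the helper is not ceilometer's code):

```python
from itertools import groupby


def order_and_group(messages):
    """Hypothetical sketch: sort drained pipeline-queue messages so a
    batch is processed in order, then group them by resource so a
    direct-to-db dispatcher can write each resource's samples
    together."""
    ordered = sorted(messages,
                     key=lambda m: (m['resource_id'], m['timestamp']))
    return {rid: list(grp)
            for rid, grp in groupby(ordered,
                                    key=lambda m: m['resource_id'])}
```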
The current option is misnamed, as it does not enable any storing but
just the processing of events from the notification agent to the
collector.
This means that even if you set event_dispatchers=panko and forget to
set store_events=true, nothing will happen.
This patch enables event processing as soon as something is
configured in the pipeline.
Change-Id: I5a906684f6371b0548ac08cacc13aa238f780f78
This change replaces oslo.service with cotyledon.
Change-Id: I6eea5fcd26ade07fbcb5718b4285b7fa039a3b08
This change removes usage of eventlet timers.
This allows coordinator heartbeats/watchers to work correctly
when the main thread is stuck for any reason (IO, time.sleep, ...).
This also fixes a concurrency issue in the notification agent between
stop/reload_pipeline/refresh_agent, which all manipulate listeners.
For example, a listener stopped by stop() could be restarted by
reload_pipeline or refresh_agent. Now we use the coord_lock to
protect the listener manipulations and ensure we are not in a shutdown
process when we restart one.
This bug couldn't occur with greenlets because we haven't
monkeypatched system calls for a while now, and none of these methods
ran concurrently. But replacing greenlets with real threads has
exposed the bug.
Closes-Bug: #1582641
Change-Id: I21c3b953a296316b983114435fcbeba1e29f051e
By default oslo.config sets the default value to None. There is no
need to do this explicitly.
TrivialFix
Change-Id: I69adadc2196a119eb0661ae7a7fd6af607e61689
The "topic" parameter of the __init__ method of Notifier has been
deprecated and will be removed. See change[1].
[1] Id89957411aa219cff92fafec2f448c81cb57b3ca
Change-Id: If41b0aa4f9afc90d049063bf509723c3a8295db7
Change-Id: Ibaf3bb7af28a5be62eb97b6a423ed6acbcc4c651
We have come across a problem when following the example for
messaging_urls: the credentials for all hosts except the first fall
back to the defaults. This patch updates the option's help string to
make it clear and correct.
1. change 'transport' to 'rabbit' in the example
transport is a term in oslo.messaging, not a real type of
oslo.messaging driver; as an example, we should use a real one.
2. add credentials for each node
when there are multiple nodes specified in a url, each node should
have its own credentials; oslo.messaging doesn't support reusing the
first one for all.[1]
3. add an explanation of the usage scenario
normally there is no need to set messaging_urls; it is useful when
there are dedicated messaging nodes for each service.
[1]: http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo_messaging.TransportURL.parse
Change-Id: If077a4b16a292be740aa72329fcda8c7a3973dc7
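Following the three points above, a corrected example would presumably look like this (hedged: host names and credentials are placeholders, and treating `messaging_urls` as a multi-valued option, hence the repeated lines, is an assumption based on the message):

```ini
[notification]
# one entry per dedicated messaging cluster; each host carries its own
# credentials, since the first host's are not reused for the others
messaging_urls = rabbit://nova_user:nova_pass@nova-rabbit-host:5672/
messaging_urls = rabbit://neutron_user:neutron_pass@neutron-rabbit-host:5672/
```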