| Commit message | Author | Age | Files | Lines |
|
The default values needed for Trove's implementation of CORS
middleware have been moved from paste.ini into a common
set_defaults method, invoked on load. Unlike similar patches
on other services, this patch does not include config-generation
hooks, as Trove does not use them yet.
Change-Id: Id8e04249498f63e42dadcacbd2c08b525adc0958
Closes-Bug: 1551836
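The pattern can be sketched as follows (a hedged sketch with illustrative names; the real implementation uses oslo.middleware's CORS set_defaults support together with oslo.config):

```python
# Sketch of the set_defaults pattern: defaults are baked into code and
# applied at load time instead of being duplicated in paste.ini.
# All names and header values here are illustrative.

CORS_DEFAULTS = {
    "allow_headers": ["X-Auth-Token", "X-OpenStack-Request-ID"],
    "expose_headers": ["X-Auth-Token", "X-OpenStack-Request-ID"],
    "allow_methods": ["GET", "PUT", "POST", "DELETE", "PATCH"],
}

def set_defaults(conf):
    """Fill in any CORS option the operator did not set explicitly."""
    for key, value in CORS_DEFAULTS.items():
        conf.setdefault(key, list(value))
    return conf

# The operator overrides only what they need; defaults fill the rest.
conf = set_defaults({"allow_methods": ["GET"]})
```

The point of the pattern is that operators no longer need to copy boilerplate into paste.ini; sane defaults travel with the code.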
|
Implement backup and restore functionality for the Cassandra datastore.
We implement a full backup strategy using the Nodetool
(http://goo.gl/QtXVsM) utility.
Snapshots:
Nodetool can take a snapshot of one or more keyspace(s).
Snapshot(s) will be stored in the data directory tree:
'<data dir>/<keyspace>/<table>/snapshots/<snapshot name>'
A snapshot can be restored by moving all *.db files from a snapshot
directory to the respective keyspace, overwriting any existing files.
NOTE: It is recommended to include the system keyspace in the backup.
Keeping the system keyspace will reduce the restore time
by avoiding the need to rebuild indexes.
The Backup Procedure:
1. Clear existing snapshots.
2. Take a snapshot of all keyspaces.
3. Collect all *.db files from the snapshot directories and package them
into a single TAR archive.
Transform the paths such that the backup can be restored simply by
extracting the archive right to an existing data directory
(i.e. place the root into the <data dir> and
remove the 'snapshots/<snapshot name>' portion of the path).
The data directory itself is not included in the backup archive
(i.e. the archive is rooted inside the data directory).
This is to make sure we can always restore an old backup
even if the standard guest agent data directory changes.
Attempt to preserve access modifiers on the archived files.
Assert the backup is not empty, as there should always be
at least the system keyspace. Fail if there is nothing to back up.
4. Compress and/or encrypt the archive as required.
5. This archive is streamed to the storage location.
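The path transformation in step 3 can be sketched as a pure function (illustrative only; the real agent applies an equivalent transform while building the tar archive):

```python
import re

def to_restore_path(snapshot_file):
    """Map '<keyspace>/<table>/snapshots/<name>/<file>.db' (relative to
    the data dir) to '<keyspace>/<table>/<file>.db', so that extracting
    the archive into the data directory restores files in place."""
    return re.sub(r"/snapshots/[^/]+/", "/", snapshot_file)

restored = to_restore_path("system/local/snapshots/bkp1/data.db")
# → system/local/data.db
```

With this transform the archive is rooted inside the data directory, matching the note above about surviving data-directory relocation.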
The Restore Procedure:
1. Create a new data directory if it does not exist.
2. Unpack the backup to that directory.
3. Update ownership of the restored files to the Cassandra user.
Notes on 'cluster_name' property:
Cassandra has a concept of clusters. Clusters are composed of
nodes (instances). All nodes belonging to one cluster must have the
same 'cluster_name' property. This prevents nodes from different logical
clusters from accidentally talking to each other.
The cluster name can be changed in the configuration file.
It is also stored in the system keyspace.
When the Cassandra service boots up it verifies that the cluster name
stored in the database matches the name in the configuration file and
fails to start if they do not match. This is to prevent the operator from
accidentally launching a node with data from another cluster.
The operator has to update the configuration file.
Similarly, when a backup is restored it carries the original cluster
name with it. We have to update the configuration file to use the old
name.
When a node gets restored it will still belong to the original cluster.
Notes on superuser password reset:
The database is no longer wide open and requires password authentication.
The 'root' password stored in the system keyspace
needs to be reset before we can start up with restored data.
A general password reset procedure is:
- disable user authentication and remote access
- restart the service
- update the password in the 'system_auth.credentials' table
- re-enable authentication and make the host reachable
- restart the service
Note: The superuser-password-reset and related methods that
potentially expose the database contents are intentionally
decorated with '_' and '__' to discourage a caller from
using them unless absolutely necessary.
Additional changes:
- Adds backup/restore namespaces to the sample config
file 'trove-guestagent.conf.sample'.
We include the other datastores too
for the sake of consistency.
(Auston McReynolds, Jul 6, 2014)
Implements: blueprint cassandra-backup-restore
Co-Authored-By: Denis Makogon <dmakogon@mirantis.com>
Change-Id: I3671a737d3e71305982d8f4965215a73e785ea2d
|
oslo.messaging has deprecated the use of messaging config settings,
specifically the rabbit_* settings, in the [DEFAULT] section. This commit
moves the rabbit settings to an [oslo_messaging_rabbit] section in
each of the relevant trove service sample config files.
Change-Id: Ia869768102a8a841313cd7e0fd8a9fdab257d3e3
Closes-Bug: #1528391
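The resulting layout in the sample config files looks like this (host and credential values are illustrative):

```ini
[DEFAULT]
# rabbit_* settings no longer live here

[oslo_messaging_rabbit]
rabbit_host = 10.0.0.1
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
```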
|
CORS middleware's latent configuration feature, new in 3.0.0,
allows adding headers that apply to all valid origins.
This patch adds headers commonly used in OpenStack to Trove's
paste pipeline, so that operators do not have to be aware of
additional configuration magic to ensure that browsers can talk
to the API.
For more information:
http://docs.openstack.org/developer/oslo.middleware/cors.html#configuration-for-pastedeploy
Change-Id: Idf2cd7a0d0d701002f2c1f178475da39ae1a9caf
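A sketch of the latent configuration as it appears in the paste config (the header and method lists are illustrative):

```ini
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = trove
# Latent properties apply to every origin the operator later allows:
latent_allow_headers = X-Auth-Token, X-OpenStack-Request-ID
latent_expose_headers = X-Auth-Token, X-OpenStack-Request-ID
latent_allow_methods = GET, PUT, POST, DELETE, PATCH
```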
|
To properly support different storage strategies, the taskmanager
needs to be able to access the active storage strategy to determine
things like the container name.
This patch moves the storage strategy code from guestagent
to common.
Change-Id: If81100cc88c6b883492c9f7b1a5e2437ba155eda
Closes-Bug: 1525283
|
Starting with the osprofiler 0.3.1 release there is no need to set the
HMAC_KEYS and ENABLED arguments in the api-paste.ini file; they can be
set in the trove.conf configuration file instead.
Change-Id: Icbeb8bb09536bad88907fe590fa70386199ce03d
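A sketch of the trove.conf equivalent (option names depend on the osprofiler release, so treat them as illustrative):

```ini
[profiler]
enabled = True
hmac_keys = SECRET_KEY
```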
|
The value is not in the proper format, and it led to an incorrect
value being parsed.
Change-Id: I36483561f80d5d5dec9a30bcb9f003df96250c3f
Closes-Bug: #1506831
|
This adds the CORS support middleware to Trove, allowing a deployer
to optionally configure rules under which a JavaScript client may
relax the same-origin policy and access the API directly.
For trove, the paste.ini method of deploying the middleware was
chosen, because it needs to be able to annotate responses created
by keystonemiddleware. If the middleware were explicitly included
as in the previous patch, keystone would reject the request before
the cross-domain headers could be annotated, resulting in an
error response that was unreadable by the user agent.
OpenStack Spec:
http://specs.openstack.org/openstack/openstack-specs/specs/cors-support.html
Oslo_Middleware Docs:
http://docs.openstack.org/developer/oslo.middleware/cors.html
Cloud Admin Guide Documentation:
http://docs.openstack.org/admin-guide-cloud/cross_project_cors.html
Change-Id: Ic55305607e44069d893baf2a261d5fe7da777303
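Schematically, the paste deployment places cors ahead of authtoken (the pipeline below is illustrative, not Trove's exact pipeline):

```ini
[pipeline:troveapi]
# cors runs first so that even requests rejected by authtoken
# get responses annotated with the cross-origin headers
pipeline = cors request_id authtoken troveapp

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = trove
```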
|
ignore_dbs and ignore_users (for MySQL) were still in the DEFAULT section
of the config. Moved them to the 'mysql' section and the 'percona'
section. The sample config file(s) have been updated as well.
Change-Id: I3a41bcb011a76343afa3bcc30013cb835e456950
Closes-Bug: #1465242
Closes-Bug: #1480448
Closes-Bug: #1480447
Closes-Bug: #1374004
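Sketch of the move (the values are typical defaults, shown for illustration):

```ini
# Before (deprecated):
[DEFAULT]
ignore_users = os_admin, root
ignore_dbs = mysql, information_schema, performance_schema

# After:
[mysql]
ignore_users = os_admin, root
ignore_dbs = mysql, information_schema, performance_schema

[percona]
ignore_users = os_admin, root
ignore_dbs = mysql, information_schema, performance_schema
```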
|
Some variables which were actually per-tenant were
incorrectly named with a _user suffix. Corrected this
confusion by deprecating the old names
and providing new per-tenant names.
Closes-Bug: #1232969
Change-Id: I541ed990c0cdd40b3805d2e2a166363d9ff2ad04
|
For more details about why this was done (by the oslo log team), see
references provided below. The change here just removes the entry from
the test config file to avoid the error.
https://review.openstack.org/#/c/206437/3
Change-Id: I941ef163561b80539397788b65aae7d4548bac49
Closes-Bug: #1507217
|
Performs backup using the Redis client to persist data to the file
system, then streams the result to swift.
Performs restore by replacing the data file with the Swift backup
and starting the server again in the correct manner.
Note: Running the int-tests requires that volume_support is set
to false in the test.conf file.
To run:
./redstack install
./redstack kick-start redis
(vi /etc/trove/test.conf and change volume_support to false)
./redstack int-tests --group=backup (or --group=redis_supported)
Co-Authored-by: hardy.jung <hardy.jung@daumkakao.com>
Co-Authored-by: Peter Stachowski <peter@tesora.com>
Depends-On: I633273d438c22f98bef2fd1535730bcdb5e5cff0
Implements: blueprint redis-backup-restore
Change-Id: I1bd391f8e3f7de12396fb41000e3c55be23c04ee
|
The int-tests in Trove are very MySQL-specific, which makes it difficult
to reuse code for other datastores. This changeset breaks them down
into 'groups' and 'runners'. Runners can be overridden to add
datastore-specific handling/tests. This should allow most generic
code to be reused across datastores, while also providing for
datastore-specific enhancements.
Runner implementations are stored in a new package
'trove.tests.scenario.runners'. A datastore-specific implementation can
be added to the appropriate runner module file. Its name has to match
'PrefixBaseRunnerClassName' pattern, where 'BaseRunnerClassName' is the
name of the default runner and 'Prefix' is the datastore's manager
name with the first letter capitalized.
Example:
Given the default implementation for negative cluster tests in
'trove.tests.api.runners.NegativeClusterActionsGroup', one can
provide a custom implementation for MongoDB (with manager 'mongodb')
in 'trove.tests.api.runners.MongodbNegativeClusterActionsRunner'.
This initial changeset adds tests for basic actions on instances
and clusters. Some basic replication tests were also migrated.
The concept of a helper class for datastore-specific activities
was also created. This makes it easy to have tests use standard
methods of adding data and verifying that the datastore behaves
as it should.
Vertica was refactored to use the new infrastructure.
Running the tests can be accomplished by specifying one of the
new groups in int-tests (see int_tests.py for the complete list):
./redstack kick-start mongodb
./redstack int-tests --group=instance_actions --group=cluster
or
./redstack int-tests --group=mongodb_supported (to run all
tests supported by the MongoDB datastore)
As with the original int-tests, the datastore used is the one
referenced in the test configuration file (test.conf) under the
key dbaas_datastore. This key is automatically set when
kick-start is run.
Additional Notes:
Also temporarily disabled volume size check in
instances tests.
It is supposed to assert that the used space on the
Trove volume is less than the size of the volume.
It however often fails because 'used' > 'size'.
From inspection of the instance it appears that the reported
'used' space is from the root volume instead of the
attached Trove volume. Plus it sometimes returns an int instead of a float.
Change-Id: I34fb974a32dc1b457026f5b9d98e20d1c7219009
Authored-By: Petr Malik <pmalik@tesora.com>
Co-Authored-By: Peter Stachowski <peter@tesora.com>
|
The rsdns service is no longer needed since Trove now supports Designate.
It is not being tested in the gate and is currently unsupported.
Remove other rsdns-related files and references.
Change-Id: I44009dace44afb5467c51def33c794641ffa33c0
Closes-Bug: #1454028
|
The guestagent-related settings are in the database section of the
taskmanager.conf.sample file. If a user were to use this sample,
the guestagent-related settings would not be read by the taskmanager
service, because it expects these settings to be
in the default section. This fix moves the guestagent-related
settings to the default section.
Change-Id: I8dbdf7abcfd5f2341831cc6daaf5cb14d8d42d61
Closes-Bug: #1459415
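Schematically (the specific option names are illustrative, not quoted from the sample file):

```ini
# Before: guestagent-related settings hidden in the wrong section
[database]
agent_heartbeat_time = 10
agent_call_high_timeout = 60

# After: the section the taskmanager actually reads them from
[DEFAULT]
agent_heartbeat_time = 10
agent_call_high_timeout = 60
```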
|
The task manager can be configured with periodic events
that check for the existence of running nova instances.
The events run via a separate admin connection to nova.
Taskmanager.Manager populates the admin client context
with the tenant name provided by the nova_proxy_admin_tenant_name
parameter instead of the uuid, which results in an invalid
management url composed of the two parameters:
<nova_compute_url>/<nova_proxy_admin_tenant_name>.
Changing the tenant name to the tenant id results in a valid endpoint.
DocImpact
deprecates option 'nova_proxy_admin_tenant_name'
in favor of 'nova_proxy_admin_tenant_id'
Change-Id: Ia1315e41288ab1b24ac402bad15176cb1ae0e5cd
Co-Authored-By: Li Ma <skywalker.nick@gmail.com>
Closes-Bug: #1289101
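The faulty endpoint composition can be illustrated with a small sketch (a hypothetical helper; Trove's actual client setup differs):

```python
def management_url(nova_compute_url, tenant):
    """Sketch of how the admin client's management url is composed:
    <nova_compute_url>/<tenant>. Passing the tenant *name* produces
    an endpoint nova does not recognize; the tenant *id* yields a
    valid one. (Illustrative helper, not Trove code.)"""
    return "%s/%s" % (nova_compute_url.rstrip("/"), tenant)

# Invalid: composed with nova_proxy_admin_tenant_name
bad = management_url("http://nova:8774/v2", "trove-admin-tenant")
# Valid: composed with nova_proxy_admin_tenant_id (value illustrative)
good = management_url("http://nova:8774/v2",
                      "3f5a0b4c0e9d4c26a41d5d8c1c6b7e2a")
```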
|
A specification for this change was submitted for review in
https://review.openstack.org/#/c/151279
- HP Vertica Community Edition supports up to a 3-node cluster.
- HP Vertica requires a minimum of 3 nodes to achieve fault tolerance.
- This patchset provides the ability to launch an HP Vertica 3-node cluster.
- The cluster-show API also lists the IPs of the underlying instances.
Code Added:
- Added API strategy, taskmanager strategy, and guestagent strategy.
- Included unit tests.
Workflow for building Vertica cluster is as follows:
- Guest instances are booted using new API strategy which then
sends control to taskmanager strategy for further communication
and guestagent API execution.
- Once the guest instances are active in nova,
they receive the "prepare" message and the following steps are performed:
- Mount the data disk on device_path.
- Check if vertica packages have been installed, install_if_needed().
- Run Vertica pre-install test, prepare_for_install_vertica().
- Get to a status BUILD_PENDING.
- Cluster-Taskmanager strategy waits for all the instances
in cluster to get to BUILD_PENDING state.
- Once all instances in a cluster get to BUILD_PENDING state, the
taskmanager first configures passwordless ssh for the os users (root, dbadmin)
with the help of the guestagent APIs get_keys and authorize_keys.
- Once passwordless ssh has been configured, the taskmanager calls
install_cluster guestagent API, which installs cluster on
member instances and creates a database on the cluster.
- Once this method finishes its job, the taskmanager calls
another guestagent API, cluster_complete, to
notify the cluster members of the completion of cluster creation.
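The "wait for all instances to reach BUILD_PENDING" step can be sketched as a simple poll (illustrative only; names, the poll interval, and the timeout are assumptions, not Trove's actual taskmanager code):

```python
import time

def wait_for_all(instances, status="BUILD_PENDING", timeout=600):
    """Block until every cluster member reports the given status,
    or raise when the deadline passes. (Sketch, not Trove code.)"""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(i.status == status for i in instances):
            return True
        time.sleep(1)  # assumed poll interval
    raise TimeoutError("cluster members never reached %s" % status)
```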
New Files:
- A new directory, vertica, has been created, for api, taskmanager,
guestagent strategies under
trove/common/strategies/cluster/experimental.
- Unit-tests for cluster-controller, api and taskmanager code.
DocImpact
Change-Id: Ide30d1d2a136c7e638532a115db5ff5ab2a75e72
Implements: blueprint implement-vertica-cluster
|
Update trove configuration files to use the current convention for db
options:
- deprecate the use of [DEFAULT] sql_connection, sql_idle_timeout,
sql_query_log
- add support for [database] connection, idle_timeout, query_log
- update sample/test conf files to use the new convention
DocImpact: Change to conf options in Install Guide - Install the
Database service.
This has become more important, as the recent changes to devstack
now use the new [database] section, and the gate is failing due to
that.
Note: On my quick look at the code, there's more work needed to use
oslo_db properly, and it might be better to do that out-of-band
of this fix.
Authored-By: Greg Lucas <glucas@tesora.com>
Co-Authored-By: Peter Stachowski <peter@tesora.com>
Closes-Bug: #1308411
Closes-Bug: #1434856
Change-Id: Ic9e5b20e21d09c4b1d49df660b89decf094440bb
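Sketch of the convention change (the connection string is illustrative):

```ini
# Deprecated:
[DEFAULT]
sql_connection = mysql://root:secret@10.0.0.1/trove
sql_idle_timeout = 3600

# Current convention:
[database]
connection = mysql://root:secret@10.0.0.1/trove
idle_timeout = 3600
```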
|
*) Add osprofiler wsgi middleware
This middleware is used for 2 things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key.
2) It starts tracing when proper trace headers are present
and adds the first wsgi trace point, with info about the HTTP request.
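Check (1) amounts to an HMAC comparison, which can be sketched with the standard library (illustrative; osprofiler's web middleware implements the real check):

```python
import hashlib
import hmac

def trusted(trace_info, signature, hmac_key):
    """The trace request is honoured only when the caller signed it
    with the shared secret HMAC key. (Sketch, not osprofiler code.)"""
    expected = hmac.new(hmac_key.encode(), trace_info,
                        hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature)

key = "SECRET_KEY"
payload = b'{"base_id": "42", "parent_id": "42"}'
signature = hmac.new(key.encode(), payload, hashlib.sha1).hexdigest()
ok = trusted(payload, signature, key)           # accepted
bad = trusted(payload, signature, "wrong-key")  # rejected
```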
*) Add initialization of osprofiler at start of service
Initialize osprofiler with oslo.messaging notifier which
is used to send notifications to Ceilometer.
*) Use a profile-enabled context in services' rpc interaction
Change the context serializer to pass the profile context to services;
rpc servers will use the trace info to initialize their profile context
to continue profiling the request as a transaction.
*) Add tracing on service manager and sqlalchemy db engine object
NOTE to test this:
You should put to localrc:
CEILOMETER_NOTIFICATION_TOPICS=notifications,profiler
ENABLED_SERVICES+=,ceilometer-acompute,ceilometer-acentral
ENABLED_SERVICES+=,ceilometer-anotification,ceilometer-collector
ENABLED_SERVICES+=,ceilometer-alarm-evaluator,ceilometer-alarm-notifier
ENABLED_SERVICES+=,ceilometer-api
Run any command with --profile <SECRET_KEY>
$ trove --profile <SECRET_KEY> list
# it will print <Trace ID>
Get pretty HTML with traces:
$ osprofiler trace show --html <Trace ID>
Note: Trace showing can be executed with the admin account only.
The change to enable Trove exchange in ceilometer has been merged:
Idce1c327c6d21a767c612c13c1ad52a794017d71 .
The change to enable profile in python-troveclient has been merged:
I5a76e11d428c63d33f6d2c2021426090ebf8340c
We prepared a common BP in oslo-spec as the integration change is
similar in all projects: I95dccdc9f274661767d2659c18b96da169891f30
Currently there are 2 other projects using osprofiler: Glance &
Cinder, and some others are a work in progress.
Change-Id: I580cce8d2b3c4ec9ce625ac09de6f14e1249f6f5
Signed-off-by: Zhi Yan Liu <zhiyanl@cn.ibm.com>
|
Introduce a configuration option for a directory on the guest where the
taskmanager should inject configuration files.
During instance creation inject the guest_info and trove-guestagent
conf files to the 'injected_config_location'. The default location is
/etc/trove/conf.d.
Also:
- Change the default value for 'guest_config' to remove the risk of
overwriting a guest image's conf file with the sample.
- Add trove-guestagent.conf injection to heat template.
- Add new 'guest_info' option that defaults to "guest_info.conf".
Depends-On: I1dffd373da722af55bdea41fead8456bb60c82b2
Co-Authored-By: Denis Makogon <dmakogon@mirantis.com>
Co-Authored-By: Duk Loi <duk@tesora.com>
DocImpact: This change introduces a new option with a default that
affects existing guest images. The guestagent init script for any
existing guest image will need to be modified to read the conf
files from /etc/trove/conf.d. For backwards compatibility set the
injected_config_location to /etc/trove and guest_info to
/etc/guest_info.
Closes-Bug: 1309030
Change-Id: I1057f4a1b6232332ed4b520dbc7b3bbb04145f73
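For reference, the relevant options (the backwards-compatible values from the DocImpact note are shown commented out):

```ini
[DEFAULT]
# New defaults introduced by this change:
injected_config_location = /etc/trove/conf.d
guest_info = guest_info.conf

# Backwards-compatible settings for existing guest images:
# injected_config_location = /etc/trove
# guest_info = /etc/guest_info
```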
|
The Problem:
Redstack sets 'GUEST_LOGDIR' to the 'log_dir' value from
'etc/trove/trove-guestagent.conf.sample' which happens to be '/tmp/'.
Aside from not being the canonical log file destination, the
temporary directory in Linux is subject to the so-called
'restricted deletion' policy, which dictates that only file owners
(and the directory owner) can delete the files, irrespective of
other access modifiers on the directory.
Redstack changes the owner of 'GUEST_LOGDIR' (default='/tmp')
to the 'trove' user. This may easily mask any potential issues with
the 'restricted deletion' that would only show up later on
production systems where '/tmp' is commonly owned by the root
(see bug/1423759).
The Solution:
Change the default value of 'log_dir' to a directory
which is not subject to 'restricted deletion'.
'/var/log/trove/' was chosen as it is a common place for
trove-related log files on the guestagent.
Change-Id: I39d801a7e19f329c129a0c6df0c3987049d16394
Closes-Bug: 1423760
Related-Bug: 1423759
Depends-On: I9dd6ed543a01ecc4f84065ea4bf3737960de6e24
|
This allows operating on Nova flavors with non-integer ids
by adding a new str_id field to the view that always contains the
unicode flavor id. The current id field will now contain a value
only when the flavor's id can be cast to int, otherwise it will be
None. Validation of flavor id when querying a flavor or creating an
instance has been updated to include non-empty strings in addition to
integers.
This will require a patch to python-troveclient to properly fallback
to str_id in absence of the integer id:
https://review.openstack.org/#/c/123301/
Change-Id: Ie9cfefc6127bc76783cdf9668636342d7590b308
Closes-bug: #1333852
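The id/str_id behavior can be sketched as follows (illustrative, not the actual view code):

```python
def flavor_view_ids(flavor_id):
    """'str_id' always carries the flavor id as a string; 'id' is
    populated only when the id can be cast to int, else None.
    (Sketch of the behavior described above, not Trove's view.)"""
    try:
        int_id = int(flavor_id)
    except (TypeError, ValueError):
        int_id = None
    return {"id": int_id, "str_id": str(flavor_id)}

numeric = flavor_view_ids("7")        # integer-castable id
custom = flavor_view_ids("m1.small")  # non-integer flavor id
```

A client falls back to str_id whenever id is None, as the troveclient patch referenced above does.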
|
With the oslo.messaging adoption [0], trove config template files contain
some inconsistent and duplicated control_exchange options.
[0] Change Ibd886f3cb4a45250c7c434b3af711abee266671c
Change-Id: I024101edee1cf074181e8d58e69033bc509b582c
Signed-off-by: Zhi Yan Liu <zhiyanl@cn.ibm.com>
|
Port Trove to use oslo messaging library instead of obsolete messaging
code from oslo incubator.
Change-Id: Ibd886f3cb4a45250c7c434b3af711abee266671c
Implements: blueprint rpc-versioning
|
They can/should be set in the trove.conf instead, to
centralize all user-preserved/editable changes in the main
configuration file, and allow the ability to track api-paste.ini
unmodified from master.
Change-Id: I9eae06f564c4f99255fc627dbd26006d9048be46
|
This code adds a feature to the tests where all of the example
snippets are generated and then validated. Tests fail if the new
examples don't match the old ones, meaning a dev changing the API
must update the snippets, triggering a conversation about the changes
during the pull request.
Implements: blueprint example-snippet-generator
Change-Id: I5f1bfd47558a646a56e519614ae76a55759a4422
|
The current rate limit is 200 requests per minute. I'm getting
consistent tox failures as the machine is exceeding this rate. The fix
impacts only test code and adjusts the limit and a test that has a
hard-coded reference to the old limit. Why 500 you may ask? Because
600 worked and 450 failed consistently with the rate limit error.
In addition, the change addresses the fact that some test
configuration values are duplicated in the test; the change makes the
test reference the configuration value.
Change-Id: I4bb290d8de6253d65b7877c743bb288ee2bce536
Closes-Bug: #1376689
Closes-Bug: #1378932
|
The previous event simulator simulated time by making all tasks that
would have been launched as threads run in the same thread. That only
worked to a point but didn't properly simulate more complex behaviors
such as clustering.
This new one handles things properly by still running tasks in
eventlet's greenthreads but forcing them to only run one at a time.
Change-Id: Ife834e8d55193da8416ba700e55e7b0c2496f532
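The "one at a time" trick can be illustrated with ordinary threads and a lock (a sketch; the simulator itself uses eventlet greenthreads):

```python
import threading

# Tasks still run on real threads, but a shared lock forces them to
# execute one at a time, keeping simulated time deterministic.
_one_at_a_time = threading.Lock()
order = []

def task(name):
    with _one_at_a_time:
        order.append(name + ":start")
        order.append(name + ":end")

threads = [threading.Thread(target=task, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Whichever task acquires the lock first runs to completion before
# the other one starts.
```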
|
Restoring an instance from a backup can take significantly longer than
just creating an instance. Introduce a new timeout for the restore case
with a higher default value
Change-Id: I44af03e9b2c966dd94fcfd0ff475f112ac21be5b
Closes-Bug: 1356645
|
There is no way to tell how long the snapshot for replication
will take, and we have no good way to poll for the slave state.
Eventually, we will need to have an intelligent poll (perhaps
based on guest heartbeats), but in the meantime we will have
the snapshot use a configurable timeout which can be set
as needed, and independently of the agent_call timeouts.
Co-Authored-By: Nikhil Manchanda <SlickNik@gmail.com>
Change-Id: I6316d748e91d1ec3eebe25a14bb43fbfe10db669
Closes-bug: 1362310
|
Adds a clusters API, with MongoDB sharding as the first implementation.
Co-Authored-By: Michael Yu <michayu@ebaysf.com>
Co-Authored-By: Mat Lowery <mlowery@ebaysf.com>
Co-Authored-By: rumale <rumale@ebaysf.com>
Co-Authored-By: Timothy He <the@ebaysf.com>
Partially implements: blueprint clustering
Change-Id: Icab6fc3baab72e97f3231eaa4476b56b8dafb2a3
|
Cleaned out the trove-guestagent.conf to only contain values that make
sense for the guest agent. There were also values that were different
between the sample conf, and the defaults in cfg.py, which were fixed
to be consistent, so as to have a single set of sane defaults.
Closes bug: 1336618
Change-Id: Ib3823856c288971665fa29489c4ffc5b899ccbaf
|
The current trove implementation loads trove API extensions by searching
on a single file path which is exposed as a conf property. This results
in a less than optimal approach from a consumer extension POV. This change
replaces the single-extension-path approach with dynamic loading using
stevedore. Consumers can now bind into the API extensions using the
'trove.api.extensions' entry point; a standard means to define extension
points. Moreover, this change refactors the
trove.openstack.common.extensions logic into trove.common.extensions.
In addition this change includes base unit tests to ensure the
existing trove proper extension points are loaded and that some basic
checks are in place W/R/T validating trove extension points.
Change-Id: Id3e712b536aef3bc9c9d1603367cdc0d4816b970
Implements: blueprint dynamic-extension-loading
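Entry-point based discovery, which stevedore wraps, can be sketched with the standard library (the function name, fallback, and the example entry-point target are illustrative):

```python
from importlib.metadata import entry_points

def load_extensions(group="trove.api.extensions"):
    """Load every object registered under the given entry-point group.
    (Sketch only; Trove itself uses stevedore, which wraps this same
    mechanism and adds error handling.)"""
    try:
        eps = entry_points(group=group)       # Python 3.10+
    except TypeError:
        eps = entry_points().get(group, [])   # older importlib.metadata
    return [ep.load() for ep in eps]

# A consumer would publish an extension in its setup.cfg, e.g.:
# [entry_points]
# trove.api.extensions =
#     mgmt = my_extension.module:MgmtExtension
extensions = load_extensions()
```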
|
Implement a new strategy for replication:
- Implement replication strategy for mysql binlog replication
- Supporting methods in mysql service
- Implement API methods in mysql manager
- Define configuration settings for replication
Co-authored by: Nikhil Manchanda <SlickNik@gmail.com>
Co-authored by: Greg Lucas <glucas@tesora.com>
Partially implements: blueprint replication-v1
Change-Id: I70f0b5c37fe3c2d42426029bb627c141965eb524
|
Implements blueprint: per-datastore-volume-support
Introduces volume support on a datastore basis using
config values in datastore config groups.
DocImpact: New config values device_path and volume_support
for each datastore have been added, instead of the DEFAULT
conf section.
Change-Id: I871cbed1f825d719b189f71a3ff2c748fb8abdc0
|
Change the default extension path in the sample config files, as these
take precedence over the values in common/cfg.py.
Change-Id: I3c473fb9cd7e81341a1963a35834ef4c2fd69717
Closes-Bug: 1316195
|