-rw-r--r--  README.rst                                            7
-rw-r--r--  doc/source/index.rst                                  7
-rw-r--r--  doc/source/user/using.rst                             2
-rw-r--r--  doc/source/user/vendor-support.rst                  354
-rw-r--r--  lower-constraints.txt                                 1
-rw-r--r--  os_client_config/_log.py                             28
-rw-r--r--  os_client_config/cloud_config.py                    392
-rw-r--r--  os_client_config/config.py                         1224
-rw-r--r--  os_client_config/defaults.py                         34
-rw-r--r--  os_client_config/exceptions.py                        4
-rw-r--r--  os_client_config/schema.json                        121
-rw-r--r--  os_client_config/tests/base.py                        2
-rw-r--r--  os_client_config/tests/test_cloud_config.py          73
-rw-r--r--  os_client_config/tests/test_config.py                10
-rw-r--r--  os_client_config/tests/test_json.py                  62
-rw-r--r--  os_client_config/vendor-schema.json                 223
-rw-r--r--  os_client_config/vendors/__init__.py                 24
-rw-r--r--  os_client_config/vendors/auro.json                   11
-rw-r--r--  os_client_config/vendors/betacloud.json              14
-rw-r--r--  os_client_config/vendors/bluebox.json                 7
-rw-r--r--  os_client_config/vendors/catalyst.json               15
-rw-r--r--  os_client_config/vendors/citycloud.json              19
-rw-r--r--  os_client_config/vendors/conoha.json                 14
-rw-r--r--  os_client_config/vendors/dreamcompute.json           11
-rw-r--r--  os_client_config/vendors/dreamhost.json              13
-rw-r--r--  os_client_config/vendors/elastx.json                 10
-rw-r--r--  os_client_config/vendors/entercloudsuite.json        16
-rw-r--r--  os_client_config/vendors/fuga.json                   15
-rw-r--r--  os_client_config/vendors/ibmcloud.json               13
-rw-r--r--  os_client_config/vendors/internap.json               17
-rw-r--r--  os_client_config/vendors/otc.json                    13
-rw-r--r--  os_client_config/vendors/ovh.json                    15
-rw-r--r--  os_client_config/vendors/rackspace.json              29
-rw-r--r--  os_client_config/vendors/switchengines.json          15
-rw-r--r--  os_client_config/vendors/ultimum.json                11
-rw-r--r--  os_client_config/vendors/unitedstack.json            16
-rw-r--r--  os_client_config/vendors/vexxhost.json               16
-rw-r--r--  os_client_config/vendors/zetta.json                  13
-rw-r--r--  releasenotes/notes/thin-shim-62c8e6f6942b83a5.yaml   13
-rw-r--r--  requirements.txt                                      5
40 files changed, 101 insertions, 2788 deletions
diff --git a/README.rst b/README.rst
index 35ff07b..58f39b0 100644
--- a/README.rst
+++ b/README.rst
@@ -5,6 +5,11 @@ os-client-config
.. image:: http://governance.openstack.org/badges/os-client-config.svg
:target: http://governance.openstack.org/reference/tags/index.html
+.. warning::
+ `os-client-config` has been superseded by `openstacksdk`_. While
+ `os-client-config` will continue to exist, it is highly recommended that
+ users transition to using `openstacksdk`_ directly.
+
`os-client-config` is a library for collecting client configuration for
using an OpenStack cloud in a consistent and comprehensive manner. It
will find cloud config for as few as 1 cloud and as many as you want to
@@ -23,3 +28,5 @@ Source
* Documentation: http://docs.openstack.org/os-client-config/latest
* Source: http://git.openstack.org/cgit/openstack/os-client-config
* Bugs: http://bugs.launchpad.net/os-client-config
+
+.. _openstacksdk: http://docs.openstack.org/openstacksdk/latest
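The warning above points existing users at openstacksdk. As a minimal sketch of the replacement entry point (the cloud name `mycloud` is illustrative and would need to exist in clouds.yaml)::

    import openstack

    # openstacksdk reads the same clouds.yaml / OS_* environment configuration
    # that os-client-config handles and returns a Connection directly.
    conn = openstack.connect(cloud='mycloud')
    for server in conn.compute.servers():
        print(server.name)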
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 5a407ad..b2e3d12 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -5,6 +5,13 @@ os-client-config
.. image:: http://governance.openstack.org/badges/os-client-config.svg
:target: http://governance.openstack.org/reference/tags/index.html
+.. warning::
+ `os-client-config` has been superseded by `openstacksdk`_. While
+ `os-client-config` will continue to exist, it is highly recommended that
+ users transition to using `openstacksdk`_ directly.
+
+.. _openstacksdk: https://docs.openstack.org/openstacksdk/latest
+
`os-client-config` is a library for collecting client configuration for
using an OpenStack cloud in a consistent and comprehensive manner. It
will find cloud config for as few as 1 cloud and as many as you want to
diff --git a/doc/source/user/using.rst b/doc/source/user/using.rst
index 7d1d34e..22384d0 100644
--- a/doc/source/user/using.rst
+++ b/doc/source/user/using.rst
@@ -63,7 +63,7 @@ Constructing OpenStack SDK object
If what you want to do is get an OpenStack SDK Connection and you want it to
do all the normal things related to clouds.yaml and `OS_` environment variables,
a helper function is provided. The following will get you a fully configured
-`openstacksdk` instance.
+`openstack.connection.Connection` instance.
.. code-block:: python
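The body of that code block is not shown in this hunk. A sketch of the usage it refers to, assuming the helper is `os_client_config.make_sdk` and that `mycloud` names an entry in clouds.yaml::

    import os_client_config

    # make_sdk is assumed to be the helper referred to above; it returns an
    # openstack.connection.Connection configured from clouds.yaml and OS_* vars.
    conn = os_client_config.make_sdk(cloud='mycloud')
    for flavor in conn.compute.flavors():
        print(flavor.name)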
diff --git a/doc/source/user/vendor-support.rst b/doc/source/user/vendor-support.rst
index 714718a..9a57265 100644
--- a/doc/source/user/vendor-support.rst
+++ b/doc/source/user/vendor-support.rst
@@ -2,355 +2,5 @@
Vendor Support
==============
-OpenStack presents deployers with many options, some of which can expose
-differences to end users. `os-client-config` tries its best to collect
-information about various things a user would need to know. The following
-is a text representation of the vendor related defaults `os-client-config`
-knows about.
-
-Default Values
---------------
-
-These are the default behaviors unless a cloud is configured differently.
-
-* Identity uses `password` authentication
-* Identity API Version is 2
-* Image API Version is 2
-* Volume API Version is 2
-* Images must be in `qcow2` format
-* Images are uploaded using PUT interface
-* Public IPv4 is directly routable via DHCP from Neutron
-* IPv6 is not provided
-* Floating IPs are not required
-* Floating IPs are provided by Neutron
-* Security groups are provided by Neutron
-* Vendor specific agents are not used
-
-auro
-----
-
-https://api.auro.io:5000/v2.0
-
-============== ================
-Region Name Location
-============== ================
-van1 Vancouver, BC
-============== ================
-
-* Public IPv4 is provided via NAT with Neutron Floating IP
-
-betacloud
----------
-
-https://api-1.betacloud.io:5000/v3
-
-============== ==================
-Region Name Location
-============== ==================
-betacloud-1 Nuremberg, Germany
-============== ==================
-
-* Identity API Version is 3
-* Images must be in `raw` format
-* Public IPv4 is provided via NAT with Neutron Floating IP
-* Volume API Version is 3
-
-catalyst
---------
-
-https://api.cloud.catalyst.net.nz:5000/v2.0
-
-============== ================
-Region Name Location
-============== ================
-nz-por-1 Porirua, NZ
-nz_wlg_2 Wellington, NZ
-============== ================
-
-* Image API Version is 1
-* Images must be in `raw` format
-* Volume API Version is 1
-
-citycloud
----------
-
-https://identity1.citycloud.com:5000/v3/
-
-============== ================
-Region Name Location
-============== ================
-Buf1 Buffalo, NY
-Fra1 Frankfurt, DE
-Kna1 Karlskrona, SE
-La1 Los Angeles, CA
-Lon1 London, UK
-Sto2 Stockholm, SE
-============== ================
-
-* Identity API Version is 3
-* Public IPv4 is provided via NAT with Neutron Floating IP
-* Volume API Version is 1
-
-conoha
-------
-
-https://identity.%(region_name)s.conoha.io
-
-============== ================
-Region Name Location
-============== ================
-tyo1 Tokyo, JP
-sin1 Singapore
-sjc1 San Jose, CA
-============== ================
-
-* Image upload is not supported
-
-dreamcompute
-------------
-
-https://iad2.dream.io:5000
-
-============== ================
-Region Name Location
-============== ================
-RegionOne Ashburn, VA
-============== ================
-
-* Identity API Version is 3
-* Images must be in `raw` format
-* IPv6 is provided to every server
-
-dreamhost
----------
-
-Deprecated, please use dreamcompute
-
-https://keystone.dream.io/v2.0
-
-============== ================
-Region Name Location
-============== ================
-RegionOne Ashburn, VA
-============== ================
-
-* Images must be in `raw` format
-* Public IPv4 is provided via NAT with Neutron Floating IP
-* IPv6 is provided to every server
-
-otc
----
-
-https://iam.%(region_name)s.otc.t-systems.com/v3
-
-============== ================
-Region Name Location
-============== ================
-eu-de Germany
-============== ================
-
-* Identity API Version is 3
-* Images must be in `vhd` format
-* Public IPv4 is provided via NAT with Neutron Floating IP
-
-elastx
-------
-
-https://ops.elastx.net:5000/v2.0
-
-============== ================
-Region Name Location
-============== ================
-regionOne Stockholm, SE
-============== ================
-
-* Public IPv4 is provided via NAT with Neutron Floating IP
-
-entercloudsuite
----------------
-
-https://api.entercloudsuite.com/v2.0
-
-============== ================
-Region Name Location
-============== ================
-nl-ams1 Amsterdam, NL
-it-mil1 Milan, IT
-de-fra1 Frankfurt, DE
-============== ================
-
-* Image API Version is 1
-* Volume API Version is 1
-
-fuga
-----
-
-https://identity.api.fuga.io:5000
-
-============== ================
-Region Name Location
-============== ================
-cystack Netherlands
-============== ================
-
-* Identity API Version is 3
-* Volume API Version is 3
-
-internap
---------
-
-https://identity.api.cloud.iweb.com/v2.0
-
-============== ================
-Region Name Location
-============== ================
-ams01 Amsterdam, NL
-da01 Dallas, TX
-nyj01 New York, NY
-sin01 Singapore
-sjc01 San Jose, CA
-============== ================
-
-* Floating IPs are not supported
-
-limestonenetworks
------------------
-
-https://auth.cloud.lstn.net:5000/v3
-
-============== ==================
-Region Name Location
-============== ==================
-us-dfw-1 Dallas, TX
-us-slc Salt Lake City, UT
-============== ==================
-
-* Identity API Version is 3
-* Images must be in `raw` format
-* IPv6 is provided to every server connected to the `Public Internet` network
-
-ovh
----
-
-https://auth.cloud.ovh.net/v2.0
-
-============== ================
-Region Name Location
-============== ================
-BHS1 Beauharnois, QC
-SBG1 Strasbourg, FR
-GRA1 Gravelines, FR
-============== ================
-
-* Images may be in `raw` format. The `qcow2` default is also supported
-* Floating IPs are not supported
-
-rackspace
----------
-
-https://identity.api.rackspacecloud.com/v2.0/
-
-============== ================
-Region Name Location
-============== ================
-DFW Dallas, TX
-HKG Hong Kong
-IAD Washington, D.C.
-LON London, UK
-ORD Chicago, IL
-SYD Sydney, NSW
-============== ================
-
-* Database Service Type is `rax:database`
-* Compute Service Name is `cloudServersOpenStack`
-* Images must be in `vhd` format
-* Images must be uploaded using the Glance Task Interface
-* Floating IPs are not supported
-* Public IPv4 is directly routable via static config by Nova
-* IPv6 is provided to every server
-* Security groups are not supported
-* Uploaded Images need properties to not use vendor agent::
- :vm_mode: hvm
- :xenapi_use_agent: False
-* Volume API Version is 1
-* While passwords are recommended for use, API keys do work as well.
- The `rackspaceauth` python package must be installed, and then the following
- can be added to clouds.yaml::
-
- auth:
- username: myusername
- api_key: myapikey
- auth_type: rackspace_apikey
-
-switchengines
--------------
-
-https://keystone.cloud.switch.ch:5000/v2.0
-
-============== ================
-Region Name Location
-============== ================
-LS Lausanne, CH
-ZH Zurich, CH
-============== ================
-
-* Images must be in `raw` format
-* Images must be uploaded using the Glance Task Interface
-* Volume API Version is 1
-
-ultimum
--------
-
-https://console.ultimum-cloud.com:5000/v2.0
-
-============== ================
-Region Name Location
-============== ================
-RegionOne Prague, CZ
-============== ================
-
-* Volume API Version is 1
-
-unitedstack
------------
-
-https://identity.api.ustack.com/v3
-
-============== ================
-Region Name Location
-============== ================
-bj1 Beijing, CN
-gd1 Guangdong, CN
-============== ================
-
-* Identity API Version is 3
-* Images must be in `raw` format
-* Volume API Version is 1
-
-vexxhost
---------
-
-http://auth.vexxhost.net
-
-============== ================
-Region Name Location
-============== ================
-ca-ymq-1 Montreal, QC
-============== ================
-
-* DNS API Version is 1
-* Identity API Version is 3
-
-zetta
------
-
-https://identity.api.zetta.io/v3
-
-============== ================
-Region Name Location
-============== ================
-no-osl1 Oslo, NO
-============== ================
-
-* DNS API Version is 2
-* Identity API Version is 3
+Please see
+https://docs.openstack.org/openstacksdk/latest/user/vendor-support.html
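The vendor defaults removed above are still applied when a clouds.yaml entry names a profile; this change only moves their documentation (and the profile data itself) over to openstacksdk. A rough sketch, where the cloud name, credentials and region are illustrative::

    import os_client_config

    # Assumed clouds.yaml entry:
    #   clouds:
    #     my-ovh:
    #       profile: ovh          # pulls in the vendor defaults listed above
    #       auth:
    #         username: myuser
    #         password: secret
    #         project_name: myproject
    #       region_name: GRA1
    config = os_client_config.OpenStackConfig()
    cloud = config.get_one_cloud('my-ovh')
    print(cloud.get_auth_args()['auth_url'])  # supplied by the ovh profile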
diff --git a/lower-constraints.txt b/lower-constraints.txt
index 4ecc7de..3e9ce30 100644
--- a/lower-constraints.txt
+++ b/lower-constraints.txt
@@ -25,6 +25,7 @@ monotonic==0.6
mox3==0.20.0
netaddr==0.7.18
netifaces==0.10.4
+openstacksdk==0.13.0
oslo.i18n==3.15.3
oslo.utils==3.33.0
oslotest==3.2.0
diff --git a/os_client_config/_log.py b/os_client_config/_log.py
deleted file mode 100644
index ff2f2ea..0000000
--- a/os_client_config/_log.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) 2015 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-
-
-class NullHandler(logging.Handler):
- def emit(self, record):
- pass
-
-
-def setup_logging(name):
- log = logging.getLogger(name)
- if len(log.handlers) == 0:
- h = NullHandler()
- log.addHandler(h)
- return log
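The deleted helper only attached a do-nothing handler so that library logging stays silent unless the application configures logging; cloud_config.py below now imports the equivalent setup from `openstack._log`. For reference, a standard-library sketch of what the removed code did::

    import logging

    log = logging.getLogger(__name__)
    if not log.handlers:
        # logging.NullHandler is the stdlib equivalent of the removed class.
        log.addHandler(logging.NullHandler())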
diff --git a/os_client_config/cloud_config.py b/os_client_config/cloud_config.py
index 0d82ebf..88fe060 100644
--- a/os_client_config/cloud_config.py
+++ b/os_client_config/cloud_config.py
@@ -13,16 +13,11 @@
# under the License.
import importlib
-import math
-import warnings
-from keystoneauth1 import adapter
-import keystoneauth1.exceptions.catalog
-from keystoneauth1 import session
-import requestsexceptions
+from openstack import _log
+from openstack.config import cloud_region
import os_client_config
-from os_client_config import _log
from os_client_config import constructors
from os_client_config import exceptions
@@ -61,30 +56,10 @@ def _get_client(service_key):
return ctr
-def _make_key(key, service_type):
- if not service_type:
- return key
- else:
- service_type = service_type.lower().replace('-', '_')
- return "_".join([service_type, key])
-
-
-class CloudConfig(object):
- def __init__(self, name, region, config,
- force_ipv4=False, auth_plugin=None,
- openstack_config=None, session_constructor=None,
- app_name=None, app_version=None):
- self.name = name
- self.region = region
- self.config = config
+class CloudConfig(cloud_region.CloudRegion):
+ def __init__(self, *args, **kwargs):
+ super(CloudConfig, self).__init__(*args, **kwargs)
self.log = _log.setup_logging(__name__)
- self._force_ipv4 = force_ipv4
- self._auth = auth_plugin
- self._openstack_config = openstack_config
- self._keystone_session = None
- self._session_constructor = session_constructor or session.Session
- self._app_name = app_name
- self._app_version = app_version
def __getattr__(self, key):
"""Return arbitrary attributes."""
@@ -97,253 +72,20 @@ class CloudConfig(object):
else:
return None
- def __iter__(self):
- return self.config.__iter__()
-
- def __eq__(self, other):
- return (self.name == other.name and self.region == other.region
- and self.config == other.config)
-
- def __ne__(self, other):
- return not self == other
-
- def set_session_constructor(self, session_constructor):
- """Sets the Session constructor."""
- self._session_constructor = session_constructor
-
- def get_requests_verify_args(self):
- """Return the verify and cert values for the requests library."""
- if self.config['verify'] and self.config['cacert']:
- verify = self.config['cacert']
- else:
- verify = self.config['verify']
- if self.config['cacert']:
- warnings.warn(
- "You are specifying a cacert for the cloud {0} but "
- "also to ignore the host verification. The host SSL cert "
- "will not be verified.".format(self.name))
-
- cert = self.config.get('cert', None)
- if cert:
- if self.config['key']:
- cert = (cert, self.config['key'])
- return (verify, cert)
-
- def get_services(self):
- """Return a list of service types we know something about."""
- services = []
- for key, val in self.config.items():
- if (key.endswith('api_version')
- or key.endswith('service_type')
- or key.endswith('service_name')):
- services.append("_".join(key.split('_')[:-2]))
- return list(set(services))
-
- def get_auth_args(self):
- return self.config['auth']
-
- def get_interface(self, service_type=None):
- key = _make_key('interface', service_type)
- interface = self.config.get('interface')
- return self.config.get(key, interface)
-
- def get_region_name(self, service_type=None):
- if not service_type:
- return self.region
- key = _make_key('region_name', service_type)
- return self.config.get(key, self.region)
-
- def get_api_version(self, service_type):
- key = _make_key('api_version', service_type)
- return self.config.get(key, None)
-
- def get_service_type(self, service_type):
- key = _make_key('service_type', service_type)
- # Cinder did an evil thing where they defined a second service
- # type in the catalog. Of course, that's insane, so let's hide this
- # atrocity from the as-yet-unsullied eyes of our users.
- # Of course, if the user requests a volumev2, that structure should
- # still work.
- # What's even more amazing is that they did it AGAIN with cinder v3
- # And then I learned that mistral copied it.
- if service_type == 'volume':
- vol_type = self.get_api_version(service_type)
- if vol_type and vol_type.startswith('2'):
- service_type = 'volumev2'
- elif vol_type and vol_type.startswith('3'):
- service_type = 'volumev3'
- elif service_type == 'workflow':
- wk_type = self.get_api_version(service_type)
- if wk_type and wk_type.startswith('2'):
- service_type = 'workflowv2'
- return self.config.get(key, service_type)
-
- def get_service_name(self, service_type):
- key = _make_key('service_name', service_type)
- return self.config.get(key, None)
-
- def get_endpoint(self, service_type):
- key = _make_key('endpoint_override', service_type)
- old_key = _make_key('endpoint', service_type)
- return self.config.get(key, self.config.get(old_key, None))
+ def insert_user_agent(self):
+ self._keystone_session.additional_user_agent.append(
+ ('os-client-config', os_client_config.__version__))
+ super(CloudConfig, self).insert_user_agent()
@property
- def prefer_ipv6(self):
- return not self._force_ipv4
-
- @property
- def force_ipv4(self):
- return self._force_ipv4
-
- def get_auth(self):
- """Return a keystoneauth plugin from the auth credentials."""
- return self._auth
-
- def get_session(self):
- """Return a keystoneauth session based on the auth credentials."""
- if self._keystone_session is None:
- if not self._auth:
- raise exceptions.OpenStackConfigException(
- "Problem with auth parameters")
- (verify, cert) = self.get_requests_verify_args()
- # Turn off urllib3 warnings about insecure certs if we have
- # explicitly configured requests to tell it we do not want
- # cert verification
- if not verify:
- self.log.debug(
- "Turning off SSL warnings for {cloud}:{region}"
- " since verify=False".format(
- cloud=self.name, region=self.region))
- requestsexceptions.squelch_warnings(insecure_requests=not verify)
- self._keystone_session = self._session_constructor(
- auth=self._auth,
- verify=verify,
- cert=cert,
- timeout=self.config['api_timeout'])
- if hasattr(self._keystone_session, 'additional_user_agent'):
- self._keystone_session.additional_user_agent.append(
- ('os-client-config', os_client_config.__version__))
- # Using old keystoneauth with new os-client-config fails if
- # we pass in app_name and app_version. Those are not essential,
- # nor a reason to bump our minimum, so just test for the session
- # having the attribute post creation and set them then.
- if hasattr(self._keystone_session, 'app_name'):
- self._keystone_session.app_name = self._app_name
- if hasattr(self._keystone_session, 'app_version'):
- self._keystone_session.app_version = self._app_version
- return self._keystone_session
-
- def get_service_catalog(self):
- """Helper method to grab the service catalog."""
- return self._auth.get_access(self.get_session()).service_catalog
-
- def _get_version_args(self, service_key, version):
- """Translate OCC version args to those needed by ksa adapter.
-
- If no version is requested explicitly and we have a configured version,
- set the version parameter and let ksa deal with expanding that to
- min=ver.0, max=ver.latest.
-
- If version is set, pass it through.
-
- If version is not set and we don't have a configured version, default
- to latest.
- """
- if version == 'latest':
- return None, None, 'latest'
- if not version:
- version = self.get_api_version(service_key)
- if not version:
- return None, None, 'latest'
- return version, None, None
-
- def get_session_client(self, service_key, version=None):
- """Return a prepped requests adapter for a given service.
-
- This is useful for making direct requests calls against a
- 'mounted' endpoint. That is, if you do:
-
- client = get_session_client('compute')
-
- then you can do:
-
- client.get('/flavors')
-
- and it will work like you think.
- """
- (version, min_version, max_version) = self._get_version_args(
- service_key, version)
+ def region(self):
+ return self.region_name
- return adapter.Adapter(
- session=self.get_session(),
- service_type=self.get_service_type(service_key),
- service_name=self.get_service_name(service_key),
- interface=self.get_interface(service_key),
- version=version,
- min_version=min_version,
- max_version=max_version,
- region_name=self.region)
+ def get_region_name(self, *args):
+ return self.region_name
- def _get_highest_endpoint(self, service_types, kwargs):
- session = self.get_session()
- for service_type in service_types:
- kwargs['service_type'] = service_type
- try:
- # Return the highest version we find that matches
- # the request
- return session.get_endpoint(**kwargs)
- except keystoneauth1.exceptions.catalog.EndpointNotFound:
- pass
-
- def get_session_endpoint(
- self, service_key, min_version=None, max_version=None):
- """Return the endpoint from config or the catalog.
-
- If a configuration lists an explicit endpoint for a service,
- return that. Otherwise, fetch the service catalog from the
- keystone session and return the appropriate endpoint.
-
- :param service_key: Generic key for service, such as 'compute' or
- 'network'
-
- """
-
- override_endpoint = self.get_endpoint(service_key)
- if override_endpoint:
- return override_endpoint
- endpoint = None
- kwargs = {
- 'service_name': self.get_service_name(service_key),
- 'region_name': self.region
- }
- kwargs['interface'] = self.get_interface(service_key)
- if service_key == 'volume' and not self.get_api_version('volume'):
- # If we don't have a configured cinder version, we can't know
- # to request a different service_type
- min_version = float(min_version or 1)
- max_version = float(max_version or 3)
- min_major = math.trunc(float(min_version))
- max_major = math.trunc(float(max_version))
- versions = range(int(max_major) + 1, int(min_major), -1)
- service_types = []
- for version in versions:
- if version == 1:
- service_types.append('volume')
- else:
- service_types.append('volumev{v}'.format(v=version))
- else:
- service_types = [self.get_service_type(service_key)]
- endpoint = self._get_highest_endpoint(service_types, kwargs)
- if not endpoint:
- self.log.warning(
- "Keystone catalog entry not found ("
- "service_type=%s,service_name=%s"
- "interface=%s,region_name=%s)",
- service_key,
- kwargs['service_name'],
- kwargs['interface'],
- kwargs['region_name'])
- return endpoint
+ def get_cache_expiration(self):
+ return self.get_cache_expirations()
def get_legacy_client(
self, service_key, client_class=None, interface_key=None,
@@ -486,107 +228,3 @@ class CloudConfig(object):
constructor_kwargs[interface_key] = interface
return client_class(**constructor_kwargs)
-
- def get_cache_expiration_time(self):
- if self._openstack_config:
- return self._openstack_config.get_cache_expiration_time()
-
- def get_cache_path(self):
- if self._openstack_config:
- return self._openstack_config.get_cache_path()
-
- def get_cache_class(self):
- if self._openstack_config:
- return self._openstack_config.get_cache_class()
-
- def get_cache_arguments(self):
- if self._openstack_config:
- return self._openstack_config.get_cache_arguments()
-
- def get_cache_expiration(self):
- if self._openstack_config:
- return self._openstack_config.get_cache_expiration()
-
- def get_cache_resource_expiration(self, resource, default=None):
- """Get expiration time for a resource
-
- :param resource: Name of the resource type
- :param default: Default value to return if not found (optional,
- defaults to None)
-
- :returns: Expiration time for the resource type as float or default
- """
- if self._openstack_config:
- expiration = self._openstack_config.get_cache_expiration()
- if resource not in expiration:
- return default
- return float(expiration[resource])
-
- def requires_floating_ip(self):
- """Return whether or not this cloud requires floating ips.
-
-
- :returns: True or False if known, None if discovery is needed.
- If requires_floating_ip is not configured but the cloud is
- known to not provide floating ips, will return False.
- """
- if self.config['floating_ip_source'] == "None":
- return False
- return self.config.get('requires_floating_ip')
-
- def get_external_networks(self):
- """Get list of network names for external networks."""
- return [
- net['name'] for net in self.config['networks']
- if net['routes_externally']]
-
- def get_external_ipv4_networks(self):
- """Get list of network names for external IPv4 networks."""
- return [
- net['name'] for net in self.config['networks']
- if net['routes_ipv4_externally']]
-
- def get_external_ipv6_networks(self):
- """Get list of network names for external IPv6 networks."""
- return [
- net['name'] for net in self.config['networks']
- if net['routes_ipv6_externally']]
-
- def get_internal_networks(self):
- """Get list of network names for internal networks."""
- return [
- net['name'] for net in self.config['networks']
- if not net['routes_externally']]
-
- def get_internal_ipv4_networks(self):
- """Get list of network names for internal IPv4 networks."""
- return [
- net['name'] for net in self.config['networks']
- if not net['routes_ipv4_externally']]
-
- def get_internal_ipv6_networks(self):
- """Get list of network names for internal IPv6 networks."""
- return [
- net['name'] for net in self.config['networks']
- if not net['routes_ipv6_externally']]
-
- def get_default_network(self):
- """Get network used for default interactions."""
- for net in self.config['networks']:
- if net['default_interface']:
- return net['name']
- return None
-
- def get_nat_destination(self):
- """Get network used for NAT destination."""
- for net in self.config['networks']:
- if net['nat_destination']:
- return net['name']
- return None
-
- def get_nat_source(self):
- """Get network used for NAT source."""
- for net in self.config['networks']:
- if net.get('nat_source'):
- return net['name']
- return None
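With the code above removed, CloudConfig is a thin shim over `openstack.config.cloud_region.CloudRegion`: the remaining property and methods only map the old names (`region`, `get_region_name`, `get_cache_expiration`) onto the SDK equivalents, and everything else is inherited. A rough sketch of the resulting behaviour, with an illustrative cloud name::

    import os_client_config

    config = os_client_config.OpenStackConfig()
    cloud = config.get_one_cloud('mycloud')        # now a CloudRegion subclass

    assert cloud.region == cloud.region_name       # the shim property shown above
    session = cloud.get_session()                  # keystoneauth session built by the SDK
    compute = cloud.get_session_client('compute')  # keystoneauth Adapter, inherited
    print(compute.get('/flavors').status_code)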
diff --git a/os_client_config/config.py b/os_client_config/config.py
index f80e158..d506a04 100644
--- a/os_client_config/config.py
+++ b/os_client_config/config.py
@@ -12,1230 +12,20 @@
# License for the specific language governing permissions and limitations
# under the License.
+from openstack.config import loader
+from openstack.config.loader import * # noqa
-# alias because we already had an option named argparse
-import argparse as argparse_mod
-import collections
-import copy
-import json
-import os
-import re
-import sys
-import warnings
-
-import appdirs
-from keystoneauth1 import adapter
-from keystoneauth1 import loading
-import yaml
-
-from os_client_config import _log
from os_client_config import cloud_config
from os_client_config import defaults
-from os_client_config import exceptions
-from os_client_config import vendors
-
-APPDIRS = appdirs.AppDirs('openstack', 'OpenStack', multipath='/etc')
-CONFIG_HOME = APPDIRS.user_config_dir
-CACHE_PATH = APPDIRS.user_cache_dir
-
-UNIX_CONFIG_HOME = os.path.join(
- os.path.expanduser(os.path.join('~', '.config')), 'openstack')
-UNIX_SITE_CONFIG_HOME = '/etc/openstack'
-
-SITE_CONFIG_HOME = APPDIRS.site_config_dir
-
-CONFIG_SEARCH_PATH = [
- os.getcwd(),
- CONFIG_HOME, UNIX_CONFIG_HOME,
- SITE_CONFIG_HOME, UNIX_SITE_CONFIG_HOME
-]
-YAML_SUFFIXES = ('.yaml', '.yml')
-JSON_SUFFIXES = ('.json',)
-CONFIG_FILES = [
- os.path.join(d, 'clouds' + s)
- for d in CONFIG_SEARCH_PATH
- for s in YAML_SUFFIXES + JSON_SUFFIXES
-]
-SECURE_FILES = [
- os.path.join(d, 'secure' + s)
- for d in CONFIG_SEARCH_PATH
- for s in YAML_SUFFIXES + JSON_SUFFIXES
-]
-VENDOR_FILES = [
- os.path.join(d, 'clouds-public' + s)
- for d in CONFIG_SEARCH_PATH
- for s in YAML_SUFFIXES + JSON_SUFFIXES
-]
-
-BOOL_KEYS = ('insecure', 'cache')
-
-FORMAT_EXCLUSIONS = frozenset(['password'])
-
-
-# NOTE(dtroyer): This turns out to be not the best idea so let's move
-# overriding defaults to a kwarg to OpenStackConfig.__init__()
-# Remove this sometime in June 2015 once OSC is comfortably
-# changed-over and global-defaults is updated.
-def set_default(key, value):
- warnings.warn(
- "Use of set_default() is deprecated. Defaults should be set with the "
- "`override_defaults` parameter of OpenStackConfig."
- )
- defaults.get_defaults() # make sure the dict is initialized
- defaults._defaults[key] = value
-
-
-def get_boolean(value):
- if value is None:
- return False
- if type(value) is bool:
- return value
- if value.lower() == 'true':
- return True
- return False
-
-
-def _get_os_environ(envvar_prefix=None):
- ret = defaults.get_defaults()
- if not envvar_prefix:
- # This makes the or below be OS_ or OS_ which is a no-op
- envvar_prefix = 'OS_'
- environkeys = [k for k in os.environ.keys()
- if (k.startswith('OS_') or k.startswith(envvar_prefix))
- and not k.startswith('OS_TEST') # infra CI var
- and not k.startswith('OS_STD') # infra CI var
- ]
- for k in environkeys:
- newkey = k.split('_', 1)[-1].lower()
- ret[newkey] = os.environ[k]
- # If the only environ keys are selectors or behavior modification, don't
- # return anything
- selectors = set([
- 'OS_CLOUD', 'OS_REGION_NAME',
- 'OS_CLIENT_CONFIG_FILE', 'OS_CLIENT_SECURE_FILE', 'OS_CLOUD_NAME'])
- if set(environkeys) - selectors:
- return ret
- return None
-
-
-def _merge_clouds(old_dict, new_dict):
- """Like dict.update, except handling nested dicts."""
- ret = old_dict.copy()
- for (k, v) in new_dict.items():
- if isinstance(v, dict):
- if k in ret:
- ret[k] = _merge_clouds(ret[k], v)
- else:
- ret[k] = v.copy()
- else:
- ret[k] = v
- return ret
-
-
-def _auth_update(old_dict, new_dict_source):
- """Like dict.update, except handling the nested dict called auth."""
- new_dict = copy.deepcopy(new_dict_source)
- for (k, v) in new_dict.items():
- if k == 'auth':
- if k in old_dict:
- old_dict[k].update(v)
- else:
- old_dict[k] = v.copy()
- else:
- old_dict[k] = v
- return old_dict
-
-
-def _fix_argv(argv):
- # Transform any _ characters in arg names to - so that we don't
- # have to throw billions of compat argparse arguments around all
- # over the place.
- processed = collections.defaultdict(list)
- for index in range(0, len(argv)):
- # If the value starts with '--' and has '-' or '_' in it, then
- # it's worth looking at it
- if re.match('^--.*(_|-)+.*', argv[index]):
- split_args = argv[index].split('=')
- orig = split_args[0]
- new = orig.replace('_', '-')
- if orig != new:
- split_args[0] = new
- argv[index] = "=".join(split_args)
- # Save both for later so we can throw an error about dupes
- processed[new].append(orig)
- overlap = []
- for new, old in processed.items():
- if len(old) > 1:
- overlap.extend(old)
- if overlap:
- raise exceptions.OpenStackConfigException(
- "The following options were given: '{options}' which contain"
- " duplicates except that one has _ and one has -. There is"
- " no sane way for us to know what you're doing. Remove the"
- " duplicate option and try again".format(
- options=','.join(overlap)))
-
-
-class OpenStackConfig(object):
-
- def __init__(self, config_files=None, vendor_files=None,
- override_defaults=None, force_ipv4=None,
- envvar_prefix=None, secure_files=None,
- pw_func=None, session_constructor=None,
- app_name=None, app_version=None,
- load_yaml_config=True):
- self.log = _log.setup_logging(__name__)
- self._session_constructor = session_constructor
- self._app_name = app_name
- self._app_version = app_version
-
- if load_yaml_config:
- self._config_files = config_files or CONFIG_FILES
- self._secure_files = secure_files or SECURE_FILES
- self._vendor_files = vendor_files or VENDOR_FILES
- else:
- self._config_files = []
- self._secure_files = []
- self._vendor_files = []
-
- config_file_override = os.environ.get('OS_CLIENT_CONFIG_FILE')
- if config_file_override:
- self._config_files.insert(0, config_file_override)
-
- secure_file_override = os.environ.get('OS_CLIENT_SECURE_FILE')
- if secure_file_override:
- self._secure_files.insert(0, secure_file_override)
-
- self.defaults = defaults.get_defaults()
- if override_defaults:
- self.defaults.update(override_defaults)
-
- # First, use a config file if it exists where expected
- self.config_filename, self.cloud_config = self._load_config_file()
- _, secure_config = self._load_secure_file()
- if secure_config:
- self.cloud_config = _merge_clouds(
- self.cloud_config, secure_config)
-
- if not self.cloud_config:
- self.cloud_config = {'clouds': {}}
- if 'clouds' not in self.cloud_config:
- self.cloud_config['clouds'] = {}
-
- # Grab ipv6 preference settings from env
- client_config = self.cloud_config.get('client', {})
-
- if force_ipv4 is not None:
- # If it's passed in to the constructor, honor it.
- self.force_ipv4 = force_ipv4
- else:
- # Get the backwards compat value
- prefer_ipv6 = get_boolean(
- os.environ.get(
- 'OS_PREFER_IPV6', client_config.get(
- 'prefer_ipv6', client_config.get(
- 'prefer-ipv6', True))))
- force_ipv4 = get_boolean(
- os.environ.get(
- 'OS_FORCE_IPV4', client_config.get(
- 'force_ipv4', client_config.get(
- 'broken-ipv6', False))))
-
- self.force_ipv4 = force_ipv4
- if not prefer_ipv6:
- # this will only be false if someone set it explicitly
- # honor their wishes
- self.force_ipv4 = True
-
- # Next, process environment variables and add them to the mix
- self.envvar_key = os.environ.get('OS_CLOUD_NAME', 'envvars')
- if self.envvar_key in self.cloud_config['clouds']:
- raise exceptions.OpenStackConfigException(
- '"{0}" defines a cloud named "{1}", but'
- ' OS_CLOUD_NAME is also set to "{1}". Please rename'
- ' either your environment based cloud, or one of your'
- ' file-based clouds.'.format(self.config_filename,
- self.envvar_key))
-
- self.default_cloud = os.environ.get('OS_CLOUD')
-
- envvars = _get_os_environ(envvar_prefix=envvar_prefix)
- if envvars:
- self.cloud_config['clouds'][self.envvar_key] = envvars
- if not self.default_cloud:
- self.default_cloud = self.envvar_key
-
- if not self.default_cloud and self.cloud_config['clouds']:
- if len(self.cloud_config['clouds'].keys()) == 1:
- # If there is only one cloud just use it. This matches envvars
- # behavior and allows for much less typing.
- # TODO(mordred) allow someone to mark a cloud as "default" in
- # clouds.yaml.
- # The next/iter thing is for python3 compat where dict.keys
- # returns an iterator but in python2 it's a list.
- self.default_cloud = next(iter(
- self.cloud_config['clouds'].keys()))
-
- # Finally, fall through and make a cloud that starts with defaults
- # because we need somewhere to put arguments, and there are neither
- # config files or env vars
- if not self.cloud_config['clouds']:
- self.cloud_config = dict(
- clouds=dict(defaults=dict(self.defaults)))
- self.default_cloud = 'defaults'
-
- self._cache_expiration_time = 0
- self._cache_path = CACHE_PATH
- self._cache_class = 'dogpile.cache.null'
- self._cache_arguments = {}
- self._cache_expiration = {}
- if 'cache' in self.cloud_config:
- cache_settings = self._normalize_keys(self.cloud_config['cache'])
-
- # expiration_time used to be 'max_age' but the dogpile setting
- # is expiration_time. Support max_age for backwards compat.
- self._cache_expiration_time = cache_settings.get(
- 'expiration_time', cache_settings.get(
- 'max_age', self._cache_expiration_time))
-
- # If cache class is given, use that. If not, but if cache time
- # is given, default to memory. Otherwise, default to nothing.
- if self._cache_expiration_time:
- self._cache_class = 'dogpile.cache.memory'
- self._cache_class = self.cloud_config['cache'].get(
- 'class', self._cache_class)
-
- self._cache_path = os.path.expanduser(
- cache_settings.get('path', self._cache_path))
- self._cache_arguments = cache_settings.get(
- 'arguments', self._cache_arguments)
- self._cache_expiration = cache_settings.get(
- 'expiration', self._cache_expiration)
-
- # Flag location to hold the peeked value of an argparse timeout value
- self._argv_timeout = False
-
- # Save the password callback
- # password = self._pw_callback(prompt="Password: ")
- self._pw_callback = pw_func
-
- def get_extra_config(self, key, defaults=None):
- """Fetch an arbitrary extra chunk of config, laying in defaults.
-
- :param string key: name of the config section to fetch
- :param dict defaults: (optional) default values to merge under the
- found config
- """
- if not defaults:
- defaults = {}
- return _merge_clouds(
- self._normalize_keys(defaults),
- self._normalize_keys(self.cloud_config.get(key, {})))
-
- def _load_config_file(self):
- return self._load_yaml_json_file(self._config_files)
-
- def _load_secure_file(self):
- return self._load_yaml_json_file(self._secure_files)
-
- def _load_vendor_file(self):
- return self._load_yaml_json_file(self._vendor_files)
-
- def _load_yaml_json_file(self, filelist):
- for path in filelist:
- if os.path.exists(path):
- with open(path, 'r') as f:
- if path.endswith('json'):
- return path, json.load(f)
- else:
- return path, yaml.safe_load(f)
- return (None, {})
-
- def _normalize_keys(self, config):
- new_config = {}
- for key, value in config.items():
- key = key.replace('-', '_')
- if isinstance(value, dict):
- new_config[key] = self._normalize_keys(value)
- elif isinstance(value, bool):
- new_config[key] = value
- elif isinstance(value, int) and key != 'verbose_level':
- new_config[key] = str(value)
- elif isinstance(value, float):
- new_config[key] = str(value)
- else:
- new_config[key] = value
- return new_config
-
- def get_cache_expiration_time(self):
- return int(self._cache_expiration_time)
-
- def get_cache_interval(self):
- return self.get_cache_expiration_time()
-
- def get_cache_max_age(self):
- return self.get_cache_expiration_time()
-
- def get_cache_path(self):
- return self._cache_path
-
- def get_cache_class(self):
- return self._cache_class
-
- def get_cache_arguments(self):
- return copy.deepcopy(self._cache_arguments)
-
- def get_cache_expiration(self):
- return copy.deepcopy(self._cache_expiration)
-
- def _expand_region_name(self, region_name):
- return {'name': region_name, 'values': {}}
-
- def _expand_regions(self, regions):
- ret = []
- for region in regions:
- if isinstance(region, dict):
- ret.append(copy.deepcopy(region))
- else:
- ret.append(self._expand_region_name(region))
- return ret
-
- def _get_regions(self, cloud):
- if cloud not in self.cloud_config['clouds']:
- return [self._expand_region_name('')]
- regions = self._get_known_regions(cloud)
- if not regions:
- # We don't know of any regions use a workable default.
- regions = [self._expand_region_name('')]
- return regions
-
- def _get_known_regions(self, cloud):
- config = self._normalize_keys(self.cloud_config['clouds'][cloud])
- if 'regions' in config:
- return self._expand_regions(config['regions'])
- elif 'region_name' in config:
- if isinstance(config['region_name'], list):
- regions = config['region_name']
- else:
- regions = config['region_name'].split(',')
- if len(regions) > 1:
- warnings.warn(
- "Comma separated lists in region_name are deprecated."
- " Please use a yaml list in the regions"
- " parameter in {0} instead.".format(self.config_filename))
- return self._expand_regions(regions)
- else:
- # crappit. we don't have a region defined.
- new_cloud = dict()
- our_cloud = self.cloud_config['clouds'].get(cloud, dict())
- self._expand_vendor_profile(cloud, new_cloud, our_cloud)
- if 'regions' in new_cloud and new_cloud['regions']:
- return self._expand_regions(new_cloud['regions'])
- elif 'region_name' in new_cloud and new_cloud['region_name']:
- return [self._expand_region_name(new_cloud['region_name'])]
-
- def _get_region(self, cloud=None, region_name=''):
- if region_name is None:
- region_name = ''
- if not cloud:
- return self._expand_region_name(region_name)
-
- regions = self._get_known_regions(cloud)
- if not regions:
- return self._expand_region_name(region_name)
-
- if not region_name:
- return regions[0]
-
- for region in regions:
- if region['name'] == region_name:
- return region
-
- raise exceptions.OpenStackConfigException(
- 'Region {region_name} is not a valid region name for cloud'
- ' {cloud}. Valid choices are {region_list}. Please note that'
- ' region names are case sensitive.'.format(
- region_name=region_name,
- region_list=','.join([r['name'] for r in regions]),
- cloud=cloud))
-
- def get_cloud_names(self):
- return self.cloud_config['clouds'].keys()
-
- def _get_base_cloud_config(self, name):
- cloud = dict()
-
- # Only validate cloud name if one was given
- if name and name not in self.cloud_config['clouds']:
- raise exceptions.OpenStackConfigException(
- "Cloud {name} was not found.".format(
- name=name))
-
- our_cloud = self.cloud_config['clouds'].get(name, dict())
-
- # Get the defaults
- cloud.update(self.defaults)
- self._expand_vendor_profile(name, cloud, our_cloud)
-
- if 'auth' not in cloud:
- cloud['auth'] = dict()
-
- _auth_update(cloud, our_cloud)
- if 'cloud' in cloud:
- del cloud['cloud']
-
- return cloud
-
- def _expand_vendor_profile(self, name, cloud, our_cloud):
- # Expand a profile if it exists. 'cloud' is an old confusing name
- # for this.
- profile_name = our_cloud.get('profile', our_cloud.get('cloud', None))
- if profile_name and profile_name != self.envvar_key:
- if 'cloud' in our_cloud:
- warnings.warn(
- "{0} uses the keyword 'cloud' to reference a known "
- "vendor profile. This has been deprecated in favor of the "
- "'profile' keyword.".format(self.config_filename))
- vendor_filename, vendor_file = self._load_vendor_file()
- if vendor_file and profile_name in vendor_file['public-clouds']:
- _auth_update(cloud, vendor_file['public-clouds'][profile_name])
- else:
- profile_data = vendors.get_profile(profile_name)
- if profile_data:
- status = profile_data.pop('status', 'active')
- message = profile_data.pop('message', '')
- if status == 'deprecated':
- warnings.warn(
- "{profile_name} is deprecated: {message}".format(
- profile_name=profile_name, message=message))
- elif status == 'shutdown':
- raise exceptions.OpenStackConfigException(
- "{profile_name} references a cloud that no longer"
- " exists: {message}".format(
- profile_name=profile_name, message=message))
- _auth_update(cloud, profile_data)
- else:
- # Can't find the requested vendor config, go about business
- warnings.warn("Couldn't find the vendor profile '{0}', for"
- " the cloud '{1}'".format(profile_name,
- name))
-
- def _project_scoped(self, cloud):
- return ('project_id' in cloud or 'project_name' in cloud
- or 'project_id' in cloud['auth']
- or 'project_name' in cloud['auth'])
-
- def _validate_networks(self, networks, key):
- value = None
- for net in networks:
- if value and net[key]:
- raise exceptions.OpenStackConfigException(
- "Duplicate network entries for {key}: {net1} and {net2}."
- " Only one network can be flagged with {key}".format(
- key=key,
- net1=value['name'],
- net2=net['name']))
- if not value and net[key]:
- value = net
-
- def _fix_backwards_networks(self, cloud):
- # Leave the external_network and internal_network keys in the
- # dict because consuming code might be expecting them.
- networks = []
- # Normalize existing network entries
- for net in cloud.get('networks', []):
- name = net.get('name')
- if not name:
- raise exceptions.OpenStackConfigException(
- 'Entry in network list is missing required field "name".')
- network = dict(
- name=name,
- routes_externally=get_boolean(net.get('routes_externally')),
- nat_source=get_boolean(net.get('nat_source')),
- nat_destination=get_boolean(net.get('nat_destination')),
- default_interface=get_boolean(net.get('default_interface')),
- )
- # routes_ipv4_externally defaults to the value of routes_externally
- network['routes_ipv4_externally'] = get_boolean(
- net.get(
- 'routes_ipv4_externally', network['routes_externally']))
- # routes_ipv6_externally defaults to the value of routes_externally
- network['routes_ipv6_externally'] = get_boolean(
- net.get(
- 'routes_ipv6_externally', network['routes_externally']))
- networks.append(network)
-
- for key in ('external_network', 'internal_network'):
- external = key.startswith('external')
- if key in cloud and 'networks' in cloud:
- raise exceptions.OpenStackConfigException(
- "Both {key} and networks were specified in the config."
- " Please remove {key} from the config and use the network"
- " list to configure network behavior.".format(key=key))
- if key in cloud:
- warnings.warn(
- "{key} is deprecated. Please replace with an entry in"
- " a dict inside of the networks list with name: {name}"
- " and routes_externally: {external}".format(
- key=key, name=cloud[key], external=external))
- networks.append(dict(
- name=cloud[key],
- routes_externally=external,
- nat_destination=not external,
- default_interface=external))
-
- # Validate that we don't have duplicates
- self._validate_networks(networks, 'nat_destination')
- self._validate_networks(networks, 'default_interface')
-
- cloud['networks'] = networks
- return cloud
-
- def _handle_domain_id(self, cloud):
- # Allow people to just specify domain once if it's the same
- mappings = {
- 'domain_id': ('user_domain_id', 'project_domain_id'),
- 'domain_name': ('user_domain_name', 'project_domain_name'),
- }
- for target_key, possible_values in mappings.items():
- if not self._project_scoped(cloud):
- if target_key in cloud and target_key not in cloud['auth']:
- cloud['auth'][target_key] = cloud.pop(target_key)
- continue
- for key in possible_values:
- if target_key in cloud['auth'] and key not in cloud['auth']:
- cloud['auth'][key] = cloud['auth'][target_key]
- cloud.pop(target_key, None)
- cloud['auth'].pop(target_key, None)
- return cloud
-
- def _fix_backwards_project(self, cloud):
- # Do the lists backwards so that project_name is the ultimate winner
- # Also handle moving domain names into auth so that domain mapping
- # is easier
- mappings = {
- 'domain_id': ('domain_id', 'domain-id'),
- 'domain_name': ('domain_name', 'domain-name'),
- 'user_domain_id': ('user_domain_id', 'user-domain-id'),
- 'user_domain_name': ('user_domain_name', 'user-domain-name'),
- 'project_domain_id': ('project_domain_id', 'project-domain-id'),
- 'project_domain_name': (
- 'project_domain_name', 'project-domain-name'),
- 'token': ('auth-token', 'auth_token', 'token'),
- }
- if cloud.get('auth_type', None) == 'v2password':
- # If v2password is explicitly requested, this is to deal with old
- # clouds. That's fine - we need to map settings in the opposite
- # direction
- mappings['tenant_id'] = (
- 'project_id', 'project-id', 'tenant_id', 'tenant-id')
- mappings['tenant_name'] = (
- 'project_name', 'project-name', 'tenant_name', 'tenant-name')
- else:
- mappings['project_id'] = (
- 'tenant_id', 'tenant-id', 'project_id', 'project-id')
- mappings['project_name'] = (
- 'tenant_name', 'tenant-name', 'project_name', 'project-name')
- for target_key, possible_values in mappings.items():
- target = None
- for key in possible_values:
- if key in cloud:
- target = str(cloud[key])
- del cloud[key]
- if key in cloud['auth']:
- target = str(cloud['auth'][key])
- del cloud['auth'][key]
- if target:
- cloud['auth'][target_key] = target
- return cloud
-
- def _fix_backwards_auth_plugin(self, cloud):
- # Do the lists backwards so that auth_type is the ultimate winner
- mappings = {
- 'auth_type': ('auth_plugin', 'auth_type'),
- }
- for target_key, possible_values in mappings.items():
- target = None
- for key in possible_values:
- if key in cloud:
- target = cloud[key]
- del cloud[key]
- cloud[target_key] = target
- # Because we force alignment to v3 nouns, we want to force
- # use of the auth plugin that can do auto-selection and dealing
- # with that based on auth parameters. v2password is basically
- # completely broken
- return cloud
-
- def register_argparse_arguments(self, parser, argv, service_keys=None):
- """Register all of the common argparse options needed.
-
- Given an argparse parser, register the keystoneauth Session arguments,
- the keystoneauth Auth Plugin Options and os-cloud. Also, peek in the
- argv to see if all of the auth plugin options should be registered
- or merely the ones already configured.
-
- :param argparse.ArgumentParser: parser to attach argparse options to
- :param list argv: the arguments provided to the application
- :param string service_keys: Service or list of services this argparse
- should be specialized for, if known.
- The first item in the list will be used
- as the default value for service_type
- (optional)
-
- :raises exceptions.OpenStackConfigException if an invalid auth-type
- is requested
- """
-
- if service_keys is None:
- service_keys = []
-
- # Fix argv in place - mapping any keys with embedded _ in them to -
- _fix_argv(argv)
-
- local_parser = argparse_mod.ArgumentParser(add_help=False)
-
- for p in (parser, local_parser):
- p.add_argument(
- '--os-cloud',
- metavar='<name>',
- default=os.environ.get('OS_CLOUD', None),
- help='Named cloud to connect to')
-
- # we need to peek to see if timeout was actually passed, since
- # the keystoneauth declaration of it has a default, which means
- # we have no clue if the value we get is from the ksa default
- # for from the user passing it explicitly. We'll stash it for later
- local_parser.add_argument('--timeout', metavar='<timeout>')
-
- # We need for get_one_cloud to be able to peek at whether a token
- # was passed so that we can swap the default from password to
- # token if it was. And we need to also peek for --os-auth-token
- # for novaclient backwards compat
- local_parser.add_argument('--os-token')
- local_parser.add_argument('--os-auth-token')
-
- # Peek into the future and see if we have an auth-type set in
- # config AND a cloud set, so that we know which command line
- # arguments to register and show to the user (the user may want
- # to say something like:
- # openstack --os-cloud=foo --os-oidctoken=bar
- # although I think that user is the cause of my personal pain
- options, _args = local_parser.parse_known_args(argv)
- if options.timeout:
- self._argv_timeout = True
-
- # validate = False because we're not _actually_ loading here
- # we're only peeking, so it's the wrong time to assert that
- # the rest of the arguments given are invalid for the plugin
- # chosen (for instance, --help may be requested, so that the
- # user can see what options they may want to give)
- cloud = self.get_one_cloud(argparse=options, validate=False)
- default_auth_type = cloud.config['auth_type']
-
- try:
- loading.register_auth_argparse_arguments(
- parser, argv, default=default_auth_type)
- except Exception:
- # Hiding the keystoneauth exception because we're not actually
- # loading the auth plugin at this point, so the error message
- # from it doesn't actually make sense to os-client-config users
- options, _args = parser.parse_known_args(argv)
- plugin_names = loading.get_available_plugin_names()
- raise exceptions.OpenStackConfigException(
- "An invalid auth-type was specified: {auth_type}."
- " Valid choices are: {plugin_names}.".format(
- auth_type=options.os_auth_type,
- plugin_names=",".join(plugin_names)))
-
- if service_keys:
- primary_service = service_keys[0]
- else:
- primary_service = None
- loading.register_session_argparse_arguments(parser)
- adapter.register_adapter_argparse_arguments(
- parser, service_type=primary_service)
- for service_key in service_keys:
- # legacy clients have un-prefixed api-version options
- parser.add_argument(
- '--{service_key}-api-version'.format(
- service_key=service_key.replace('_', '-'),
- help=argparse_mod.SUPPRESS))
- adapter.register_service_adapter_argparse_arguments(
- parser, service_type=service_key)
-
- # Backwards compat options for legacy clients
- parser.add_argument('--http-timeout', help=argparse_mod.SUPPRESS)
- parser.add_argument('--os-endpoint-type', help=argparse_mod.SUPPRESS)
- parser.add_argument('--endpoint-type', help=argparse_mod.SUPPRESS)
-
- def _fix_backwards_interface(self, cloud):
- new_cloud = {}
- for key in cloud.keys():
- if key.endswith('endpoint_type'):
- target_key = key.replace('endpoint_type', 'interface')
- else:
- target_key = key
- new_cloud[target_key] = cloud[key]
- return new_cloud
-
- def _fix_backwards_api_timeout(self, cloud):
- new_cloud = {}
- # requests can only have one timeout, which means that in a single
- # cloud there is no point in different timeout values. However,
- # for some reason many of the legacy clients decided to shove their
- # service name into the arg name for reasons surpassing sanity. If
- # we find any values that are not api_timeout, overwrite api_timeout
- # with the value
- service_timeout = None
- for key in cloud.keys():
- if key.endswith('timeout') and not (
- key == 'timeout' or key == 'api_timeout'):
- service_timeout = cloud[key]
- else:
- new_cloud[key] = cloud[key]
- if service_timeout is not None:
- new_cloud['api_timeout'] = service_timeout
- # The common argparse arg from keystoneauth is called timeout, but
- # os-client-config expects it to be called api_timeout
- if self._argv_timeout:
- if 'timeout' in new_cloud and new_cloud['timeout']:
- new_cloud['api_timeout'] = new_cloud.pop('timeout')
- return new_cloud
-
- def get_all_clouds(self):
-
- clouds = []
-
- for cloud in self.get_cloud_names():
- for region in self._get_regions(cloud):
- if region:
- clouds.append(self.get_one_cloud(
- cloud, region_name=region['name']))
- return clouds
-
- def _fix_args(self, args=None, argparse=None):
- """Massage the passed-in options
-
- Replace - with _ and strip os_ prefixes.
-
- Convert an argparse Namespace object to a dict, removing values
- that are either None or ''.
- """
- if not args:
- args = {}
-
- if argparse:
- # Convert the passed-in Namespace
- o_dict = vars(argparse)
- parsed_args = dict()
- for k in o_dict:
- if o_dict[k] is not None and o_dict[k] != '':
- parsed_args[k] = o_dict[k]
- args.update(parsed_args)
-
- os_args = dict()
- new_args = dict()
- for (key, val) in iter(args.items()):
- if type(args[key]) == dict:
- # dive into the auth dict
- new_args[key] = self._fix_args(args[key])
- continue
-
- key = key.replace('-', '_')
- if key.startswith('os_'):
- os_args[key[3:]] = val
- else:
- new_args[key] = val
- new_args.update(os_args)
- return new_args
-
- def _find_winning_auth_value(self, opt, config):
- opt_name = opt.name.replace('-', '_')
- if opt_name in config:
- return config[opt_name]
- else:
- deprecated = getattr(opt, 'deprecated', getattr(
- opt, 'deprecated_opts', []))
- for d_opt in deprecated:
- d_opt_name = d_opt.name.replace('-', '_')
- if d_opt_name in config:
- return config[d_opt_name]
-
- def auth_config_hook(self, config):
- """Allow examination of config values before loading auth plugin
-
- OpenStackClient will override this to perform additional checks
- on auth_type.
- """
- return config
-
- def _get_auth_loader(self, config):
- # Use the 'none' plugin for variants of None specified,
- # since it does not look up endpoints or tokens but rather
- # does a passthrough. This is useful for things like Ironic
- # that have a keystoneless operational mode, but means we're
- # still dealing with a keystoneauth Session object, so all the
- # _other_ things (SSL arg handling, timeout) all work consistently
- if config['auth_type'] in (None, "None", ''):
- # 'none' auth_plugin has token as 'notused' so validate_auth will
- # not strip the value (which it does for actual python None)
- config['auth_type'] = 'none'
- elif config['auth_type'] == 'token_endpoint':
- # Humans have been trained to use a thing called token_endpoint
-            # That it does not exist in keystoneauth is irrelevant - it not
- # doing what they want causes them sorrow.
- config['auth_type'] = 'admin_token'
- return loading.get_plugin_loader(config['auth_type'])
-
- def _validate_auth(self, config, loader):
- # May throw a keystoneauth1.exceptions.NoMatchingPlugin
-
- plugin_options = loader.get_options()
-
- for p_opt in plugin_options:
- # if it's in config.auth, win, kill it from config dict
- # if it's in config and not in config.auth, move it
- # deprecated loses to current
- # provided beats default, deprecated or not
- winning_value = self._find_winning_auth_value(
- p_opt,
- config['auth'],
- )
- if not winning_value:
- winning_value = self._find_winning_auth_value(
- p_opt,
- config,
- )
-
- config = self._clean_up_after_ourselves(
- config,
- p_opt,
- winning_value,
- )
-
- if winning_value:
- # Prefer the plugin configuration dest value if the value's key
- # is marked as deprecated.
- if p_opt.dest is None:
- good_name = p_opt.name.replace('-', '_')
- config['auth'][good_name] = winning_value
- else:
- config['auth'][p_opt.dest] = winning_value
-
- # See if this needs a prompting
- config = self.option_prompt(config, p_opt)
-
- return config
-
- def _validate_auth_correctly(self, config, loader):
- # May throw a keystoneauth1.exceptions.NoMatchingPlugin
-
- plugin_options = loader.get_options()
-
- for p_opt in plugin_options:
- # if it's in config, win, move it and kill it from config dict
- # if it's in config.auth but not in config it's good
- # deprecated loses to current
- # provided beats default, deprecated or not
- winning_value = self._find_winning_auth_value(
- p_opt,
- config,
- )
- if not winning_value:
- winning_value = self._find_winning_auth_value(
- p_opt,
- config['auth'],
- )
-
- config = self._clean_up_after_ourselves(
- config,
- p_opt,
- winning_value,
- )
-
- # See if this needs a prompting
- config = self.option_prompt(config, p_opt)
-
- return config
-
- def option_prompt(self, config, p_opt):
- """Prompt user for option that requires a value"""
- if (
- getattr(p_opt, 'prompt', None) is not None and
- p_opt.dest not in config['auth'] and
- self._pw_callback is not None
- ):
- config['auth'][p_opt.dest] = self._pw_callback(p_opt.prompt)
- return config
-
- def _clean_up_after_ourselves(self, config, p_opt, winning_value):
-
- # Clean up after ourselves
- for opt in [p_opt.name] + [o.name for o in p_opt.deprecated]:
- opt = opt.replace('-', '_')
- config.pop(opt, None)
- config['auth'].pop(opt, None)
-
- if winning_value:
- # Prefer the plugin configuration dest value if the value's key
-            # is marked as deprecated.
- if p_opt.dest is None:
- config['auth'][p_opt.name.replace('-', '_')] = (
- winning_value)
- else:
- config['auth'][p_opt.dest] = winning_value
- return config
-
- def magic_fixes(self, config):
- """Perform the set of magic argument fixups"""
-
- # Infer token plugin if a token was given
- if (('auth' in config and 'token' in config['auth']) or
- ('auth_token' in config and config['auth_token']) or
- ('token' in config and config['token'])):
- config.setdefault('token', config.pop('auth_token', None))
-
- # These backwards compat values are only set via argparse. If it's
- # there, it's because it was passed in explicitly, and should win
- config = self._fix_backwards_api_timeout(config)
- if 'endpoint_type' in config:
- config['interface'] = config.pop('endpoint_type')
-
- config = self._fix_backwards_auth_plugin(config)
- config = self._fix_backwards_project(config)
- config = self._fix_backwards_interface(config)
- config = self._fix_backwards_networks(config)
- config = self._handle_domain_id(config)
-
- for key in BOOL_KEYS:
- if key in config:
- if type(config[key]) is not bool:
- config[key] = get_boolean(config[key])
-
- # TODO(mordred): Special casing auth_url here. We should
- # come back to this betterer later so that it's
- # more generalized
- if 'auth' in config and 'auth_url' in config['auth']:
- config['auth']['auth_url'] = config['auth']['auth_url'].format(
- **config)
-
- return config
-
- def get_one_cloud(self, cloud=None, validate=True,
- argparse=None, **kwargs):
- """Retrieve a single cloud configuration and merge additional options
-
- :param string cloud:
- The name of the configuration to load from clouds.yaml
- :param boolean validate:
- Validate the config. Setting this to False causes no auth plugin
- to be created. It's really only useful for testing.
- :param Namespace argparse:
- An argparse Namespace object; allows direct passing in of
- argparse options to be added to the cloud config. Values
- of None and '' will be removed.
- :param region_name: Name of the region of the cloud.
- :param kwargs: Additional configuration options
-
- :raises: keystoneauth1.exceptions.MissingRequiredOptions
- on missing required auth parameters
- """
-
- args = self._fix_args(kwargs, argparse=argparse)
-
- if cloud is None:
- if 'cloud' in args:
- cloud = args['cloud']
- else:
- cloud = self.default_cloud
-
- config = self._get_base_cloud_config(cloud)
-
- # Get region specific settings
- if 'region_name' not in args:
- args['region_name'] = ''
- region = self._get_region(cloud=cloud, region_name=args['region_name'])
- args['region_name'] = region['name']
- region_args = copy.deepcopy(region['values'])
-
- # Regions is a list that we can use to create a list of cloud/region
- # objects. It does not belong in the single-cloud dict
- config.pop('regions', None)
-
- # Can't just do update, because None values take over
- for arg_list in region_args, args:
- for (key, val) in iter(arg_list.items()):
- if val is not None:
- if key == 'auth' and config[key] is not None:
- config[key] = _auth_update(config[key], val)
- else:
- config[key] = val
-
- config = self.magic_fixes(config)
- config = self._normalize_keys(config)
-
- # NOTE(dtroyer): OSC needs a hook into the auth args before the
- # plugin is loaded in order to maintain backward-
- # compatible behaviour
- config = self.auth_config_hook(config)
-
- if validate:
- loader = self._get_auth_loader(config)
- config = self._validate_auth(config, loader)
- auth_plugin = loader.load_from_options(**config['auth'])
- else:
- auth_plugin = None
-
- # If any of the defaults reference other values, we need to expand
- for (key, value) in config.items():
- if hasattr(value, 'format') and key not in FORMAT_EXCLUSIONS:
- config[key] = value.format(**config)
-
- force_ipv4 = config.pop('force_ipv4', self.force_ipv4)
- prefer_ipv6 = config.pop('prefer_ipv6', True)
- if not prefer_ipv6:
- force_ipv4 = True
-
- if cloud is None:
- cloud_name = ''
- else:
- cloud_name = str(cloud)
- return cloud_config.CloudConfig(
- name=cloud_name,
- region=config['region_name'],
- config=config,
- force_ipv4=force_ipv4,
- auth_plugin=auth_plugin,
- openstack_config=self,
- session_constructor=self._session_constructor,
- app_name=self._app_name,
- app_version=self._app_version,
- )
-
- def get_one_cloud_osc(
- self,
- cloud=None,
- validate=True,
- argparse=None,
- **kwargs
- ):
- """Retrieve a single cloud configuration and merge additional options
-
- :param string cloud:
- The name of the configuration to load from clouds.yaml
- :param boolean validate:
- Validate the config. Setting this to False causes no auth plugin
- to be created. It's really only useful for testing.
- :param Namespace argparse:
- An argparse Namespace object; allows direct passing in of
- argparse options to be added to the cloud config. Values
- of None and '' will be removed.
- :param region_name: Name of the region of the cloud.
- :param kwargs: Additional configuration options
-
- :raises: keystoneauth1.exceptions.MissingRequiredOptions
- on missing required auth parameters
- """
-
- args = self._fix_args(kwargs, argparse=argparse)
-
- if cloud is None:
- if 'cloud' in args:
- cloud = args['cloud']
- else:
- cloud = self.default_cloud
-
- config = self._get_base_cloud_config(cloud)
-
- # Get region specific settings
- if 'region_name' not in args:
- args['region_name'] = ''
- region = self._get_region(cloud=cloud, region_name=args['region_name'])
- args['region_name'] = region['name']
- region_args = copy.deepcopy(region['values'])
-
- # Regions is a list that we can use to create a list of cloud/region
- # objects. It does not belong in the single-cloud dict
- config.pop('regions', None)
-
- # Can't just do update, because None values take over
- for arg_list in region_args, args:
- for (key, val) in iter(arg_list.items()):
- if val is not None:
- if key == 'auth' and config[key] is not None:
- config[key] = _auth_update(config[key], val)
- else:
- config[key] = val
-
- config = self.magic_fixes(config)
-
- # NOTE(dtroyer): OSC needs a hook into the auth args before the
- # plugin is loaded in order to maintain backward-
- # compatible behaviour
- config = self.auth_config_hook(config)
-
- if validate:
- loader = self._get_auth_loader(config)
- config = self._validate_auth_correctly(config, loader)
- auth_plugin = loader.load_from_options(**config['auth'])
- else:
- auth_plugin = None
-
- # If any of the defaults reference other values, we need to expand
- for (key, value) in config.items():
- if hasattr(value, 'format') and key not in FORMAT_EXCLUSIONS:
- config[key] = value.format(**config)
-
- force_ipv4 = config.pop('force_ipv4', self.force_ipv4)
- prefer_ipv6 = config.pop('prefer_ipv6', True)
- if not prefer_ipv6:
- force_ipv4 = True
-
- if cloud is None:
- cloud_name = ''
- else:
- cloud_name = str(cloud)
- return cloud_config.CloudConfig(
- name=cloud_name,
- region=config['region_name'],
- config=self._normalize_keys(config),
- force_ipv4=force_ipv4,
- auth_plugin=auth_plugin,
- openstack_config=self,
- )
-
- @staticmethod
- def set_one_cloud(config_file, cloud, set_config=None):
- """Set a single cloud configuration.
- :param string config_file:
- The path to the config file to edit. If this file does not exist
- it will be created.
- :param string cloud:
- The name of the configuration to save to clouds.yaml
- :param dict set_config: Configuration options to be set
- """
- set_config = set_config or {}
- cur_config = {}
- try:
- with open(config_file) as fh:
- cur_config = yaml.safe_load(fh)
- except IOError as e:
-        # Re-raise anything other than 'No such file' (errno 2)
- if e.errno != 2:
- raise
- pass
+class OpenStackConfig(loader.OpenStackConfig):
- clouds_config = cur_config.get('clouds', {})
- cloud_config = _auth_update(clouds_config.get(cloud, {}), set_config)
- clouds_config[cloud] = cloud_config
- cur_config['clouds'] = clouds_config
+ _cloud_region_class = cloud_config.CloudConfig
+ _defaults_module = defaults
- with open(config_file, 'w') as fh:
- yaml.safe_dump(cur_config, fh, default_flow_style=False)
+ get_one_cloud = loader.OpenStackConfig.get_one
+ get_all_clouds = loader.OpenStackConfig.get_all
if __name__ == '__main__':
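A minimal, non-authoritative usage sketch, not part of the patch: the cloud name 'example' and region 'region-1' are assumptions. It shows that callers of the legacy entry points keep working, because get_one_cloud and get_all_clouds now simply alias openstack.config.loader.OpenStackConfig.get_one and get_all.

    from os_client_config import config

    occ = config.OpenStackConfig()
    cloud = occ.get_one_cloud('example', region_name='region-1')
    session = cloud.get_session()  # keystoneauth1 Session, built by openstacksdk
    for cc in occ.get_all_clouds():
        print(cc.name, cc.region)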
diff --git a/os_client_config/defaults.py b/os_client_config/defaults.py
index 1231cce..3256478 100644
--- a/os_client_config/defaults.py
+++ b/os_client_config/defaults.py
@@ -12,41 +12,13 @@
# License for the specific language governing permissions and limitations
# under the License.
-import json
import os
-import threading
+
+from openstack.config import defaults
_json_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)), 'defaults.json')
-_defaults = None
-_defaults_lock = threading.Lock()
def get_defaults():
- global _defaults
- if _defaults is not None:
- return _defaults.copy()
- with _defaults_lock:
- if _defaults is not None:
- # Did someone else just finish filling it?
- return _defaults.copy()
- # Python language specific defaults
- # These are defaults related to use of python libraries, they are
- # not qualities of a cloud.
- #
- # NOTE(harlowja): update a in-memory dict, before updating
- # the global one so that other callers of get_defaults do not
- # see the partially filled one.
- tmp_defaults = dict(
- api_timeout=None,
- verify=True,
- cacert=None,
- cert=None,
- key=None,
- )
- with open(_json_path, 'r') as json_file:
- updates = json.load(json_file)
- if updates is not None:
- tmp_defaults.update(updates)
- _defaults = tmp_defaults
- return tmp_defaults.copy()
+ return defaults.get_defaults(json_path=_json_path)
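Illustrative only; the concrete values depend on the defaults.json bundled with the release, so treat the commented values as assumptions. The shimmed module still hands back the merged defaults dict:

    from os_client_config import defaults

    d = defaults.get_defaults()
    print(d.get('interface'))     # 'public' per the schema defaults
    print(d.get('image_format'))  # 'qcow2' per the schema defaults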
diff --git a/os_client_config/exceptions.py b/os_client_config/exceptions.py
index 556dd49..898292a 100644
--- a/os_client_config/exceptions.py
+++ b/os_client_config/exceptions.py
@@ -12,9 +12,9 @@
# License for the specific language governing permissions and limitations
# under the License.
+from openstack import exceptions
-class OpenStackConfigException(Exception):
- """Something went wrong with parsing your OpenStack Config."""
+OpenStackConfigException = exceptions.ConfigException
class OpenStackConfigVersionException(OpenStackConfigException):
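A brief sketch of why existing error handling keeps working: OpenStackConfigException is now an alias for openstack.exceptions.ConfigException, so except clauses written against either name catch the same error. The message below is invented for illustration.

    from openstack import exceptions as sdk_exceptions
    from os_client_config import exceptions as occ_exceptions

    try:
        raise sdk_exceptions.ConfigException('could not parse clouds.yaml')
    except occ_exceptions.OpenStackConfigException as exc:
        print('caught:', exc)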
diff --git a/os_client_config/schema.json b/os_client_config/schema.json
deleted file mode 100644
index 8110d58..0000000
--- a/os_client_config/schema.json
+++ /dev/null
@@ -1,121 +0,0 @@
-{
- "$schema": "http://json-schema.org/draft-04/schema#",
- "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/schema.json#",
- "type": "object",
- "properties": {
- "auth_type": {
- "name": "Auth Type",
- "description": "Name of authentication plugin to be used",
- "default": "password",
- "type": "string"
- },
- "disable_vendor_agent": {
- "name": "Disable Vendor Agent Properties",
- "description": "Image properties required to disable vendor agent",
- "type": "object",
- "properties": {}
- },
- "floating_ip_source": {
- "name": "Floating IP Source",
- "description": "Which service provides Floating IPs",
- "enum": [ "neutron", "nova", "None" ],
- "default": "neutron"
- },
- "image_api_use_tasks": {
- "name": "Image Task API",
- "description": "Does the cloud require the Image Task API",
- "default": false,
- "type": "boolean"
- },
- "image_format": {
- "name": "Image Format",
- "description": "Format for uploaded Images",
- "default": "qcow2",
- "type": "string"
- },
- "interface": {
- "name": "API Interface",
- "description": "Which API Interface should connections hit",
- "default": "public",
- "enum": [ "public", "internal", "admin" ]
- },
- "secgroup_source": {
- "name": "Security Group Source",
- "description": "Which service provides security groups",
- "default": "neutron",
- "enum": [ "neutron", "nova", "None" ]
- },
- "baremetal_api_version": {
- "name": "Baremetal API Service Type",
- "description": "Baremetal API Service Type",
- "default": "1",
- "type": "string"
- },
- "compute_api_version": {
- "name": "Compute API Version",
- "description": "Compute API Version",
- "default": "2",
- "type": "string"
- },
- "database_api_version": {
- "name": "Database API Version",
- "description": "Database API Version",
- "default": "1.0",
- "type": "string"
- },
- "dns_api_version": {
- "name": "DNS API Version",
- "description": "DNS API Version",
- "default": "2",
- "type": "string"
- },
- "identity_api_version": {
- "name": "Identity API Version",
- "description": "Identity API Version",
- "default": "2",
- "type": "string"
- },
- "image_api_version": {
- "name": "Image API Version",
- "description": "Image API Version",
- "default": "1",
- "type": "string"
- },
- "network_api_version": {
- "name": "Network API Version",
- "description": "Network API Version",
- "default": "2",
- "type": "string"
- },
- "object_store_api_version": {
- "name": "Object Storage API Version",
- "description": "Object Storage API Version",
- "default": "1",
- "type": "string"
- },
- "volume_api_version": {
- "name": "Volume API Version",
- "description": "Volume API Version",
- "default": "2",
- "type": "string"
- }
- },
- "required": [
- "auth_type",
- "baremetal_api_version",
- "compute_api_version",
- "database_api_version",
- "disable_vendor_agent",
- "dns_api_version",
- "floating_ip_source",
- "identity_api_version",
- "image_api_use_tasks",
- "image_api_version",
- "image_format",
- "interface",
- "network_api_version",
- "object_store_api_version",
- "secgroup_source",
- "volume_api_version"
- ]
-}
diff --git a/os_client_config/tests/base.py b/os_client_config/tests/base.py
index e672a0b..abbd256 100644
--- a/os_client_config/tests/base.py
+++ b/os_client_config/tests/base.py
@@ -215,6 +215,8 @@ class TestCase(base.BaseTestCase):
self.no_yaml = _write_yaml(NO_CONF)
self.useFixture(fixtures.MonkeyPatch(
'os_client_config.__version__', '1.2.3'))
+ self.useFixture(fixtures.MonkeyPatch(
+ 'openstack.version.__version__', '3.4.5'))
# Isolate the test runs from the environment
# Do this as two loops because you can't modify the dict in a loop
diff --git a/os_client_config/tests/test_cloud_config.py b/os_client_config/tests/test_cloud_config.py
index 86a71e2..d4a0b0f 100644
--- a/os_client_config/tests/test_cloud_config.py
+++ b/os_client_config/tests/test_cloud_config.py
@@ -16,6 +16,8 @@ from keystoneauth1 import exceptions as ksa_exceptions
from keystoneauth1 import session as ksa_session
import mock
+from openstack.config import cloud_region
+
from os_client_config import cloud_config
from os_client_config import defaults
from os_client_config import exceptions
@@ -26,7 +28,6 @@ fake_config_dict = {'a': 1, 'os_b': 2, 'c': 3, 'os_c': 4}
fake_services_dict = {
'compute_api_version': '2',
'compute_endpoint_override': 'http://compute.example.com',
- 'compute_region_name': 'region-bl',
'telemetry_endpoint': 'http://telemetry.example.com',
'interface': 'public',
'image_service_type': 'mage',
@@ -45,14 +46,14 @@ class TestCloudConfig(base.TestCase):
self.assertEqual("region-al", cc.region)
# Look up straight value
- self.assertEqual(1, cc.a)
+ self.assertEqual('1', cc.a)
# Look up prefixed attribute, fail - returns None
self.assertIsNone(cc.os_b)
# Look up straight value, then prefixed value
- self.assertEqual(3, cc.c)
- self.assertEqual(3, cc.os_c)
+ self.assertEqual('3', cc.c)
+ self.assertEqual('3', cc.os_c)
# Lookup mystery attribute
self.assertIsNone(cc.x)
@@ -139,7 +140,6 @@ class TestCloudConfig(base.TestCase):
self.assertEqual('admin', cc.get_interface('identity'))
self.assertEqual('region-al', cc.get_region_name())
self.assertEqual('region-al', cc.get_region_name('image'))
- self.assertEqual('region-bl', cc.get_region_name('compute'))
self.assertIsNone(cc.get_api_version('image'))
self.assertEqual('2', cc.get_api_version('compute'))
self.assertEqual('mage', cc.get_service_type('image'))
@@ -194,10 +194,10 @@ class TestCloudConfig(base.TestCase):
cc.get_session()
mock_session.assert_called_with(
auth=mock.ANY,
- verify=True, cert=None, timeout=None)
+ verify=True, cert=None, timeout=None, discovery_cache=None)
self.assertEqual(
- fake_session.additional_user_agent,
- [('os-client-config', '1.2.3')])
+ [('os-client-config', '1.2.3'), ('openstacksdk', '3.4.5')],
+ fake_session.additional_user_agent)
@mock.patch.object(ksa_session, 'Session')
def test_get_session_with_app_name(self, mock_session):
@@ -214,12 +214,12 @@ class TestCloudConfig(base.TestCase):
cc.get_session()
mock_session.assert_called_with(
auth=mock.ANY,
- verify=True, cert=None, timeout=None)
+ verify=True, cert=None, timeout=None, discovery_cache=None)
self.assertEqual(fake_session.app_name, "test_app")
self.assertEqual(fake_session.app_version, "test_version")
self.assertEqual(
- fake_session.additional_user_agent,
- [('os-client-config', '1.2.3')])
+ [('os-client-config', '1.2.3'), ('openstacksdk', '3.4.5')],
+ fake_session.additional_user_agent)
@mock.patch.object(ksa_session, 'Session')
def test_get_session_with_timeout(self, mock_session):
@@ -234,10 +234,10 @@ class TestCloudConfig(base.TestCase):
cc.get_session()
mock_session.assert_called_with(
auth=mock.ANY,
- verify=True, cert=None, timeout=9)
+ verify=True, cert=None, timeout=9, discovery_cache=None)
self.assertEqual(
- fake_session.additional_user_agent,
- [('os-client-config', '1.2.3')])
+ [('os-client-config', '1.2.3'), ('openstacksdk', '3.4.5')],
+ fake_session.additional_user_agent)
@mock.patch.object(ksa_session, 'Session')
def test_override_session_endpoint_override(self, mock_session):
@@ -259,7 +259,7 @@ class TestCloudConfig(base.TestCase):
cc.get_session_endpoint('telemetry'),
fake_services_dict['telemetry_endpoint'])
- @mock.patch.object(cloud_config.CloudConfig, 'get_session')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session')
def test_session_endpoint(self, mock_get_session):
mock_session = mock.Mock()
mock_get_session.return_value = mock_session
@@ -274,7 +274,7 @@ class TestCloudConfig(base.TestCase):
region_name='region-al',
service_type='orchestration')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session')
def test_session_endpoint_not_found(self, mock_get_session):
exc_to_raise = ksa_exceptions.catalog.EndpointNotFound
mock_get_session.return_value.get_endpoint.side_effect = exc_to_raise
@@ -282,9 +282,9 @@ class TestCloudConfig(base.TestCase):
"test1", "region-al", {}, auth_plugin=mock.Mock())
self.assertIsNone(cc.get_session_endpoint('notfound'))
- @mock.patch.object(cloud_config.CloudConfig, 'get_api_version')
- @mock.patch.object(cloud_config.CloudConfig, 'get_auth_args')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_api_version')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_auth_args')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_object_store_password(
self,
mock_get_session_endpoint,
@@ -313,8 +313,8 @@ class TestCloudConfig(base.TestCase):
'endpoint_type': 'public',
})
- @mock.patch.object(cloud_config.CloudConfig, 'get_auth_args')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_auth_args')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_object_store_password_v2(
self, mock_get_session_endpoint, mock_get_auth_args):
mock_client = mock.Mock()
@@ -339,8 +339,8 @@ class TestCloudConfig(base.TestCase):
'endpoint_type': 'public',
})
- @mock.patch.object(cloud_config.CloudConfig, 'get_auth_args')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_auth_args')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_object_store(
self, mock_get_session_endpoint, mock_get_auth_args):
mock_client = mock.Mock()
@@ -360,8 +360,8 @@ class TestCloudConfig(base.TestCase):
'endpoint_type': 'public',
})
- @mock.patch.object(cloud_config.CloudConfig, 'get_auth_args')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_auth_args')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_object_store_timeout(
self, mock_get_session_endpoint, mock_get_auth_args):
mock_client = mock.Mock()
@@ -371,7 +371,8 @@ class TestCloudConfig(base.TestCase):
config_dict.update(fake_services_dict)
config_dict['api_timeout'] = 9
cc = cloud_config.CloudConfig(
- "test1", "region-al", config_dict, auth_plugin=mock.Mock())
+ name="test1", region_name="region-al", config=config_dict,
+ auth_plugin=mock.Mock())
cc.get_legacy_client('object-store', mock_client)
mock_client.assert_called_with(
session=mock.ANY,
@@ -382,7 +383,7 @@ class TestCloudConfig(base.TestCase):
'endpoint_type': 'public',
})
- @mock.patch.object(cloud_config.CloudConfig, 'get_auth_args')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_auth_args')
def test_legacy_client_object_store_endpoint(
self, mock_get_auth_args):
mock_client = mock.Mock()
@@ -402,7 +403,7 @@ class TestCloudConfig(base.TestCase):
'endpoint_type': 'public',
})
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_image(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -422,7 +423,7 @@ class TestCloudConfig(base.TestCase):
service_type='mage'
)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_image_override(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -443,7 +444,7 @@ class TestCloudConfig(base.TestCase):
service_type='mage'
)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_image_versioned(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -465,7 +466,7 @@ class TestCloudConfig(base.TestCase):
service_type='mage'
)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_image_unversioned(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/'
@@ -487,7 +488,7 @@ class TestCloudConfig(base.TestCase):
service_type='mage'
)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_image_argument(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v3'
@@ -509,7 +510,7 @@ class TestCloudConfig(base.TestCase):
service_type='mage'
)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_network(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -527,7 +528,7 @@ class TestCloudConfig(base.TestCase):
session=mock.ANY,
service_name=None)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_compute(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -545,7 +546,7 @@ class TestCloudConfig(base.TestCase):
session=mock.ANY,
service_name=None)
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_identity(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com/v2'
@@ -564,7 +565,7 @@ class TestCloudConfig(base.TestCase):
session=mock.ANY,
service_name='locks')
- @mock.patch.object(cloud_config.CloudConfig, 'get_session_endpoint')
+ @mock.patch.object(cloud_region.CloudRegion, 'get_session_endpoint')
def test_legacy_client_identity_v3(self, mock_get_session_endpoint):
mock_client = mock.Mock()
mock_get_session_endpoint.return_value = 'http://example.com'
diff --git a/os_client_config/tests/test_config.py b/os_client_config/tests/test_config.py
index 8ea6ee1..09f603b 100644
--- a/os_client_config/tests/test_config.py
+++ b/os_client_config/tests/test_config.py
@@ -21,9 +21,10 @@ import fixtures
import testtools
import yaml
+from openstack.config import defaults
+
from os_client_config import cloud_config
from os_client_config import config
-from os_client_config import defaults
from os_client_config import exceptions
from os_client_config.tests import base
@@ -967,13 +968,6 @@ class TestConfigDefault(base.TestCase):
self._assert_cloud_details(cc)
self.assertEqual('password', cc.auth_type)
- def test_set_default_before_init(self):
- config.set_default('identity_api_version', '4')
- c = config.OpenStackConfig(config_files=[self.cloud_yaml],
- vendor_files=[self.vendor_yaml])
- cc = c.get_one_cloud(cloud='_test-cloud_', argparse=None)
- self.assertEqual('4', cc.identity_api_version)
-
class TestBackwardsCompatibility(base.TestCase):
diff --git a/os_client_config/tests/test_json.py b/os_client_config/tests/test_json.py
deleted file mode 100644
index f618f3b..0000000
--- a/os_client_config/tests/test_json.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import glob
-import json
-import os
-
-import jsonschema
-from testtools import content
-
-from os_client_config import defaults
-from os_client_config.tests import base
-
-
-class TestConfig(base.TestCase):
-
- def json_diagnostics(self, exc_info):
- self.addDetail('filename', content.text_content(self.filename))
- for error in sorted(self.validator.iter_errors(self.json_data)):
- self.addDetail('jsonschema', content.text_content(str(error)))
-
- def test_defaults_valid_json(self):
- _schema_path = os.path.join(
- os.path.dirname(os.path.realpath(defaults.__file__)),
- 'schema.json')
- schema = json.load(open(_schema_path, 'r'))
- self.validator = jsonschema.Draft4Validator(schema)
- self.addOnException(self.json_diagnostics)
-
- self.filename = os.path.join(
- os.path.dirname(os.path.realpath(defaults.__file__)),
- 'defaults.json')
- self.json_data = json.load(open(self.filename, 'r'))
-
- self.assertTrue(self.validator.is_valid(self.json_data))
-
- def test_vendors_valid_json(self):
- _schema_path = os.path.join(
- os.path.dirname(os.path.realpath(defaults.__file__)),
- 'vendor-schema.json')
- schema = json.load(open(_schema_path, 'r'))
- self.validator = jsonschema.Draft4Validator(schema)
- self.addOnException(self.json_diagnostics)
-
- _vendors_path = os.path.join(
- os.path.dirname(os.path.realpath(defaults.__file__)),
- 'vendors')
- for self.filename in glob.glob(os.path.join(_vendors_path, '*.json')):
- self.json_data = json.load(open(self.filename, 'r'))
-
- self.assertTrue(self.validator.is_valid(self.json_data))
diff --git a/os_client_config/vendor-schema.json b/os_client_config/vendor-schema.json
deleted file mode 100644
index 8193a19..0000000
--- a/os_client_config/vendor-schema.json
+++ /dev/null
@@ -1,223 +0,0 @@
-{
- "$schema": "http://json-schema.org/draft-04/schema#",
- "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/vendor-schema.json#",
- "type": "object",
- "properties": {
- "name": {
- "type": "string"
- },
- "profile": {
- "type": "object",
- "properties": {
- "auth": {
- "type": "object",
- "properties": {
- "auth_url": {
- "name": "Auth URL",
- "description": "URL of the primary Keystone endpoint",
- "type": "string"
- }
- }
- },
- "auth_type": {
- "name": "Auth Type",
- "description": "Name of authentication plugin to be used",
- "default": "password",
- "type": "string"
- },
- "disable_vendor_agent": {
- "name": "Disable Vendor Agent Properties",
- "description": "Image properties required to disable vendor agent",
- "type": "object",
- "properties": {}
- },
- "floating_ip_source": {
- "name": "Floating IP Source",
- "description": "Which service provides Floating IPs",
- "enum": [ "neutron", "nova", "None" ],
- "default": "neutron"
- },
- "image_api_use_tasks": {
- "name": "Image Task API",
- "description": "Does the cloud require the Image Task API",
- "default": false,
- "type": "boolean"
- },
- "image_format": {
- "name": "Image Format",
- "description": "Format for uploaded Images",
- "default": "qcow2",
- "type": "string"
- },
- "interface": {
- "name": "API Interface",
- "description": "Which API Interface should connections hit",
- "default": "public",
- "enum": [ "public", "internal", "admin" ]
- },
- "message": {
- "name": "Status message",
- "description": "Optional message with information related to status",
- "type": "string"
- },
- "requires_floating_ip": {
- "name": "Requires Floating IP",
- "description": "Whether the cloud requires a floating IP to route traffic off of the cloud",
- "default": null,
- "type": ["boolean", "null"]
- },
- "secgroup_source": {
- "name": "Security Group Source",
- "description": "Which service provides security groups",
- "enum": [ "neutron", "nova", "None" ],
- "default": "neutron"
- },
- "status": {
- "name": "Vendor status",
- "description": "Status of the vendor's cloud",
- "enum": [ "active", "deprecated", "shutdown"],
- "default": "active"
- },
- "compute_service_name": {
- "name": "Compute API Service Name",
- "description": "Compute API Service Name",
- "type": "string"
- },
- "database_service_name": {
- "name": "Database API Service Name",
- "description": "Database API Service Name",
- "type": "string"
- },
- "dns_service_name": {
- "name": "DNS API Service Name",
- "description": "DNS API Service Name",
- "type": "string"
- },
- "identity_service_name": {
- "name": "Identity API Service Name",
- "description": "Identity API Service Name",
- "type": "string"
- },
- "image_service_name": {
- "name": "Image API Service Name",
- "description": "Image API Service Name",
- "type": "string"
- },
- "volume_service_name": {
- "name": "Volume API Service Name",
- "description": "Volume API Service Name",
- "type": "string"
- },
- "network_service_name": {
- "name": "Network API Service Name",
- "description": "Network API Service Name",
- "type": "string"
- },
- "object_service_name": {
- "name": "Object Storage API Service Name",
- "description": "Object Storage API Service Name",
- "type": "string"
- },
- "baremetal_service_name": {
- "name": "Baremetal API Service Name",
- "description": "Baremetal API Service Name",
- "type": "string"
- },
- "compute_service_type": {
- "name": "Compute API Service Type",
- "description": "Compute API Service Type",
- "type": "string"
- },
- "database_service_type": {
- "name": "Database API Service Type",
- "description": "Database API Service Type",
- "type": "string"
- },
- "dns_service_type": {
- "name": "DNS API Service Type",
- "description": "DNS API Service Type",
- "type": "string"
- },
- "identity_service_type": {
- "name": "Identity API Service Type",
- "description": "Identity API Service Type",
- "type": "string"
- },
- "image_service_type": {
- "name": "Image API Service Type",
- "description": "Image API Service Type",
- "type": "string"
- },
- "volume_service_type": {
- "name": "Volume API Service Type",
- "description": "Volume API Service Type",
- "type": "string"
- },
- "network_service_type": {
- "name": "Network API Service Type",
- "description": "Network API Service Type",
- "type": "string"
- },
- "object_service_type": {
- "name": "Object Storage API Service Type",
- "description": "Object Storage API Service Type",
- "type": "string"
- },
- "baremetal_service_type": {
- "name": "Baremetal API Service Type",
- "description": "Baremetal API Service Type",
- "type": "string"
- },
- "compute_api_version": {
- "name": "Compute API Version",
- "description": "Compute API Version",
- "type": "string"
- },
- "database_api_version": {
- "name": "Database API Version",
- "description": "Database API Version",
- "type": "string"
- },
- "dns_api_version": {
- "name": "DNS API Version",
- "description": "DNS API Version",
- "type": "string"
- },
- "identity_api_version": {
- "name": "Identity API Version",
- "description": "Identity API Version",
- "type": "string"
- },
- "image_api_version": {
- "name": "Image API Version",
- "description": "Image API Version",
- "type": "string"
- },
- "volume_api_version": {
- "name": "Volume API Version",
- "description": "Volume API Version",
- "type": "string"
- },
- "network_api_version": {
- "name": "Network API Version",
- "description": "Network API Version",
- "type": "string"
- },
- "object_api_version": {
- "name": "Object Storage API Version",
- "description": "Object Storage API Version",
- "type": "string"
- },
- "baremetal_api_version": {
- "name": "Baremetal API Version",
- "description": "Baremetal API Version",
- "type": "string"
- }
- }
- }
- },
- "required": [
- "name",
- "profile"
- ]
-}
diff --git a/os_client_config/vendors/__init__.py b/os_client_config/vendors/__init__.py
index 3e1d20a..231f619 100644
--- a/os_client_config/vendors/__init__.py
+++ b/os_client_config/vendors/__init__.py
@@ -12,26 +12,4 @@
# License for the specific language governing permissions and limitations
# under the License.
-import glob
-import json
-import os
-
-import yaml
-
-_vendors_path = os.path.dirname(os.path.realpath(__file__))
-_vendor_defaults = None
-
-
-def get_profile(profile_name):
- global _vendor_defaults
- if _vendor_defaults is None:
- _vendor_defaults = {}
- for vendor in glob.glob(os.path.join(_vendors_path, '*.yaml')):
- with open(vendor, 'r') as f:
- vendor_data = yaml.safe_load(f)
- _vendor_defaults[vendor_data['name']] = vendor_data['profile']
- for vendor in glob.glob(os.path.join(_vendors_path, '*.json')):
- with open(vendor, 'r') as f:
- vendor_data = json.load(f)
- _vendor_defaults[vendor_data['name']] = vendor_data['profile']
- return _vendor_defaults.get(profile_name)
+from openstack.config.vendors import get_profile # noqa
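Illustration, not part of the patch: profile lookup is now served by openstacksdk. Whether a given name such as 'rackspace' resolves depends on the vendor data shipped with the installed openstacksdk, and the dict shape below assumes it mirrors the 'profile' mapping from the JSON files deleted later in this diff.

    from os_client_config import vendors

    profile = vendors.get_profile('rackspace')
    if profile:
        print(profile['auth']['auth_url'])
        print(profile.get('image_format'))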
diff --git a/os_client_config/vendors/auro.json b/os_client_config/vendors/auro.json
deleted file mode 100644
index 410a8e1..0000000
--- a/os_client_config/vendors/auro.json
+++ /dev/null
@@ -1,11 +0,0 @@
-{
- "name": "auro",
- "profile": {
- "auth": {
- "auth_url": "https://api.van1.auro.io:5000/v2.0"
- },
- "identity_api_version": "2",
- "region_name": "van1",
- "requires_floating_ip": true
- }
-}
diff --git a/os_client_config/vendors/betacloud.json b/os_client_config/vendors/betacloud.json
deleted file mode 100644
index 8678334..0000000
--- a/os_client_config/vendors/betacloud.json
+++ /dev/null
@@ -1,14 +0,0 @@
-{
- "name": "betacloud",
- "profile": {
- "auth": {
- "auth_url": "https://api-1.betacloud.io:5000/v3"
- },
- "regions": [
- "betacloud-1"
- ],
- "identity_api_version": "3",
- "image_format": "raw",
- "volume_api_version": "3"
- }
-}
diff --git a/os_client_config/vendors/bluebox.json b/os_client_config/vendors/bluebox.json
deleted file mode 100644
index 647c842..0000000
--- a/os_client_config/vendors/bluebox.json
+++ /dev/null
@@ -1,7 +0,0 @@
-{
- "name": "bluebox",
- "profile": {
- "volume_api_version": "1",
- "region_name": "RegionOne"
- }
-}
diff --git a/os_client_config/vendors/catalyst.json b/os_client_config/vendors/catalyst.json
deleted file mode 100644
index 3ad7507..0000000
--- a/os_client_config/vendors/catalyst.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "name": "catalyst",
- "profile": {
- "auth": {
- "auth_url": "https://api.cloud.catalyst.net.nz:5000/v2.0"
- },
- "regions": [
- "nz-por-1",
- "nz_wlg_2"
- ],
- "image_api_version": "1",
- "volume_api_version": "1",
- "image_format": "raw"
- }
-}
diff --git a/os_client_config/vendors/citycloud.json b/os_client_config/vendors/citycloud.json
deleted file mode 100644
index c9ac335..0000000
--- a/os_client_config/vendors/citycloud.json
+++ /dev/null
@@ -1,19 +0,0 @@
-{
- "name": "citycloud",
- "profile": {
- "auth": {
- "auth_url": "https://identity1.citycloud.com:5000/v3/"
- },
- "regions": [
- "Buf1",
- "La1",
- "Fra1",
- "Lon1",
- "Sto2",
- "Kna1"
- ],
- "requires_floating_ip": true,
- "volume_api_version": "1",
- "identity_api_version": "3"
- }
-}
diff --git a/os_client_config/vendors/conoha.json b/os_client_config/vendors/conoha.json
deleted file mode 100644
index 5636f09..0000000
--- a/os_client_config/vendors/conoha.json
+++ /dev/null
@@ -1,14 +0,0 @@
-{
- "name": "conoha",
- "profile": {
- "auth": {
- "auth_url": "https://identity.{region_name}.conoha.io"
- },
- "regions": [
- "sin1",
- "sjc1",
- "tyo1"
- ],
- "identity_api_version": "2"
- }
-}
diff --git a/os_client_config/vendors/dreamcompute.json b/os_client_config/vendors/dreamcompute.json
deleted file mode 100644
index 8244cf7..0000000
--- a/os_client_config/vendors/dreamcompute.json
+++ /dev/null
@@ -1,11 +0,0 @@
-{
- "name": "dreamcompute",
- "profile": {
- "auth": {
- "auth_url": "https://iad2.dream.io:5000"
- },
- "identity_api_version": "3",
- "region_name": "RegionOne",
- "image_format": "raw"
- }
-}
diff --git a/os_client_config/vendors/dreamhost.json b/os_client_config/vendors/dreamhost.json
deleted file mode 100644
index ea2ebac..0000000
--- a/os_client_config/vendors/dreamhost.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "name": "dreamhost",
- "profile": {
- "status": "deprecated",
- "message": "The dreamhost profile is deprecated. Please use the dreamcompute profile instead",
- "auth": {
- "auth_url": "https://keystone.dream.io"
- },
- "identity_api_version": "3",
- "region_name": "RegionOne",
- "image_format": "raw"
- }
-}
diff --git a/os_client_config/vendors/elastx.json b/os_client_config/vendors/elastx.json
deleted file mode 100644
index 1e72482..0000000
--- a/os_client_config/vendors/elastx.json
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "name": "elastx",
- "profile": {
- "auth": {
- "auth_url": "https://ops.elastx.net:5000"
- },
- "identity_api_version": "3",
- "region_name": "regionOne"
- }
-}
diff --git a/os_client_config/vendors/entercloudsuite.json b/os_client_config/vendors/entercloudsuite.json
deleted file mode 100644
index c58c478..0000000
--- a/os_client_config/vendors/entercloudsuite.json
+++ /dev/null
@@ -1,16 +0,0 @@
-{
- "name": "entercloudsuite",
- "profile": {
- "auth": {
- "auth_url": "https://api.entercloudsuite.com/"
- },
- "identity_api_version": "3",
- "image_api_version": "1",
- "volume_api_version": "1",
- "regions": [
- "it-mil1",
- "nl-ams1",
- "de-fra1"
- ]
- }
-}
diff --git a/os_client_config/vendors/fuga.json b/os_client_config/vendors/fuga.json
deleted file mode 100644
index 388500b..0000000
--- a/os_client_config/vendors/fuga.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "name": "fuga",
- "profile": {
- "auth": {
- "auth_url": "https://identity.api.fuga.io:5000",
- "user_domain_name": "Default",
- "project_domain_name": "Default"
- },
- "regions": [
- "cystack"
- ],
- "identity_api_version": "3",
- "volume_api_version": "3"
- }
-}
diff --git a/os_client_config/vendors/ibmcloud.json b/os_client_config/vendors/ibmcloud.json
deleted file mode 100644
index 90962c6..0000000
--- a/os_client_config/vendors/ibmcloud.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "name": "ibmcloud",
- "profile": {
- "auth": {
- "auth_url": "https://identity.open.softlayer.com"
- },
- "volume_api_version": "2",
- "identity_api_version": "3",
- "regions": [
- "london"
- ]
- }
-}
diff --git a/os_client_config/vendors/internap.json b/os_client_config/vendors/internap.json
deleted file mode 100644
index b67fc06..0000000
--- a/os_client_config/vendors/internap.json
+++ /dev/null
@@ -1,17 +0,0 @@
-{
- "name": "internap",
- "profile": {
- "auth": {
- "auth_url": "https://identity.api.cloud.iweb.com"
- },
- "regions": [
- "ams01",
- "da01",
- "nyj01",
- "sin01",
- "sjc01"
- ],
- "identity_api_version": "3",
- "floating_ip_source": "None"
- }
-}
diff --git a/os_client_config/vendors/otc.json b/os_client_config/vendors/otc.json
deleted file mode 100644
index b0c1b11..0000000
--- a/os_client_config/vendors/otc.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "name": "otc",
- "profile": {
- "auth": {
- "auth_url": "https://iam.%(region_name)s.otc.t-systems.com/v3"
- },
- "regions": [
- "eu-de"
- ],
- "identity_api_version": "3",
- "image_format": "vhd"
- }
-}
diff --git a/os_client_config/vendors/ovh.json b/os_client_config/vendors/ovh.json
deleted file mode 100644
index f17dc2b..0000000
--- a/os_client_config/vendors/ovh.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "name": "ovh",
- "profile": {
- "auth": {
- "auth_url": "https://auth.cloud.ovh.net/"
- },
- "regions": [
- "BHS1",
- "GRA1",
- "SBG1"
- ],
- "identity_api_version": "3",
- "floating_ip_source": "None"
- }
-}
diff --git a/os_client_config/vendors/rackspace.json b/os_client_config/vendors/rackspace.json
deleted file mode 100644
index 6a4590f..0000000
--- a/os_client_config/vendors/rackspace.json
+++ /dev/null
@@ -1,29 +0,0 @@
-{
- "name": "rackspace",
- "profile": {
- "auth": {
- "auth_url": "https://identity.api.rackspacecloud.com/v2.0/"
- },
- "regions": [
- "DFW",
- "HKG",
- "IAD",
- "ORD",
- "SYD",
- "LON"
- ],
- "database_service_type": "rax:database",
- "compute_service_name": "cloudServersOpenStack",
- "image_api_use_tasks": true,
- "image_format": "vhd",
- "floating_ip_source": "None",
- "secgroup_source": "None",
- "requires_floating_ip": false,
- "volume_api_version": "1",
- "disable_vendor_agent": {
- "vm_mode": "hvm",
- "xenapi_use_agent": false
- },
- "has_network": false
- }
-}
diff --git a/os_client_config/vendors/switchengines.json b/os_client_config/vendors/switchengines.json
deleted file mode 100644
index 46f6325..0000000
--- a/os_client_config/vendors/switchengines.json
+++ /dev/null
@@ -1,15 +0,0 @@
-{
- "name": "switchengines",
- "profile": {
- "auth": {
- "auth_url": "https://keystone.cloud.switch.ch:5000/v2.0"
- },
- "regions": [
- "LS",
- "ZH"
- ],
- "volume_api_version": "1",
- "image_api_use_tasks": true,
- "image_format": "raw"
- }
-}
diff --git a/os_client_config/vendors/ultimum.json b/os_client_config/vendors/ultimum.json
deleted file mode 100644
index 4bfd088..0000000
--- a/os_client_config/vendors/ultimum.json
+++ /dev/null
@@ -1,11 +0,0 @@
-{
- "name": "ultimum",
- "profile": {
- "auth": {
- "auth_url": "https://console.ultimum-cloud.com:5000/"
- },
- "identity_api_version": "3",
- "volume_api_version": "1",
- "region-name": "RegionOne"
- }
-}
diff --git a/os_client_config/vendors/unitedstack.json b/os_client_config/vendors/unitedstack.json
deleted file mode 100644
index ac8be11..0000000
--- a/os_client_config/vendors/unitedstack.json
+++ /dev/null
@@ -1,16 +0,0 @@
-{
- "name": "unitedstack",
- "profile": {
- "auth": {
- "auth_url": "https://identity.api.ustack.com/v3"
- },
- "regions": [
- "bj1",
- "gd1"
- ],
- "volume_api_version": "1",
- "identity_api_version": "3",
- "image_format": "raw",
- "floating_ip_source": "None"
- }
-}
diff --git a/os_client_config/vendors/vexxhost.json b/os_client_config/vendors/vexxhost.json
deleted file mode 100644
index 10e131d..0000000
--- a/os_client_config/vendors/vexxhost.json
+++ /dev/null
@@ -1,16 +0,0 @@
-{
- "name": "vexxhost",
- "profile": {
- "auth": {
- "auth_url": "https://auth.vexxhost.net"
- },
- "regions": [
- "ca-ymq-1"
- ],
- "dns_api_version": "1",
- "identity_api_version": "3",
- "image_format": "raw",
- "floating_ip_source": "None",
- "requires_floating_ip": false
- }
-}
diff --git a/os_client_config/vendors/zetta.json b/os_client_config/vendors/zetta.json
deleted file mode 100644
index 44e9711..0000000
--- a/os_client_config/vendors/zetta.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
- "name": "zetta",
- "profile": {
- "auth": {
- "auth_url": "https://identity.api.zetta.io/v3"
- },
- "regions": [
- "no-osl1"
- ],
- "identity_api_version": "3",
- "dns_api_version": "2"
- }
-}
diff --git a/releasenotes/notes/thin-shim-62c8e6f6942b83a5.yaml b/releasenotes/notes/thin-shim-62c8e6f6942b83a5.yaml
new file mode 100644
index 0000000..32298fa
--- /dev/null
+++ b/releasenotes/notes/thin-shim-62c8e6f6942b83a5.yaml
@@ -0,0 +1,13 @@
+---
+prelude: >
+ os-client-config is now a thin shim around openstacksdk. It exists and
+  will continue to exist to provide backward compatibility.
+upgrade:
+ - |
+  ``get_region_name`` no longer supports per-service region name overrides.
+ An ``os_client_config.cloud_config.CloudConfig`` object represents a region
+ of a cloud. The support was originally added for compatibility with the
+ similar feature in openstacksdk's ``Profile``. Both the support for
+ service regions and the ``Profile`` object have been removed from
+ openstacksdk, so there is no need to attempt to add a compatibility layer
+ here as there is nothing that has the ability to consume it.
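A hypothetical illustration of the upgrade note above, reusing names from the removed unit test: a per-service key such as compute_region_name is no longer consulted, so both calls return the object's own region.

    from os_client_config import cloud_config

    cc = cloud_config.CloudConfig(
        'test1', 'region-al', {'compute_region_name': 'region-bl'})
    print(cc.get_region_name())           # 'region-al'
    print(cc.get_region_name('compute'))  # also 'region-al'; the override is ignored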
diff --git a/requirements.txt b/requirements.txt
index b8f674c..5bfd5d8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,7 +1,4 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-PyYAML>=3.12 # MIT
-appdirs>=1.3.0 # MIT License
-keystoneauth1>=3.4.0 # Apache-2.0
-requestsexceptions>=1.2.0 # Apache-2.0
+openstacksdk>=0.13.0 # Apache-2.0