path: root/nova/quota.py
Commit message | Author | Age | Files | Lines
* Unify placement client singleton implementations (Dan Smith, 2022-08-18, 1 file, -5/+2)

  We have many places where we implement singleton behavior for the placement client. This unifies them into a single place and implementation. Not only does this DRY things up, but it may cause us to initialize the client fewer times, and it also allows emitting a common set of error messages about expected failures, for better troubleshooting.

  Change-Id: Iab8a791f64323f996e1d6e6d5a7e7a7c34eb4fb3
  Related-Bug: #1846820
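The unified singleton behavior described above can be sketched as follows. This is a minimal, illustrative pattern only: `PlacementClient` and `get_placement_client` are hypothetical stand-ins, not nova's actual class or function names.

```python
import threading

class PlacementClient:
    """Stand-in for the real placement report client (illustrative)."""

_CLIENT = None
_LOCK = threading.Lock()

def get_placement_client():
    # Lazily construct one shared client; double-checked locking keeps
    # construction to (at most) a single instance under concurrency.
    global _CLIENT
    if _CLIENT is None:
        with _LOCK:
            if _CLIENT is None:
                _CLIENT = PlacementClient()
    return _CLIENT
```

Centralizing construction like this is also what makes it possible to emit one common set of error messages from a single code path.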
* Follow up for unified limits (melanie witt, 2022-03-04, 1 file, -0/+20)

  This addresses remaining comments from the unified limits series: add type hints to new code and add a docstring to the is_qfd_populated() method in nova/quota.py.

  Related to blueprint unified-limits-nova
  Change-Id: I948647b04b260e888a4c71c1fa3c2a7be5d140c5
* Update quota apis with keystone limits and usage (John Garbutt, 2022-02-24, 1 file, -21/+40)

  This makes use of the keystone APIs to get limits from Keystone when showing the user the limits on their project.

  Note we also change the default in_use amount from -1, which is what the noop driver originally used, to 0, which matches what the db driver typically returns for deprecated quota values, like floating ip limits. This seems a saner value to respond with, given we don't count the usage for those values.

  blueprint unified-limits-nova
  Change-Id: I933dc135a364b14ddadc8eee67b42d8e1278a9ae
* Tell oslo.limit how to count nova resources (John Garbutt, 2022-02-24, 1 file, -8/+12)

  A follow-on patch will use this code to enforce the limits; this patch provides integration with oslo.limit and a new internal nova API that is able to enforce those limits.

  The first part is providing a callback for oslo.limit to be able to count the resources being used. We only count resources grouped by project_id. For counting servers, we make use of the instance mappings list in the api database, just as the existing quota code does. While we do check to ensure the queued-for-delete migration has been completed, we simply error out if that is not the case, rather than attempting to fall back to any other counting system. We hope one day we can count this in placement using consumer records, or similar.

  For counting all other resource usage, resources must refer to some usage of a resource class being consumed in placement. This is similar to how the count-with-placement variant of the existing counting code works today. This is not restricted to RAM and VCPU; it is open to any resource class that is known to placement.

  The second part is the enforcement method, which keeps a similar signature to the existing enforce_num_instances call that is used to check quotas using the legacy quota system. From the flavor we extract the current resource usage. This is considered the simplest first step that helps us deliver Ironic limits alongside all the existing RAM and VCPU limits. At a later date, we would ideally get passed a more complete view of what resources are being requested from placement.

  NOTE: given the instance object doesn't exist when enforce is called, we can't just pass the instance in here.

  A [workarounds] option is also available for operators who need the legacy quota usage behavior where VCPU = VCPU + PCPU.

  blueprint unified-limits-nova
  Change-Id: I272b59b7bc8975bfd602640789f80d2d5f7ee698
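The two parts described above — a per-project usage callback plus an enforce step — can be sketched like this. This is a simplified shape only: the `USAGE`/`LIMITS` data and the `count_usage`/`enforce` names are hypothetical; the real callback counts instance mappings for servers and placement allocations for other resource classes, and enforcement goes through oslo.limit rather than a plain dict lookup.

```python
# Illustrative fixed usage data, keyed by (project_id, resource name).
USAGE = {("proj-1", "servers"): 2, ("proj-1", "class:VCPU"): 8}
LIMITS = {"servers": 10, "class:VCPU": 16}

def count_usage(project_id, resource_names):
    """Callback shape: return per-project usage for each named resource."""
    return {name: USAGE.get((project_id, name), 0)
            for name in resource_names}

def enforce(project_id, deltas):
    """Fail if current usage plus the requested deltas exceeds a limit."""
    usage = count_usage(project_id, list(deltas))
    for name, delta in deltas.items():
        if usage[name] + delta > LIMITS.get(name, 0):
            raise ValueError("over quota for %s" % name)

enforce("proj-1", {"servers": 1, "class:VCPU": 4})  # fits within limits
```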
* Update quota sets APIs (John Garbutt, 2022-02-24, 1 file, -0/+3)

  Ensure the limit related APIs reflect the new reality of enforcing the API and DB limits based on keystone only. For now we skip all updates to the DB, as none of them mean anything to the new code, which only looks at keystone now.

  Note: this will need to be updated again once we add limits for cores, ram, instances, etc.

  blueprint unified-limits-nova
  Change-Id: I5ef968395b4bdc6f190e239a19a723316b1d5baf
* Update limit APIs (John Garbutt, 2022-02-24, 1 file, -0/+30)

  Ensure the limit related APIs reflect the new reality of enforcing the API and DB limits based on keystone. Do this by implementing the get_project_quotas and get_user_quotas methods. This still leaves get_settable_quotas and get_defaults, which are needed to fix the quota sets APIs.

  Note: this will need to be updated again once we add limits for cores, ram, instances, etc.

  blueprint unified-limits-nova
  Change-Id: I3c4c29740e6275449887c4136d2467eade04fb06
* Update quota_class APIs for db and api limits (John Garbutt, 2022-02-24, 1 file, -0/+25)

  Implement a unified limits specific version of get_class_quotas as used by the quota_class API. This simply returns the limits defined in keystone that are now enforced when you enable unified limits.

  Note: this will need to be updated again once we add limits to things that use resource_class, etc.

  blueprint unified-limits-nova
  Change-Id: If9901662d30d15da13303a3da051e1b9fded72c0
* Make unified limits APIs return reserved of 0 (John Garbutt, 2022-02-24, 1 file, -3/+13)

  Currently the noop driver returns a reserved value of -1, and before this patch the same was true of the unified limits driver. Given the way the quota system currently works, the reserved value can be ignored.

  With unified limits, we want to make the API look as identical as possible to the DB driver when the same limits are applied with the unified limits driver. As such, we change the API to return a reserved value of 0 for the unified limits driver.

  Longer term, when the API gets a new microversion to tidy up quotas, the reserved value will likely be removed in that microversion, because it no longer has any real meaning.

  blueprint unified-limits-nova
  Change-Id: I28212857313ae72903d2139884750d5de690c6bd
* Assert quota related API behavior when noop (John Garbutt, 2022-02-24, 1 file, -4/+14)

  Add tests so it's clear what happens with the noop driver when using the quota APIs.

  To make the unit tests work, we had to make the caching of the quota driver slightly more dynamic. We verify that the current config matches the currently cached driver, and reload the driver if there is a mismatch. This also preserves the ability of some unit tests to pass in a fake quota driver.

  We also test the current unified limits driver, as it is currently identical in behaviour to the noop driver. As things evolve the tests will diverge, but they will show the common approach to what is returned from the API in both cases.

  blueprint unified-limits-nova
  Change-Id: If3c58d6cbf0a0aee62766c7142beab165c1fb9a4
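The "slightly more dynamic" driver caching described above — cache the driver, but reload it when the configured name no longer matches — can be sketched as follows. The class names and the `driver_for` method are illustrative stand-ins, not nova's actual quota engine API.

```python
class DbQuotaDriver:
    """Stand-in for the DB-backed quota driver (illustrative)."""

class NoopQuotaDriver:
    """Stand-in for the no-op quota driver (illustrative)."""

_DRIVERS = {"db": DbQuotaDriver, "noop": NoopQuotaDriver}

class QuotaEngine:
    def __init__(self):
        self._driver = None
        self._driver_name = None

    def driver_for(self, configured_name):
        # Reload only when the configured driver name no longer matches
        # the cached one: tests that flip config get a fresh driver,
        # while normal callers keep hitting the cache.
        if self._driver is None or self._driver_name != configured_name:
            self._driver = _DRIVERS[configured_name]()
            self._driver_name = configured_name
        return self._driver
```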
* Add stub unified limits driver (John Garbutt, 2022-02-24, 1 file, -0/+13)

  The unified limits driver starts out from the noop driver. This gives us the closest API behaviour to what we describe in the spec. The unified limits quota driver has several purposes:

  * stop all existing quota enforcement, so we can replace it
  * stop checking the database for quota info
  * make the API do what it does today with the noop driver enabled

  The next few patches will assert the existing API behaviour with the unified limits quota driver. This is the exact same thing that happens today when you enable the noop driver. As we add back limits, using the new unified limits approach, we will update the API so users are informed about what limits are actually being enforced.

  blueprint unified-limits-nova
  Change-Id: Iab152a6b2bb58454c32889390ec9add43771fa62
* db: Post reshuffle cleanup (Stephen Finucane, 2021-08-09, 1 file, -3/+7)

  Introduce a new 'nova.db.api.api' module to hold API database-specific helpers, plus a generic 'nova.db.utils' module to hold code suitable for both main and API databases. This highlights a level of complexity around connection management that is present for the main database but not for the API database. This is because we need to handle the complexity of cells for the former but not the latter.

  Change-Id: Ia5304c552ce552ae3c5223a2bfb3a9cd543ec57c
  Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* db: Move remaining 'nova.db.sqlalchemy' modules (Stephen Finucane, 2021-08-09, 1 file, -1/+1)

  The two remaining modules, 'api_models' and 'api_migrations', are moved to the new 'nova.db.api' module.

  Change-Id: I138670fe36b07546db5518f78c657197780c5040
  Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* db: Unify 'nova.db.api', 'nova.db.sqlalchemy.api' (Stephen Finucane, 2021-08-09, 1 file, -3/+2)

  Merge these, removing an unnecessary layer of abstraction, and place them in the new 'nova.db.main' directory. The resulting change is huge, but it's mainly the result of 's/sqlalchemy import api/main import api/' and 's/nova.db.api/nova.db.main.api/' with some necessary cleanup.

  We also need to rework how we do the blocking of API calls, since we no longer have a 'DBAPI' object that we can monkey patch as we were doing before. This is now done via a global variable that is set by the 'main' function of 'nova.cmd.compute'.

  The main impact of this change is that it's no longer possible to set '[database] use_db_reconnect' and have all APIs automatically wrapped in a DB retry. Seeing as this behavior is experimental, isn't applied to any of the API DB methods (which don't use oslo.db's 'DBAPI' helper), and is used explicitly in what would appear to be the critical cases (via the explicit 'oslo_db.api.wrap_db_retry' decorator), this doesn't seem like a huge loss.

  Change-Id: Iad2e4da4546b80a016e477577d23accb2606a6e4
  Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* db: Use module-level imports for sqlalchemy (for real) (Stephen Finucane, 2021-07-05, 1 file, -9/+6)

  Change If90d9295b231166a28c2cc350d324691821a696b kicked off this effort but only changed the migrations. This change completes the job.

  Change-Id: Ic0f2c326ebce8d7c89b0debf5225cbe471daca03
  Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Make quotas respect instance_list_per_project_cells (Mohammed Naser, 2020-05-15, 1 file, -2/+6)

  This option was introduced in order to limit queries to cells which belong to the project we're interacting with. However, the nova quota code was not respecting it, and therefore it always tried to get the cells assigned to a project even with the option disabled.

  This patch makes the quota code respect that option and adds testing to ensure that, with the option enabled, the code searches only the cells for this project rather than all cells.

  Closes-Bug: #1878979
  Change-Id: I2e0d48e799e70d550f912ad8a424c86df3ade3a2
* nova-net: Remove remaining nova-network quotas (Stephen Finucane, 2019-12-02, 1 file, -61/+15)

  The 'security_group_rules' [1], 'floating_ips' [2], 'fixed_ips' [3] and 'security_groups' [4] quotas are all nova-network only and can be removed or, more specifically, set to unlimited and ignored until we eventually bump our minimum API microversion.

  [1] https://review.opendev.org/477700
  [2] https://review.opendev.org/457862
  [3] https://review.opendev.org/457861
  [4] https://review.opendev.org/457860

  Change-Id: I9a5362fdf29e3680c59f620c585f3d730e4f6adb
  Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* nova-net: Remove 'networks' quota (Stephen Finucane, 2019-11-22, 1 file, -12/+1)

  With the impending removal of nova-network, there's nothing that should be using this. Remove it.

  Change-Id: I352b71b5976d008c2b8fab8a6d6939c0e0b305be
  Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
* Log quota legacy method warning only if counting from placement (melanie witt, 2019-06-17, 1 file, -2/+2)

  The log warning message is being emitted unconditionally and should be limited to only when CONF.quota.count_usage_from_placement = True. This fixes the issue and adds missing unit test coverage that would have caught it.

  Closes-Bug: #1833130
  Change-Id: I00ee0bd99c4f31a2b73fc69373fefb56740f5425
* Follow up for counting quota usage from placement (melanie witt, 2019-05-31, 1 file, -12/+11)

  This addresses comments from the series:

  * Remove usage-specific info from docstring
  * Add note to nova-next job description "changelog"
  * Add info about data migration to config option help
  * Consolidate code under count_usage_from_placement conditional
  * Consolidate variables for checking data migration doneness
  * Remove hard-coded user_id and project_id from func test
  * Re-word code comment about checking data migration doneness

  Related to blueprint count-quota-usage-from-placement
  Change-Id: Ida2de9256fcc9e092fb9977b8ac067fc1472c316
* Use instance mappings to count server group members (melanie witt, 2019-05-31, 1 file, -6/+44)

  This adds a get_count_by_uuids_and_user() method to the InstanceMapping object and uses it to count instance mappings for the purpose of counting quota usage for server group members. By counting server group members via instance mappings, the count is resilient to down cells in a multi-cell environment.

  Part of blueprint count-quota-usage-from-placement
  Change-Id: I3ff39d5ed99a68ad8678e5ff62b343f3018b4768
* Count instances from mappings and cores/ram from placement (melanie witt, 2019-05-23, 1 file, -8/+124)

  This counts instance mappings for counting quota usage for instances, and adds calls to placement for counting quota usage for cores and ram. During an upgrade, if any un-migrated instance mappings are found (with NULL user_id or NULL queued_for_delete fields), we will fall back to the legacy counting method.

  Counting quota usage from placement is opt-in via the [quota]count_usage_from_placement configuration option because:

  * Though beneficial for multi-cell deployments to be resilient to down cells, the vast majority of deployments are single cell, will not realize a down-cells resiliency benefit, and may prefer to keep legacy quota usage counting.
  * Usage for resizes will reflect resources being held on both the source and destination until the resize is confirmed or reverted. Operators may not want to enable counting from placement depending on whether that behavior change is problematic for them.
  * Placement does not yet support the ability to partition resource providers from multiple Nova deployments, so environments that share a single placement deployment would see usage that aggregates all Nova deployments together. Such environments should not enable counting from placement.
  * Usage for unscheduled instances in ERROR state will not reflect resource consumption for cores and ram, because the instance has no placement allocations.
  * Usage for instances in SHELVED_OFFLOADED state will not reflect resource consumption for cores and ram, because the instance has no placement allocations. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by that server.

  Part of blueprint count-quota-usage-from-placement
  Change-Id: Ie22b0acb5824a41da327abdcf9848d02fc9a92f5
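The upgrade fallback described above — count from instance mappings unless any un-migrated (NULL-field) rows remain — can be sketched like this. The `MAPPINGS` rows and `count_instances` helper are hypothetical simplifications; the real code queries the API database and signals the fallback differently.

```python
# Illustrative instance-mapping rows; a None user_id or
# queued_for_delete marks a row the online data migration has not
# reached yet.
MAPPINGS = [
    {"project_id": "p1", "user_id": "u1", "queued_for_delete": False},
    {"project_id": "p1", "user_id": "u2", "queued_for_delete": True},
    {"project_id": "p2", "user_id": None, "queued_for_delete": None},
]

def count_instances(project_id, rows=MAPPINGS):
    """Count live instances from mappings; None means 'use legacy counting'."""
    project_rows = [r for r in rows if r["project_id"] == project_id]
    if any(r["user_id"] is None or r["queued_for_delete"] is None
           for r in project_rows):
        # Un-migrated rows found: fall back to legacy per-cell counting.
        return None
    # Mappings queued for delete are excluded from usage.
    return sum(1 for r in project_rows if not r["queued_for_delete"])
```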
* Merge "quota: remove defaults kwarg in get_project_quotas" (Zuul, 2019-02-26, 1 file, -39/+9)

  * quota: remove defaults kwarg in get_project_quotas (Jay Pipes, 2018-11-01, 1 file, -39/+9)

    Both QuotaDriver.get_project_quotas() and QuotaDriver.get_user_quotas() had a defaults kwarg that was never used. The only call sites for these methods are in the limits and quota-sets API handlers, and none of them overrides the defaults kwarg. So, remove it; it was only cluttering up the interface.

    Change-Id: I0b7acbe7bd818ef313aefcc5f8d58277d101ce3f
* Merge "quota: remove QuotaEngine.register_resources()" (Zuul, 2019-02-25, 1 file, -39/+46)

  * quota: remove QuotaEngine.register_resources() (Jay Pipes, 2018-11-01, 1 file, -39/+46)

    The QuotaEngine.register_resources() method was not necessary. I've simply added a resources kwarg to the QuotaEngine constructor, and we now construct the dict of resource objects there to initialize the resources the quota engine cares about. One more method, gone.

    Change-Id: I818bbfb9493714341283042a21d5aefd90c094cb
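Moving registration into the constructor, as described above, can be sketched as follows. The `CountableResource` stand-in and the `resources` property are illustrative, not nova's exact API.

```python
class CountableResource:
    """Illustrative stand-in for nova's quota resource classes."""
    def __init__(self, name):
        self.name = name

class QuotaEngine:
    def __init__(self, resources=None):
        # Build the resource registry up front from the constructor
        # kwarg, instead of via a separate register_resources() call.
        self._resources = {r.name: r for r in (resources or [])}

    @property
    def resources(self):
        return dict(self._resources)

engine = QuotaEngine(resources=[CountableResource("instances"),
                                CountableResource("cores")])
```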
* Make _instances_cores_ram_count() be smart about cells (Surya Seetharaman, 2018-11-13, 1 file, -2/+11)

  This makes the _instances_cores_ram_count() method only query for instances in cells that the tenant actually has instances in. We do this by getting a list of cell mappings that have instance mappings owned by the project and limiting the scatter/gather operation to just those cells.

  Change-Id: I0e2a9b2460145d3aee92f7fddc4f4da16af63ff8
  Closes-Bug: #1771810
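The cell-targeting step above can be sketched like this. The `INSTANCE_MAPPINGS` data and `cells_for_project` helper are hypothetical; the real code queries cell mappings in the API database.

```python
# Illustrative API-database state: which cell each project's instance
# mappings live in.
INSTANCE_MAPPINGS = [
    {"project_id": "p1", "cell": "cell1"},
    {"project_id": "p1", "cell": "cell1"},
    {"project_id": "p2", "cell": "cell2"},
]
ALL_CELLS = ["cell1", "cell2", "cell3"]

def cells_for_project(project_id):
    # Scatter/gather only to cells where the project actually has
    # instances, rather than querying every cell in ALL_CELLS.
    return sorted({m["cell"] for m in INSTANCE_MAPPINGS
                   if m["project_id"] == project_id})
```

With this, a tenant whose instances all live in one cell triggers one cell query instead of one per cell.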
* Merge "Refactor scatter-gather utility to return exception objects" (Zuul, 2018-11-03, 1 file, -2/+1)

  * Refactor scatter-gather utility to return exception objects (Surya Seetharaman, 2018-10-31, 1 file, -2/+1)

    The scatter-gather utility returns a raised_exception_sentinel for all kinds of exceptions that are caught, and often there may be situations where we have to handle different types of exceptions differently. To facilitate that, it is more useful to return the Exception object itself instead of the dummy raised_exception_sentinel, so that we can handle each result differently based on its exception type.

    Related to blueprint handling-down-cell
    Change-Id: I861b223ee46b0f0a31f646a4b45f8a02410253cf
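The refactor above can be sketched as follows. This is a sequential simplification (nova's real utility fans out to cells concurrently), and the function names are illustrative.

```python
def scatter_gather(cells, fn):
    """Run fn against each cell; store a raised exception object itself
    (not an opaque sentinel) so callers can branch on its type."""
    results = {}
    for cell in cells:
        try:
            results[cell] = fn(cell)
        except Exception as exc:
            results[cell] = exc
    return results

def query_cell(cell):
    if cell == "down-cell":
        raise TimeoutError(cell)
    return 42

results = scatter_gather(["up-cell", "down-cell"], query_cell)
# Callers can now distinguish, e.g., a timed-out (down) cell from a
# cell that raised some other error.
down = [c for c, r in results.items() if isinstance(r, TimeoutError)]
```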
* quota: remove default kwarg on get_class_quotas() (Jay Pipes, 2018-10-24, 1 file, -18/+6)

  No code was calling the QuotaDriver.get_class_quotas() method with anything other than the default=True kwarg value (other than unit tests), so this patch removes that complexity.

  Change-Id: If330320e92ba9249ccdad14582119923ac57f885
* quota: remove QuotaDriver.destroy_all_by_project() (Jay Pipes, 2018-10-24, 1 file, -59/+0)

  Removes the unnecessary QuotaDriver.destroy_all_by_project() and destroy_all_by_project_and_users() methods. nova.objects.Quotas already had these methods, and this patch changes the one place that was calling the old QuotaDriver method (from the nova API /quota_sets endpoint) to just call the nova.objects.Quotas methods of the same name.

  Note that the NoopQuotaDriver's destroy_all_by_project() and destroy_all_by_project_and_user() methods were no-ops. Now the quota-sets API will be calling the objects.Quotas.destroy_xxx() methods, which will raise ProjectUserQuotaNotFound instead of returning a 204. If the user is calling DELETE /os-quota-sets with the Noop quota driver configured and the response is a 404 Not Found, do we really care? In the future, we should be getting rid of the os-quota-sets API entirely and using Keystone's /limits API.

  One more set of methods gone from QuotaDriver.

  Change-Id: Ifc0a409bd179807db18f2e7b59ea9d4d67e9a798
* quota: remove unused Quota driver methods (Jay Pipes, 2018-10-24, 1 file, -47/+0)

  The BaseResource.quota() method was the only caller of the QuotaDriver.get_by_project(), get_by_project_and_user() and get_by_class() methods, and it was itself only called from unit tests. The last patch in this series removed BaseResource.quota(), so these QuotaDriver methods can now be removed, since they have no callers other than unit tests.

  Change-Id: Iec72af79867c1d992ad109a0d0528431c61f22b4
* quota: remove unused code (Jay Pipes, 2018-10-24, 1 file, -46/+0)

  The BaseResource.quota() method was not being used anywhere other than unit tests. Remove it.

  Change-Id: I7a4d63aa11dc1883ed2f924f85b97b0dec30f9d3
* Merge "Avoid joins in _server_group_count_members_by_user" (Zuul, 2018-08-07, 1 file, -1/+2)

  * Avoid joins in _server_group_count_members_by_user (Matt Riedemann, 2018-07-06, 1 file, -1/+2)

    When pulling instances out of the cell databases we only care about counts, so there is no need for the default joins on the security_groups and instance_info_caches tables from InstanceList.get_by_filters. The default joins happen in instance_get_all_by_filters_sort in the DB API.

    Change-Id: I04cff5f9b4bcca1b1d143b82d7490cbc4d88ebe4
* Merge "Fix server_group_members quota check" (Zuul, 2018-07-11, 1 file, -1/+14)

  * Fix server_group_members quota check (Chen, 2018-07-06, 1 file, -1/+14)

    For example, say there are 3 instances in a server group whose quota is 5. When multi-creating 3 more instances in this group (which would give it 6 members), the current quota checking scheme fails to prevent this, which is not expected. This is because the server_group_members quota check previously only counted group members that existed as instance records in cell databases, and did not account for build requests, which are the temporary representation of an instance in the API database before the instance is scheduled to a cell.

    Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>
    Change-Id: If439f4486b8fe157c436c47aa408608e639a3e15
    Closes-Bug: #1780373
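The fixed check above — counting build requests alongside instance records — can be sketched like this. The function name and signature are illustrative, not nova's actual helper.

```python
def check_group_members(cell_count, build_request_count, requested, quota):
    """Reject when scheduled members plus pending build requests plus
    the new request would exceed the server group quota."""
    if cell_count + build_request_count + requested > quota:
        raise ValueError("server group member quota exceeded")

# 3 existing members, quota 5: asking for 2 more still fits ...
check_group_members(3, 0, 2, quota=5)
# ... while asking for 3 more must fail, even if some of the existing
# members are still only build requests (the pre-fix blind spot).
```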
* Use nova.db.api directly (Chris Dent, 2018-07-10, 1 file, -1/+1)

  nova/db/__init__.py was importing * from nova.db.api. This meant that any time any code anywhere within the nova.db package was imported, nova.db.api was too, leading to a cascade of imports that may not have been desired. Also, in general, code in __init__.py is a pain.

  Therefore, this change adjusts code so that either:

  * nova.db.api is used directly, or
  * nova.db.api is imported as 'db'

  In either case, the functionality remains the same. The primary goal of this change was to make it possible to import the model files without having to import the db api. Moving the model files to a different place in the directory hierarchy was considered, but given that "code in __init__.py is a pain" this mode was chosen.

  This looks like a very large change, but it is essentially adjusting package names, many in mocks.

  Change-Id: Ic1fd7c87ceda05eeb96735da2a415ef37060bb1a
* Restrict CONF.quota.driver to DB and noop quota drivers (Matt Riedemann, 2018-06-01, 1 file, -9/+5)

  The quota driver config option was deprecated in Newton via change 430638888c987a99e537e2ac956087a7310ecdc6. That was a bit wrong, in that the config option shouldn't have been deprecated since we still use it; what we wanted to deprecate was the ability to load custom quota driver classes, which is done in this change.

  To be clear, this doesn't remove the option or the ability to configure the quota driver; it only removes the ability to class-load out-of-tree quota drivers.

  Change-Id: I021a2bcb923739409b393cbb2684ffdf20180f73
* Remove unnecessary variables (Takashi NATSUME, 2018-02-08, 1 file, -1/+1)

  The 'cell_uuid' variable is not used at all, so it can be removed.

  TrivialFix
  Change-Id: I42c658ba076f4f867505f80218de848ebc4161f8
* Follow up on removing old-style quotas code (melanie witt, 2017-12-08, 1 file, -2/+5)

  This is a follow-up to another patch [1] that removed the old-style quotas code, which is no longer in use. Here, we remove the 'reserved' key from quotas internally and set it in the APIs where it's expected. We also remove code that accesses the quota_usages and reservations tables, as they're no longer used.

  [1] https://review.openstack.org/#/c/511689

  Change-Id: I75291571468ddb79b7561810de0953bb462548e3
* Remove old-style quotas code (melanie witt, 2017-11-30, 1 file, -539/+2)

  In Pike, we re-architected quotas to count resources for quota usage instead of tracking it separately, and stopped using reservations. During an upgrade to Queens, old (Pike) computes will be using new-style quotas, and so the old-style quotas code is no longer needed.

  Change-Id: Ie01ab1c3a1219f1d123f0ecedc66a00dfb2eb2c1
* Remove useless periodic task that expires quota reservations (Kevin_Zheng, 2017-09-23, 1 file, -32/+0)

  Quota reservations were removed in the Pike release, but there is still a periodic task in the scheduler manager that runs every minute to expire reservations, which won't actually do anything. This patch removes that periodic task and related code.

  Change-Id: Idae069e8cf6ce69e112de08a22c94b6b590f9a69
  Closes-bug: #1719048
* Merge "Enhance comments on CountableResource" (Jenkins, 2017-08-15, 1 file, -3/+4)

  * Enhance comments on CountableResource (jichenjc, 2017-05-28, 1 file, -3/+4)

    The comments for CountableResource give a misleading sample list; it is better to give exactly the same resources in the sample list, like security_group_member or keypair.

    Change-Id: Ie36498f33a4f3cb5d9175b7d7044d35601cb7b6d
* Make Quotas object favor the API database (melanie witt, 2017-07-20, 1 file, -30/+28)

  This makes the Quotas object load first from the API database, falling back to the main database as necessary. Creates happen in the API database only now.

  Part of blueprint cells-quota-api-db
  Change-Id: Ifc42eb55033f4755e4a756a545eb63ce8abfec20
* Count instances to check quota (melanie witt, 2017-07-18, 1 file, -57/+56)

  This changes instances, cores, and ram from ReservableResources to CountableResources and replaces quota reserve/commit/rollback with check_deltas accordingly.

  All of the reservation and usage related unit tests are removed because:

  1. They rely on some global QuotaEngine resources being ReservableResources, and every ReservableResource has been removed.
  2. Reservations and usages are no longer in use anywhere in the codebase.

  Part of blueprint cells-count-resources-to-check-quota-in-api
  Change-Id: I9269ffa2b80e48db96c622d0dc0817738854f602
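The reserve/commit/rollback-to-check_deltas shift above can be sketched as a count-then-compare check. The `USAGE`/`LIMITS` data and the `count`/`check_deltas` helpers are simplified stand-ins; the real code counts live records (and takes a request context, project, etc.).

```python
# Illustrative current usage and limits.
USAGE = {"instances": 3, "cores": 8, "ram": 4096}
LIMITS = {"instances": 10, "cores": 20, "ram": 51200}

def count(resource):
    # Stand-in for counting live records at check time, replacing
    # separately tracked usage/reservation rows.
    return USAGE[resource]

def check_deltas(deltas):
    # Count-then-compare: no reserve/commit/rollback bookkeeping needed.
    for resource, delta in deltas.items():
        if count(resource) + delta > LIMITS[resource]:
            raise ValueError("quota exceeded for %s" % resource)

check_deltas({"instances": 1, "cores": 4, "ram": 2048})  # within limits
```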
* Count floating ips to check quota (melanie witt, 2017-06-19, 1 file, -2/+8)

  This changes floating ips from a ReservableResource to a CountableResource and replaces quota reserve/commit/rollback with check_deltas accordingly. Note that floating ip quota is only relevant to nova-network and will be obsolete when nova-network is removed.

  Part of blueprint cells-count-resources-to-check-quota-in-api
  Change-Id: I9e6c16ebe73f2af11bcc47899f25289f08c1204a
* Count fixed ips to check quota (melanie witt, 2017-06-19, 1 file, -1/+7)

  This changes fixed ips from a ReservableResource to a CountableResource and replaces quota reserve/commit/rollback with check_deltas accordingly. Note that fixed ip quota is only relevant to nova-network and will be obsolete when nova-network is removed.

  Part of blueprint cells-count-resources-to-check-quota-in-api
  Change-Id: Ia9e8142435888f6bc600e40bc7b0bf24b19576fd
* Count security groups to check quota (melanie witt, 2017-06-19, 1 file, -2/+19)

  This changes security groups from a ReservableResource to a CountableResource and replaces quota reserve/commit/rollback with check_deltas accordingly. Note that security group quota is only relevant to nova-network and will be obsolete when nova-network is removed.

  Part of blueprint cells-count-resources-to-check-quota-in-api
  Change-Id: I51b74b863fafd14ef31b1abd9fd9820deb622aeb
* Count server group members to check quota (melanie witt, 2017-06-13, 1 file, -2/+22)

  This changes server group members from a ReservableResource to a CountableResource and replaces quota reserve/commit/rollback with check_deltas accordingly.

  Part of blueprint cells-count-resources-to-check-quota-in-api
  Change-Id: I19d3dab5c849a664f2241abbeafd03efbbaa1764