Diffstat (limited to 'doc')
-rw-r--r--  doc/api_samples/os-evacuate/v2.95/server-evacuate-find-host-req.json   1
-rw-r--r--  doc/api_samples/os-evacuate/v2.95/server-evacuate-req.json             3
-rw-r--r--  doc/source/admin/compute-node-identification.rst                      83
-rw-r--r--  doc/source/admin/cpu-topologies.rst                                    91
-rw-r--r--  doc/source/admin/index.rst                                              1
-rw-r--r--  doc/source/admin/live-migration-usage.rst                               2
-rw-r--r--  doc/source/admin/remote-console-access.rst                             10
-rw-r--r--  doc/source/admin/upgrades.rst                                          20
-rw-r--r--  doc/source/cli/nova-compute.rst                                         2
-rw-r--r--  doc/source/contributor/ptl-guide.rst                                   73
10 files changed, 246 insertions, 40 deletions
diff --git a/doc/api_samples/os-evacuate/v2.95/server-evacuate-find-host-req.json b/doc/api_samples/os-evacuate/v2.95/server-evacuate-find-host-req.json
index ae9fb0a67b..8ad929226e 100644
--- a/doc/api_samples/os-evacuate/v2.95/server-evacuate-find-host-req.json
+++ b/doc/api_samples/os-evacuate/v2.95/server-evacuate-find-host-req.json
@@ -1,5 +1,4 @@
{
"evacuate": {
- "targetState": "stopped"
}
}
diff --git a/doc/api_samples/os-evacuate/v2.95/server-evacuate-req.json b/doc/api_samples/os-evacuate/v2.95/server-evacuate-req.json
index a9f809c830..d192892cdc 100644
--- a/doc/api_samples/os-evacuate/v2.95/server-evacuate-req.json
+++ b/doc/api_samples/os-evacuate/v2.95/server-evacuate-req.json
@@ -1,6 +1,5 @@
{
"evacuate": {
- "host": "testHost",
- "targetState": "stopped"
+ "host": "testHost"
}
}
diff --git a/doc/source/admin/compute-node-identification.rst b/doc/source/admin/compute-node-identification.rst
new file mode 100644
index 0000000000..31d4802d0b
--- /dev/null
+++ b/doc/source/admin/compute-node-identification.rst
@@ -0,0 +1,83 @@
+===========================
+Compute Node Identification
+===========================
+
+Nova requires that compute nodes maintain a constant and consistent identity
+during their lifecycle. With the exception of the ironic driver, starting in
+the 2023.1 release, this is achieved by use of a file, persisted on disk,
+that contains the node's unique identifier. Prior to 2023.1, a combination of
+the compute node's hostname and the :oslo.config:option:`host` value in the
+configuration file was used.
+
+From 2023.1 onward, the compute node identification file must remain
+unchanged during the lifecycle of the compute node. Changing the value or
+removing the file will result in a failure to start and may require advanced
+techniques for recovery. The file is read once at ``nova-compute`` startup, at
+which point it is validated for formatting and the corresponding node is
+located or created in the database.
+
+.. note::
+
+ Even after 2023.1, the compute node's hostname may not be changed after
+ the initial registration with the controller nodes; it is just no longer
+ used as the primary method of identification.
+
+The behavior of ``nova-compute`` is different when using the ironic driver,
+as the (UUID-based) identity and mapping of compute nodes to compute manager
+service hosts is dynamic. In that case, no single node identity is maintained
+by the compute host and thus no identity file is read or written. As a
+result, none of the sections below apply to hosts with
+:oslo.config:option:`compute_driver` set to ``ironic``.
+
+Self-provisioning of the node identity
+--------------------------------------
+
+By default, ``nova-compute`` will automatically generate and write a UUID to
+disk the first time it starts up, and will use that going forward as its
+stable identity. In the :oslo.config:option:`state_path` directory
+(``/var/lib/nova`` on most systems), a ``compute_id`` file will be created
+with a generated UUID.
+
+Since this file (and its parent directory) is writable by nova, it may be
+desirable to move it to one of the other locations in which nova looks for
+the identification file.
+
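+You can verify the resulting identity by reading the file (a quick sketch
+assuming the default ``state_path``; the UUID shown is illustrative):
+
+.. code-block:: console
+
+ $ cat /var/lib/nova/compute_id
+ 05e867e4-9d5c-4f60-9a26-3a90eac64b05
+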
+Deployment provisioning of the node identity
+--------------------------------------------
+
+In addition to the location mentioned above, nova will also search the parent
+directories of any config file in use (either the defaults or provided on
+the command line) for a ``compute_id`` file. Thus, a deployment tool may, on
+most systems, pre-provision the node's UUID by writing one to
+``/etc/nova/compute_id``.
+
+The contents of the file should be a single UUID in canonical textual
+representation with no additional whitespace or other characters. The following
+should work on most Linux systems:
+
+.. code-block:: shell
+
+ $ uuidgen > /etc/nova/compute_id
+
+.. note::
+
+ **Do not** execute the above command blindly in every run of a deployment
+ tool, as that will result in overwriting the ``compute_id`` file each time,
+ which *will* prevent nova from working properly.
+
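+If a deployment tool provisions this file, it should do so idempotently. A
+minimal sketch that only generates a UUID when no identity exists yet:
+
+.. code-block:: shell
+
+ # Provision a node identity on first deployment only; never overwrite it.
+ if [ ! -s /etc/nova/compute_id ]; then
+     uuidgen > /etc/nova/compute_id
+ fi
+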
+Upgrading from pre-2023.1
+-------------------------
+
+Before release 2023.1, ``nova-compute`` only used the hostname (combined with
+:oslo.config:option:`host`, if set) to identify its compute node objects in
+the database. When upgrading from a prior release, the compute node will
+perform a one-time migration of the hostname-matched compute node UUID to the
+``compute_id`` file in the :oslo.config:option:`state_path` location.
+
+.. note::
+
+ It is imperative that you allow the above migration to run and complete on
+ compute nodes that are being upgraded. Skipping this step by
+ pre-provisioning a ``compute_id`` file before the upgrade will **not** work
+ and will be equivalent to changing the compute node UUID after it has
+ already been created once.
diff --git a/doc/source/admin/cpu-topologies.rst b/doc/source/admin/cpu-topologies.rst
index 9770639c3a..082c88f655 100644
--- a/doc/source/admin/cpu-topologies.rst
+++ b/doc/source/admin/cpu-topologies.rst
@@ -730,6 +730,97 @@ CPU policy, meanwhile, will consume ``VCPU`` inventory.
.. _configure-hyperv-numa:
+Configuring CPU power management for dedicated cores
+----------------------------------------------------
+
+.. versionadded:: 27.0.0
+
+ This feature was introduced in the 27.0.0 (2023.1 Antelope) release.
+
+.. important::
+
+ The functionality described below is currently only supported by the
+ libvirt/KVM driver.
+
+For power saving reasons, operators can decide to let Nova manage the power
+state of CPU cores, powering them down when they are not in use. For obvious
+reasons, Nova only allows changing the power consumption of dedicated CPU
+cores, not shared ones. Accordingly, this feature relies on the
+:oslo.config:option:`compute.cpu_dedicated_set` config option to know which
+CPU cores to handle. To enable power management of dedicated cores, set the
+:oslo.config:option:`libvirt.cpu_power_management` config option to ``True``.
+
+When this option is enabled, Nova looks up the dedicated cores and powers
+them down at compute service startup. When an instance is started and pinned
+to a dedicated core, that core is powered up right before the libvirt guest
+starts. Conversely, when an instance is stopped, migrated or deleted, the
+corresponding dedicated core is powered down.
+
+There are two distinct strategies for powering cores up or down:
+
+- the default is to offline the CPU core and online it when needed.
+- an alternative strategy is to use two distinct CPU governors for the up state
+ and the down state.
+
+The strategy can be chosen using the
+:oslo.config:option:`libvirt.cpu_power_management_strategy` config option:
+``cpu_state`` selects the online/offline strategy, while ``governor`` selects
+the alternative strategy.
+We default to turning off the cores as it provides the best power savings,
+and other tools outside Nova, like tuned, can manage the governor. That being
+said, we also provide a way to automatically change the governors on the fly,
+as explained below.
+
+If the strategy is set to ``governor``, a couple of config options are
+provided to define which exact CPU governor to use for each of the up and
+down states:
+
+- :oslo.config:option:`libvirt.cpu_power_governor_low` defines the governor
+ to use for the powered-down state (defaults to ``powersave``)
+- :oslo.config:option:`libvirt.cpu_power_governor_high` defines the
+ governor to use for the powered-up state (defaults to ``performance``)
+
+.. important::
+ It is the operator's responsibility to ensure that the governors defined
+ by these configuration options are supported by the kernel of the OS
+ running the compute service.
+
+ As a side note, we recommend the ``schedutil`` governor as an alternative
+ for the high-power state (if the kernel supports it), as the CPU frequency
+ is dynamically set based on CPU task states. Other governors may be worth
+ testing, including ``conservative`` and ``ondemand``, which consume quite
+ a bit more power than ``schedutil`` but are more efficient than
+ ``performance``. See the `Linux kernel docs`_ for further explanations.
+
+.. _`Linux kernel docs`: https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt
+
+As an example, the relevant part of a ``nova.conf`` configuration would look
+like::
+
+ [compute]
+ cpu_dedicated_set=2-17
+
+ [libvirt]
+ cpu_power_management=True
+ cpu_power_management_strategy=cpu_state
+
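+If you instead opt for the ``governor`` strategy, a similar configuration
+could look like the following (the governor names shown below are the
+defaults of the two options)::
+
+ [compute]
+ cpu_dedicated_set=2-17
+
+ [libvirt]
+ cpu_power_management=True
+ cpu_power_management_strategy=governor
+ cpu_power_governor_low=powersave
+ cpu_power_governor_high=performance
+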
+.. warning::
+
+ CPU core #0 has a special meaning in most recent Linux kernels. Using it
+ for CPU pinning is always highly discouraged, and you should ensure it is
+ never power managed, as you could see surprising behavior if Nova turns
+ it off!
+
+One last important note: you may decide to change the CPU power management
+strategy during the compute node's lifecycle, or the CPU states may already
+be managed by other means. To ensure that Nova can correctly manage CPU
+performance, a couple of checks were added at startup that refuse to start
+the ``nova-compute`` service if the following rules aren't enforced:
+
+- if the operator opts for the ``cpu_state`` strategy, then all dedicated CPU
+ governors *MUST* be identical.
+- if they opt for the ``governor`` strategy, then all dedicated CPU cores
+ *MUST* be online.
+
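+Since these checks compare against the current kernel state, you can inspect
+a dedicated core through the standard Linux sysfs interface before enabling
+either strategy (core 2, the first dedicated core in the example above, is
+shown; the outputs are illustrative):
+
+.. code-block:: console
+
+ # 1 means the core is online, 0 means it is offline
+ $ cat /sys/devices/system/cpu/cpu2/online
+ 1
+ # The CPU frequency governor currently assigned to the core
+ $ cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
+ performance
+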
Configuring Hyper-V compute nodes for instance NUMA policies
------------------------------------------------------------
diff --git a/doc/source/admin/index.rst b/doc/source/admin/index.rst
index 93b4e6a554..8cb5bf7156 100644
--- a/doc/source/admin/index.rst
+++ b/doc/source/admin/index.rst
@@ -206,6 +206,7 @@ instance for these kind of workloads.
secure-boot
sev
managing-resource-providers
+ compute-node-identification
resource-limits
cpu-models
libvirt-misc
diff --git a/doc/source/admin/live-migration-usage.rst b/doc/source/admin/live-migration-usage.rst
index 783ab5e27c..32c67c2b0a 100644
--- a/doc/source/admin/live-migration-usage.rst
+++ b/doc/source/admin/live-migration-usage.rst
@@ -102,7 +102,7 @@ Manual selection of the destination host
.. code-block:: console
- $ openstack server migrate d1df1b5a-70c4-4fed-98b7-423362f2c47c --live HostC
+ $ openstack server migrate d1df1b5a-70c4-4fed-98b7-423362f2c47c --live-migration --host HostC
#. Confirm that the instance has been migrated successfully:
diff --git a/doc/source/admin/remote-console-access.rst b/doc/source/admin/remote-console-access.rst
index 015c6522d0..9b28646d27 100644
--- a/doc/source/admin/remote-console-access.rst
+++ b/doc/source/admin/remote-console-access.rst
@@ -366,6 +366,16 @@ Replace ``IP_ADDRESS`` with the IP address from which the proxy is accessible
by the outside world. For example, this may be the management interface IP
address of the controller or the VIP.
+Optionally, the :program:`nova-compute` service supports the following
+additional options to configure compression settings (algorithms and modes)
+for SPICE consoles; an illustrative configuration follows the list.
+
+- :oslo.config:option:`spice.image_compression`
+- :oslo.config:option:`spice.jpeg_compression`
+- :oslo.config:option:`spice.zlib_compression`
+- :oslo.config:option:`spice.playback_compression`
+- :oslo.config:option:`spice.streaming_mode`
+
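+As an illustrative sketch only (the values below are assumptions; consult the
+configuration option reference for the exact allowed values), such a
+configuration could look like:
+
+.. code-block:: ini
+
+ [spice]
+ image_compression = auto_glz
+ jpeg_compression = auto
+ zlib_compression = auto
+ playback_compression = true
+ streaming_mode = filter
+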
Serial
------
diff --git a/doc/source/admin/upgrades.rst b/doc/source/admin/upgrades.rst
index 00a714970b..61fd0cf258 100644
--- a/doc/source/admin/upgrades.rst
+++ b/doc/source/admin/upgrades.rst
@@ -41,21 +41,27 @@ Rolling upgrade process
To reduce downtime, the compute services can be upgraded in a rolling fashion.
It means upgrading a few services at a time. This results in a condition where
both old (N) and new (N+1) nova-compute services co-exist for a certain time
-period. Note that, there is no upgrade of the hypervisor here, this is just
+period (or even N with N+2 upgraded nova-compute services, see below).
+Note that there is no upgrade of the hypervisor here; this is just
upgrading the nova services. If reduced downtime is not a concern (or lower
complexity is desired), all services may be taken down and restarted at the
same time.
.. important::
- Nova does not currently support the coexistence of N and N+2 or greater
- :program:`nova-compute` or :program:`nova-conductor` services in the same
- deployment. The `nova-conductor`` service will fail to start when a
- ``nova-compute`` service that is older than the previous release (N-2 or
- greater) is detected. Similarly, in a :doc:`deployment with multiple cells
+ As of OpenStack 2023.1 (Antelope), Nova supports the coexistence of N and
+ N-2 (Yoga) :program:`nova-compute` or :program:`nova-conductor` services in
+ the same deployment. The ``nova-conductor`` service will fail to start when
+ a ``nova-compute`` service that is older than the support envelope is
+ detected. This varies by release, and the support envelope will be explained
+ in the release notes. Similarly, in a :doc:`deployment with multiple cells
</admin/cells>`, neither the super conductor service nor any per-cell
conductor service will start if any other conductor service in the
- deployment is older than the previous release.
+ deployment is older than the N-2 release.
+
+ Releases older than 2023.1 will only support rolling upgrades for a single
+ release difference between :program:`nova-compute` and
+ :program:`nova-conductor` services.
#. Before maintenance window:
diff --git a/doc/source/cli/nova-compute.rst b/doc/source/cli/nova-compute.rst
index f190949efa..1346dab92e 100644
--- a/doc/source/cli/nova-compute.rst
+++ b/doc/source/cli/nova-compute.rst
@@ -41,6 +41,8 @@ Files
* ``/etc/nova/policy.d/``
* ``/etc/nova/rootwrap.conf``
* ``/etc/nova/rootwrap.d/``
+* ``/etc/nova/compute_id``
+* ``/var/lib/nova/compute_id``
See Also
========
diff --git a/doc/source/contributor/ptl-guide.rst b/doc/source/contributor/ptl-guide.rst
index 813f1bc83e..b530b100bc 100644
--- a/doc/source/contributor/ptl-guide.rst
+++ b/doc/source/contributor/ptl-guide.rst
@@ -29,7 +29,11 @@ New PTL
* Get acquainted with the release schedule
- * Example: https://wiki.openstack.org/wiki/Nova/Stein_Release_Schedule
+ * Example: https://releases.openstack.org/antelope/schedule.html
+
+ * Also, note that we usually create a specific wiki page for each cycle,
+ like https://wiki.openstack.org/wiki/Nova/2023.1_Release_Schedule, but it
+ is preferred to use the main release schedule above.
Project Team Gathering
----------------------
@@ -37,30 +41,34 @@ Project Team Gathering
* Create PTG planning etherpad, retrospective etherpad and alert about it in
nova meeting and dev mailing list
- * Example: https://etherpad.openstack.org/p/nova-ptg-stein
+ * Example: https://etherpad.opendev.org/p/nova-antelope-ptg
* Run sessions at the PTG
-* Have a priorities discussion at the PTG
+* Do a retro of the previous cycle
- * Example: https://etherpad.openstack.org/p/nova-ptg-stein-priorities
+* Agree on the agenda for this release, including but not limited to:
-* Sign up for group photo at the PTG (if applicable)
+ * Number of review days, for either specs or implementation
+ * Define the Spec approval and Feature freeze dates
+ * Modify the release schedule if needed by adding the new dates.
+ As an example: https://review.opendev.org/c/openstack/releases/+/877094
+
+* Discuss the implications of the current release being `SLURP or non-SLURP`__
-* Open review runways for the cycle
+.. __: https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html
+
+* Sign up for group photo at the PTG (if applicable)
- * Example: https://etherpad.openstack.org/p/nova-runways-stein
After PTG
---------
* Send PTG session summaries to the dev mailing list
-* Make sure the cycle priorities spec gets reviewed and merged
-
- * Example: https://specs.openstack.org/openstack/nova-specs/priorities/stein-priorities.html
+* Add `RFE bugs`__ if you have action items that are simple to do but without an owner yet.
-* Run the count-blueprints script daily to gather data for the cycle burndown chart
+.. __: https://bugs.launchpad.net/nova/+bugs?field.tag=rfe
A few weeks before milestone 1
------------------------------
@@ -70,12 +78,13 @@ A few weeks before milestone 1
* Periodically check the series goals others have proposed in the “Set series
goals” link:
- * Example: https://blueprints.launchpad.net/nova/stein/+setgoals
+ * Example: https://blueprints.launchpad.net/nova/antelope/+setgoals
Milestone 1
-----------
-* Do milestone release of nova and python-novaclient (in launchpad only)
+* Do milestone release of nova and python-novaclient (in launchpad only; this
+ is optional)
* This is launchpad bookkeeping only. With the latest release team changes,
projects no longer do milestone releases. See: https://releases.openstack.org/reference/release_models.html#cycle-with-milestones-legacy
@@ -87,6 +96,8 @@ Milestone 1
the minor version to leave room for future stable branch releases
* os-vif
+ * placement
+ * os-traits / os-resource-classes
* Release stable branches of nova
@@ -117,28 +128,26 @@ Summit
* Prepare the on-boarding session materials. Enlist help of others
+* Prepare the operator meet-and-greet session. Enlist help of others
+
A few weeks before milestone 2
------------------------------
* Plan a spec review day (optional)
-* Periodically check the series goals others have proposed in the “Set series
- goals” link:
-
- * Example: https://blueprints.launchpad.net/nova/stein/+setgoals
-
Milestone 2
-----------
-* Spec freeze
+* Spec freeze (if agreed)
-* Release nova and python-novaclient
+* Release nova and python-novaclient (if new features were merged)
* Release other libraries as needed
* Stable branch releases of nova
* For nova, set the launchpad milestone release as “released” with the date
+ (this is optional)
Shortly after spec freeze
-------------------------
@@ -146,7 +155,7 @@ Shortly after spec freeze
* Create a blueprint status etherpad to help track, especially non-priority
blueprint work, to help things get done by Feature Freeze (FF). Example:
- * https://etherpad.openstack.org/p/nova-stein-blueprint-status
+ * https://etherpad.opendev.org/p/nova-antelope-blueprint-status
* Create or review a patch to add the next release’s specs directory so people
can propose specs for next release after spec freeze for current release
@@ -155,13 +164,15 @@ Non-client library release freeze
---------------------------------
* Final release for os-vif
+* Final release for os-traits
+* Final release for os-resource-classes
Milestone 3
-----------
* Feature freeze day
-* Client library freeze, release python-novaclient
+* Client library freeze, release python-novaclient and osc-placement
* Close out all blueprints, including “catch all” blueprints like mox,
versioned notifications
@@ -170,7 +181,7 @@ Milestone 3
* For nova, set the launchpad milestone release as “released” with the date
-* Write the `cycle highlights
+* Start writing the `cycle highlights
<https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights>`__
Week following milestone 3
@@ -199,7 +210,7 @@ A few weeks before RC
* Make a RC1 todos etherpad and tag bugs as ``<release>-rc-potential`` and keep
track of them, example:
- * https://etherpad.openstack.org/p/nova-stein-rc-potential
+ * https://etherpad.opendev.org/p/nova-antelope-rc-potential
* Go through the bug list and identify any rc-potential bugs and tag them
@@ -242,7 +253,7 @@ RC
* Example: https://review.opendev.org/644412
-* Write the cycle-highlights in marketing-friendly sentences and propose to the
+* Propose the cycle-highlights in marketing-friendly sentences to the
openstack/releases repo. Usually based on reno prelude but made more readable
and friendly
@@ -257,11 +268,13 @@ Immediately after RC
* https://wiki.openstack.org/wiki/Nova/ReleaseChecklist
- * Drop old RPC compat code (if there was a RPC major version bump)
+ * Drop old RPC compat code (if there was an RPC major version bump and if
+ agreed on at the PTG)
* Example: https://review.opendev.org/543580
- * Bump the oldest supported compute service version
+ * Bump the oldest supported compute service version (if the master branch
+ is now on a non-SLURP version)
* https://review.opendev.org/#/c/738482/
@@ -275,7 +288,9 @@ Immediately after RC
* Set the previous to last series status to “supported”
-* Repeat launchpad steps ^ for python-novaclient
+* Repeat launchpad steps ^ for python-novaclient (optional)
+
+* Repeat launchpad steps ^ for placement
* Register milestones in launchpad for the new cycle based on the new cycle
release schedule
@@ -293,7 +308,7 @@ Immediately after RC
* Create new release wiki:
- * Example: https://wiki.openstack.org/wiki/Nova/Train_Release_Schedule
+ * Example: https://wiki.openstack.org/wiki/Nova/2023.1_Release_Schedule
* Update the contributor guide for the new cycle