---
features:
  - |
    Compute nodes using the libvirt driver can now report ``PCPU`` inventory.
    This inventory is consumed by instances with dedicated (pinned) CPUs. The
    host CPUs providing it can be configured using the ``[compute]
    cpu_dedicated_set`` config option. The
    scheduler will automatically translate the legacy ``hw:cpu_policy`` flavor
    extra spec or ``hw_cpu_policy`` image metadata property to ``PCPU``
    requests, falling back to ``VCPU`` requests only if no ``PCPU`` candidates
    are found. Refer to the help text of the ``[compute] cpu_dedicated_set``,
    ``[compute] cpu_shared_set`` and ``vcpu_pin_set`` config options for more
    information.
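    As an illustration, a flavor carrying the legacy
    ``hw:cpu_policy=dedicated`` extra spec does not need to change; the
    scheduler translates it into a ``PCPU`` resource request. A minimal
    sketch, assuming a 4-vCPU flavor::

      # legacy extra spec, still honoured
      hw:cpu_policy=dedicated

      # roughly equivalent resource request generated by the scheduler
      # (illustrative)
      resources:PCPU=4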
  - |
    Compute nodes using the libvirt driver will now report the
    ``HW_CPU_HYPERTHREADING`` trait if the host has hyperthreading. The
    scheduler will automatically translate the legacy ``hw:cpu_thread_policy``
    flavor extra spec or ``hw_cpu_thread_policy`` image metadata property to
    either require or forbid this trait.
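    A minimal sketch of how the legacy values map to trait requests
    (illustrative)::

      hw:cpu_thread_policy=require   # host must report HW_CPU_HYPERTHREADING
      hw:cpu_thread_policy=isolate   # host must not report HW_CPU_HYPERTHREADING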
  - |
    A new configuration option, ``[compute] cpu_dedicated_set``, has been
    added. This can be used to configure the host CPUs that should be used for
    ``PCPU`` inventory. Refer to the help text of the ``[compute]
    cpu_dedicated_set`` config option for more information.
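    A minimal ``nova.conf`` sketch, assuming host CPUs 4-15 should host
    pinned instances (the CPU range is hypothetical and host-specific)::

      [compute]
      cpu_dedicated_set = 4-15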
  - |
    The ``[compute] cpu_shared_set`` configuration option will now be used to
    configure the host CPUs that should be used for ``VCPU`` inventory,
    replacing the deprecated ``vcpu_pin_set`` option. Refer to the help text of
    the ``[compute] cpu_shared_set`` config option for more information.
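    A minimal ``nova.conf`` sketch, assuming host CPUs 0-3 should host
    unpinned instances (the CPU range is hypothetical)::

      [compute]
      cpu_shared_set = 0-3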
  - |
    A new configuration option, ``[workarounds] disable_fallback_pcpu_query``,
    has been added. When creating or moving pinned instances, the scheduler will
    attempt to provide a ``PCPU``-based allocation, but can also fall back to a
    legacy
    ``VCPU``-based allocation. This fallback behavior is enabled by
    default to ensure it is possible to upgrade without having to modify compute
    node configuration, but it results in an additional request for allocation
    candidates from placement. This can have a slight performance impact and is
    unnecessary on new or upgraded deployments where all compute nodes have been
    correctly configured to report ``PCPU`` inventory. The ``[workarounds]
    disable_fallback_pcpu_query`` config option can be used to disable this
    fallback allocation candidate request, meaning only ``PCPU``-based
    allocation candidates will be retrieved. Refer to the help text of the
    ``[workarounds] disable_fallback_pcpu_query`` config option for more
    information.
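    Once every compute node has been configured with ``[compute]
    cpu_dedicated_set`` and/or ``[compute] cpu_shared_set``, the fallback
    query can be disabled::

      [workarounds]
      disable_fallback_pcpu_query = true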
deprecations:
  - |
    The ``vcpu_pin_set`` configuration option has been deprecated. You should
    migrate host CPU configuration to the ``[compute] cpu_dedicated_set`` or
    ``[compute] cpu_shared_set`` config options, or both. Refer to the help
    text of these config options for more information.
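    A sketch of one possible migration, splitting a host that previously
    exposed CPUs 0-7 between shared and dedicated use (all CPU ranges are
    hypothetical)::

      # before (deprecated)
      [DEFAULT]
      vcpu_pin_set = 0-7

      # after
      [compute]
      cpu_shared_set = 0-3
      cpu_dedicated_set = 4-7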
upgrade:
  - |
    Previously, if the ``vcpu_pin_set`` configuration option was not defined,
    the libvirt driver would count all available host CPUs when calculating
    ``VCPU`` inventory, regardless of whether those CPUs were online or not.
    The driver will now only report the total number of online CPUs. This
    should result in fewer build failures on hosts with offline CPUs.
  - |
    Previously, if an instance was using the ``isolate`` CPU thread policy on a
    host with SMT (hyperthreading) enabled, the libvirt driver would fake a
    non-SMT host by marking the thread sibling(s) for each host CPU used by the
    instance as reserved and unusable. This is no longer the case. Instead,
    instances using this policy will be scheduled only to hosts that do not
    report the ``HW_CPU_HYPERTHREADING`` trait. If you have workloads that
    require the ``isolate`` policy, you should configure some or all of your
    hosts to disable SMT.