---
features:
  - |
    Adds a new feature to the ironic virt driver that allows
    multiple nova-compute services to be run simultaneously. This uses
    consistent hashing to divide the ironic nodes among the nova-compute
    services, with the hash ring being refreshed each time the resource
    tracker runs.
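
    For illustration only, the division can be pictured with a minimal
    consistent hash ring like the sketch below. This is a simplified
    stand-in, not nova's actual hash ring code; the ``HashRing`` class and
    host names here are hypothetical::

      import bisect
      import hashlib

      class HashRing(object):
          def __init__(self, hosts, replicas=64):
              # Place several replicas of each host on the ring to even
              # out the distribution of nodes among hosts.
              self._ring = sorted(
                  (self._hash('%s-%d' % (host, i)), host)
                  for host in hosts for i in range(replicas))
              self._keys = [key for key, _ in self._ring]

          @staticmethod
          def _hash(value):
              return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

          def get_host(self, node_uuid):
              # A node belongs to the first host at or after its own
              # position on the ring, wrapping around past the end.
              index = bisect.bisect(self._keys, self._hash(node_uuid))
              return self._ring[index % len(self._ring)][1]

      ring = HashRing(['compute-1', 'compute-2', 'compute-3'])
      ring.get_host('5e3d6b0a-6f67-4e52-a9b5-2b0a1c6e8d42')  # -> one host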

    Note that instances will still be owned by the same nova-compute service
    for the entire life of the instance, and so the ironic node that the
    instance is on will also be managed by the same nova-compute service
    until the node is deleted. This also means that removing a nova-compute
    service will leave the instances managed by that service orphaned, and
    as such most instance actions will not work until a nova-compute service
    with the same hostname is brought (back) online.

    When nova-compute services are brought up or down, the ring will
    eventually re-balance (when the resource tracker runs on each compute).
    This may result in duplicate compute_node entries for ironic nodes while
    the nova-compute service pool is re-balancing. However, because any
    nova-compute service running the ironic virt driver can manage any
    ironic node, a build request that goes to a compute service not
    currently managing the target node will still succeed.
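
    Continuing the hypothetical sketch above, a re-balance can be pictured
    by rebuilding the ring with a different host list; only the nodes that
    hashed to the removed (or added) host change owners, while the rest
    keep their current nova-compute service::

      before = HashRing(['compute-1', 'compute-2', 'compute-3'])
      after = HashRing(['compute-1', 'compute-2'])  # compute-3 went down

      nodes = ['node-%d' % i for i in range(1000)]
      moved = sum(before.get_host(n) != after.get_host(n) for n in nodes)
      # Roughly a third of the nodes re-map to a surviving host; the
      # nodes already on compute-1 and compute-2 stay where they are.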

    No configuration is required to enable this feature; it is always
    enabled. There is no significant change in behavior when only one
    compute service is running; the behavior described above only comes
    into play when multiple compute services are brought online.

    Note that this feature is tested with only one nova-compute service
    running, not with more than one. As such, it should be used with caution
    in deployments with multiple compute hosts until it is properly tested
    in CI.