Layer 3 or DHCP-less ramdisk booting
====================================

Booting nodes via PXE, while universally supported, suffers from one
disadvantage: it requires direct L2 connectivity between the node and the
control plane for DHCP. Using virtual media, it is possible to avoid not only
the unreliable TFTP protocol, but DHCP altogether.

When network data is provided for a node as explained below, the generated
virtual media ISO will also serve as a configdrive_, and the network data will
be stored in the standard OpenStack location.
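
As an illustration, inside the booted ramdisk the generated network data can be
inspected by mounting the configdrive by its label. This is purely a sketch of
where the data ends up: the mount point is arbitrary, and the standard
configdrive layout is assumed.

.. code-block:: console

   # Illustrative only: inspect the generated network data from inside the
   # ramdisk. The mount point is an arbitrary example.
   mkdir -p /mnt/config
   mount -o ro LABEL=config-2 /mnt/config
   cat /mnt/config/openstack/latest/network_data.json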

The simple-init_ element needs to be used when creating the deployment ramdisk.
The Glean_ tool will look for a medium labeled ``config-2``. If found, the
network information from it will be read, and the node's networking stack will
be configured accordingly.

.. code-block:: console

   ironic-python-agent-builder -o /output/ramdisk \
        debian-minimal -e simple-init

.. warning::
   The simple-init_ element is known to conflict with NetworkManager, which
   makes this feature non-operational with ramdisks based on CentOS, RHEL and
   Fedora. The ``debian-minimal`` and ``centos`` elements seem to work
   correctly. For CentOS, only CentOS 7 based ramdisks are known to work.

.. note::
   If desired, some interfaces can still be configured to use DHCP.

Hardware type support
---------------------

This feature is known to work with the following hardware types:

* :doc:`Redfish </admin/drivers/redfish>` with ``redfish-virtual-media`` boot
* :doc:`iLO </admin/drivers/ilo>` with ``ilo-virtual-media`` boot
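
For example, with the ``redfish`` hardware type the virtual media boot
interface can be selected per node as shown below. This is only a sketch: it
assumes ``redfish-virtual-media`` is listed in the ``enabled_boot_interfaces``
configuration option of your deployment.

.. code-block:: bash

  # Assumes redfish-virtual-media is enabled in enabled_boot_interfaces.
  baremetal node set --boot-interface redfish-virtual-media <node>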

Configuring network data
------------------------

When the Bare Metal service is running within OpenStack, no additional
configuration is required - the network configuration will be fetched from the
Network service.

Alternatively, the user can build and pass network configuration in the form of
a network_data_ JSON document to a node via the ``network_data`` field.
Node-based configuration takes precedence over the configuration generated by
the Network service and also works in standalone mode.

.. code-block:: bash

  baremetal node set --network-data ~/network_data.json <node>

An example of network data:

.. code-block:: json

    {
        "links": [
            {
                "id": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
                "type": "phy",
                "ethernet_mac_address": "52:54:00:d3:6a:71"
            }
        ],
        "networks": [
            {
                "id": "network0",
                "type": "ipv4",
                "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
                "ip_address": "192.168.122.42",
                "netmask": "255.255.255.0",
                "network_id": "network0",
                "routes": []
            }
        ],
        "services": []
    }

.. note::
   Some fields are redundant with the port information. We're looking into
   simplifying the format, but currently all these fields are mandatory.
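
As noted earlier, individual interfaces can keep using DHCP. A minimal,
illustrative sketch of such an entry, using the ``ipv4_dhcp`` network type from
the network_data_ format (the link ID and MAC address below are placeholders):

.. code-block:: json

    {
        "links": [
            {
                "id": "port-dhcp-example",
                "type": "phy",
                "ethernet_mac_address": "52:54:00:11:22:33"
            }
        ],
        "networks": [
            {
                "id": "network1",
                "type": "ipv4_dhcp",
                "link": "port-dhcp-example",
                "network_id": "network1"
            }
        ],
        "services": []
    }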

.. _configdrive: https://docs.openstack.org/nova/queens/user/config-drive.html
.. _Glean: https://docs.openstack.org/infra/glean/
.. _simple-init: https://docs.openstack.org/diskimage-builder/latest/elements/simple-init/README.html
.. _network_data: https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/metadata-service-network-info.html

.. _l3-external-ip:

Deploying outside of the provisioning network
---------------------------------------------

If you need to combine traditional deployments using a provisioning network
with virtual media deployments over L3, you may need to provide an alternative
IP address for the remote nodes to connect to:

.. code-block:: ini

   [deploy]
   http_url = <HTTP server URL internal to the provisioning network>
   external_http_url = <HTTP server URL with a routable IP address>

You may also need to override the callback URL, which is normally fetched from
the service catalog or configured in the ``[service_catalog]`` section:

.. code-block:: ini

   [deploy]
   external_callback_url = <Bare Metal API URL with a routable IP address>