.. _cleaning:

=============
Node cleaning
=============

Overview
========
Ironic provides two modes for node cleaning: ``automated`` and ``manual``.

``Automated cleaning`` is automatically performed before the first
workload has been assigned to a node and when hardware is recycled from
one workload to another.

``Manual cleaning`` must be invoked by the operator.


.. _automated_cleaning:

Automated cleaning
==================

When hardware is recycled from one workload to another, ironic performs
automated cleaning on the node to ensure it's ready for another workload. This
ensures the tenant will get a consistent bare metal node deployed every time.

Ironic implements automated cleaning by collecting a list of cleaning steps
to perform on a node from the Power, Deploy, Management, and RAID interfaces
of the driver assigned to the node. These steps are then ordered by priority
and executed on the node when the node is moved
to ``cleaning`` state, if automated cleaning is enabled.

With automated cleaning, nodes move to ``cleaning`` state when moving from
``active`` -> ``available`` state (when the hardware is recycled from one
workload to another). Nodes also traverse cleaning when going from
``manageable`` -> ``available`` state (before the first workload is
assigned to the nodes). For a full understanding of all state transitions
into cleaning, please see :ref:`states`.

Ironic added support for automated cleaning in the Kilo release.

.. _enabling-cleaning:

Enabling automated cleaning
---------------------------
To enable automated cleaning, ensure that your ironic.conf is set as follows
(prior to Mitaka, this option was named ``clean_nodes``)::

  [conductor]
  automated_clean=true

This will enable the default set of cleaning steps, based on your hardware and
ironic drivers. If you're using an ``agent_*`` driver, this includes, by
default, erasing all of the previous tenant's data.

You may also need to configure a `Cleaning Network`_.

Cleaning steps
--------------

Cleaning steps used for automated cleaning are ordered from higher to lower
priority, where a larger integer is a higher priority. In case of a conflict
between priorities across drivers, the following resolution order is used:
Power, Management, Deploy, and RAID interfaces.

You can skip a cleaning step by setting the priority for that cleaning step
to zero or ``None``.

You can reorder the cleaning steps by modifying the integer priorities of the
cleaning steps.

See `How do I change the priority of a cleaning step?`_ for more information.
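
The ordering and skipping rules above can be sketched as follows. This is an
illustration only, not ironic's actual implementation, and the step names are
invented for the example:

```python
# Sketch of automated-clean step ordering: higher integer priority runs
# first; ties are broken by interface in the order Power, Management,
# Deploy, RAID; steps with priority 0 or None are skipped entirely.
INTERFACE_ORDER = {"power": 0, "management": 1, "deploy": 2, "raid": 3}

def order_clean_steps(steps):
    # Filter out steps whose priority is 0 or None, then sort.
    runnable = [s for s in steps if s.get("priority")]
    return sorted(
        runnable,
        key=lambda s: (-s["priority"], INTERFACE_ORDER[s["interface"]]),
    )

steps = [
    {"interface": "deploy", "step": "erase_devices", "priority": 10},
    {"interface": "raid", "step": "delete_configuration", "priority": 0},
    {"interface": "management", "step": "reset_bios_settings", "priority": 10},
]
ordered = order_clean_steps(steps)
# The RAID step is skipped; the two priority-10 steps tie, so the
# Management step is ordered before the Deploy step.
```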


.. _manual_cleaning:

Manual cleaning
===============

``Manual cleaning`` is typically used to handle long-running, manual, or
destructive tasks that an operator wishes to perform either before the first
workload has been assigned to a node or between workloads. When initiating a
manual clean, the operator specifies the cleaning steps to be performed.
Manual cleaning can only be performed when a node is in the ``manageable``
state. Once the manual cleaning is finished, the node will be put in the
``manageable`` state again.

Ironic added support for manual cleaning in the 4.4 (Mitaka series)
release.

Setup
-----

In order for manual cleaning to work, you may need to configure a
`Cleaning Network`_.

Starting manual cleaning via API
--------------------------------

Manual cleaning can only be performed when a node is in the ``manageable``
state. The REST API request to initiate it is available in API version 1.15 and
higher::

    PUT /v1/nodes/<node_ident>/states/provision

(Additional information is available in the `API reference <https://developer.openstack.org/api-ref/baremetal/index.html?expanded=change-node-provision-state-detail#change-node-provision-state>`_.)

This API allows an operator to put a node directly into the ``cleaning``
provision state from the ``manageable`` state via ``"target": "clean"``.
The PUT also requires the argument ``clean_steps`` to be specified: an
ordered list of cleaning steps. A cleaning step is represented by a JSON
dictionary, in the form::

  {
      "interface": "<interface>",
      "step": "<name of cleaning step>",
      "args": {"<arg1>": "<value1>", ..., "<argn>": <valuen>}
  }

The ``interface`` and ``step`` keys are required for all steps. If a cleaning
step method takes keyword arguments, the ``args`` key may be specified: a
dictionary of keyword arguments, with each entry being <name>: <value>.

If any step is missing a required keyword argument, manual cleaning will not be
performed and the node will be put in ``clean failed`` provision state with an
appropriate error message.

If, during the cleaning process, a cleaning step determines that it has
incorrect keyword arguments, all earlier steps will be performed and then the
node will be put in ``clean failed`` provision state with an appropriate error
message.
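
The validation described above can be sketched client-side as a minimal check
of the step format. This is illustrative only; ironic performs its own
validation server-side:

```python
# Minimal client-side check of the clean step format described above:
# 'interface' and 'step' are required; 'args', if present, must be a
# dictionary. This mirrors, but is not, ironic's server-side validation.
def validate_clean_step(step):
    for key in ("interface", "step"):
        if key not in step:
            raise ValueError("clean step missing required key: %s" % key)
    if "args" in step and not isinstance(step["args"], dict):
        raise ValueError("'args' must be a dictionary of keyword arguments")
    return True

ok = validate_clean_step(
    {"interface": "deploy", "step": "erase_devices", "args": {}}
)
```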

An example of the request body for this API::

  {
    "target":"clean",
    "clean_steps": [{
      "interface": "raid",
      "step": "create_configuration",
      "args": {"create_nonroot_volumes": false}
    },
    {
      "interface": "deploy",
      "step": "erase_devices"
    }]
  }

In the above example, the driver's RAID interface would configure hardware
RAID without non-root volumes, and then all devices would be erased
(in that order).
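
For illustration, the request body and the microversion header for this call
could be assembled as follows. The node identifier and token are placeholders;
in practice, a client such as python-ironicclient handles authentication and
endpoint discovery for you:

```python
import json

# Assemble the PUT body and headers for a manual clean request. The node
# identifier and token below are placeholders, not real values.
node_ident = "<node_ident>"
url = "/v1/nodes/%s/states/provision" % node_ident
headers = {
    "Content-Type": "application/json",
    "X-OpenStack-Ironic-API-Version": "1.15",  # manual cleaning needs >= 1.15
    "X-Auth-Token": "<token>",                 # placeholder
}
body = json.dumps({
    "target": "clean",
    "clean_steps": [
        {"interface": "raid", "step": "create_configuration",
         "args": {"create_nonroot_volumes": False}},
        {"interface": "deploy", "step": "erase_devices"},
    ],
})
```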

Starting manual cleaning via "openstack baremetal" CLI
------------------------------------------------------

Manual cleaning is available via the ``openstack baremetal node clean``
command, starting with Bare Metal API version 1.15.

The argument ``--clean-steps`` must be specified. Its value is one of:

- a JSON string
- the path to a JSON file whose contents are passed to the API
- ``-``, to read from standard input (following the common Unix convention);
  this allows piping in the clean steps

The following examples assume that the Bare Metal API version was set via
the ``OS_BAREMETAL_API_VERSION`` environment variable. (The alternative is to
add ``--os-baremetal-api-version 1.15`` to the command.)::

    export OS_BAREMETAL_API_VERSION=1.15

Examples of doing this with a JSON string::

    openstack baremetal node clean <node> \
        --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'

    openstack baremetal node clean <node> \
        --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'

Or with a file::

    openstack baremetal node clean <node> \
        --clean-steps my-clean-steps.txt

Or with stdin::

    cat my-clean-steps.txt | openstack baremetal node clean <node> \
        --clean-steps -

Cleaning Network
================

If you are using the Neutron DHCP provider (the default) you will also need to
ensure you have configured a cleaning network. This network will be used to
boot the ramdisk for in-band cleaning. You can use the same network as your
tenant network. For steps to set up the cleaning network, please see
:ref:`configure-cleaning`.

.. _InbandvsOutOfBandCleaning:

In-band vs out-of-band
======================
Ironic uses two main methods to perform actions on a node: in-band and
out-of-band. Ironic supports using both methods to clean a node.

In-band
-------
In-band steps are performed by ironic making API calls to a ramdisk running
on the node, using a Deploy driver. Currently, all drivers using the
ironic-python-agent ramdisk support in-band cleaning. By default,
ironic-python-agent ships with a minimal cleaning configuration that only
erases disks. However, with this ramdisk you can add your own cleaning steps
and/or override the default cleaning steps with a custom Hardware Manager.
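
Schematically, such a hardware manager advertises its steps via a
``get_clean_steps`` method and implements each step as a method on the class.
The sketch below is standalone and illustrative; a real manager subclasses
``ironic_python_agent.hardware.HardwareManager`` rather than a plain class,
and the priority shown is an assumption:

```python
# Standalone sketch of the shape of an ironic-python-agent hardware
# manager. A real one subclasses ironic_python_agent.hardware.HardwareManager;
# this plain class and its single step are illustrative only.
class ExampleHardwareManager:
    def get_clean_steps(self, node, ports):
        # Each entry names a step method on this class; advertising a
        # different priority here is how default priorities get overridden.
        return [{
            "step": "erase_devices",
            "priority": 20,          # illustrative priority override
            "interface": "deploy",
            "reboot_requested": False,
        }]

    def erase_devices(self, node, ports):
        # A real manager would wipe disks here; this sketch is a no-op.
        return "clean step completed"

manager = ExampleHardwareManager()
advertised = manager.get_clean_steps(node={}, ports=[])
```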

Out-of-band
-----------
Out-of-band steps are actions performed by your management controller, such
as IPMI, iLO, or DRAC. They are performed by ironic using a Power or
Management driver. Which steps are performed depends on the driver and
hardware.

For Out-of-Band cleaning operations supported by iLO drivers, refer to
:ref:`ilo_node_cleaning`.

FAQ
===

How are cleaning steps ordered?
-------------------------------
For automated cleaning, cleaning steps are ordered by integer priority, where
a larger integer is a higher priority. In case of a conflict between priorities
across drivers, the following resolution order is used: Power, Management,
Deploy, and RAID interfaces.

For manual cleaning, the cleaning steps should be specified in the desired
order.

How do I skip a cleaning step?
------------------------------
For automated cleaning, cleaning steps with a priority of 0 or None are skipped.


How do I change the priority of a cleaning step?
------------------------------------------------
For manual cleaning, specify the cleaning steps in the desired order.

For automated cleaning, it depends on whether the cleaning steps are
out-of-band or in-band.

Most out-of-band cleaning steps have an explicit configuration option for
priority.

Changing the priority of an in-band (ironic-python-agent) cleaning step
requires use of a custom HardwareManager. The only exception is
``erase_devices``, which can have its priority set in ironic.conf. For instance,
to disable ``erase_devices``, you'd set the following configuration option::

  [deploy]
  erase_devices_priority=0

To enable or disable the in-band disk erase when using the ``agent_ilo``
driver, use the following configuration option::

  [ilo]
  clean_priority_erase_devices=0

The generic hardware manager first tries to perform an ATA disk erase using
the ``hdparm`` utility. If ATA disk erase is not supported, it performs a
software-based disk erase using the ``shred`` utility. By default, ``shred``
performs a single iteration for the software-based disk erase. To configure
the number of iterations, use the following configuration option::

  [deploy]
  erase_devices_iterations=1


What cleaning step is running?
------------------------------
To check which cleaning step the node is currently performing (or attempted
to perform and failed), run the following command; it returns the value of
the node's ``driver_internal_info`` field::

    openstack baremetal node show $node_ident -f value -c driver_internal_info

The ``clean_steps`` field will contain a list of all remaining steps with
their priorities. The first one listed is the step currently in progress, or
the step the node failed on before going into the ``clean failed`` state.
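
As an illustration of reading that field, given ``driver_internal_info``
parsed as a dictionary (the sample data below is invented), the current or
failed step is the first entry of ``clean_steps``:

```python
# Illustrative parsing of driver_internal_info: the first entry of
# 'clean_steps' is the step in progress, or the step that failed.
# The sample data here is invented for the example.
driver_internal_info = {
    "clean_steps": [
        {"interface": "deploy", "step": "erase_devices", "priority": 10},
        {"interface": "raid", "step": "delete_configuration", "priority": 5},
    ],
}

remaining = driver_internal_info.get("clean_steps") or []
current_step = remaining[0] if remaining else None
```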

Should I disable automated cleaning?
------------------------------------
Automated cleaning is recommended for ironic deployments; however, there are
some tradeoffs to having it enabled. For instance, ironic cannot deploy a new
instance to a node that is currently cleaning, and cleaning can be a
time-consuming process. To mitigate this, we suggest using disks with support
for cryptographic ATA Security Erase, as the ``erase_devices`` step in the
deploy driver typically takes the longest of all cleaning steps to complete.

Why can't I power on/off a node while it's cleaning?
----------------------------------------------------
During cleaning, nodes may be performing actions that shouldn't be
interrupted, such as BIOS or Firmware updates. As a result, operators are
forbidden from changing power state via the ironic API while a node is
cleaning.


Troubleshooting
===============
If cleaning fails on a node, the node will be put into ``clean failed`` state
and placed in maintenance mode, to prevent ironic from taking actions on the
node.

Nodes in ``clean failed`` will not be powered off, as the node might be in a
state such that powering it off could damage the node or remove useful
information about the nature of the cleaning failure.

A ``clean failed`` node can be moved to ``manageable`` state, where it cannot
be scheduled by nova and you can safely attempt to fix the node. To move a node
from ``clean failed`` to ``manageable``::

  openstack baremetal node manage $node_ident

You can now take actions on the node, such as replacing a bad disk drive.

Strategies for determining why a cleaning step failed include checking the
ironic conductor logs, viewing logs on the still-running ironic-python-agent
(if an in-band step failed), or performing general hardware troubleshooting on
the node.

When the node is repaired, you can move the node back to ``available`` state,
to allow it to be scheduled by nova.

::

  # First, move it out of maintenance mode
  openstack baremetal node maintenance unset $node_ident

  # Now, make the node available for scheduling by nova
  openstack baremetal node provide $node_ident

The node will begin automated cleaning from the start, and move to
``available`` state when complete.