Layer 3 or DHCP-less ramdisk booting
Booting nodes via PXE, while universally supported, suffers from one
disadvantage: it requires direct L2 connectivity between the node and
the control plane for DHCP. Using virtual media, it is possible to avoid
not only the unreliable TFTP protocol, but DHCP altogether.
When network data is provided for a node as explained below, the
generated virtual media ISO will also serve as a configdrive,
and the network data will be stored in the standard OpenStack
location.
The simple-init element needs to be used when creating the deployment
ramdisk. The Glean tool will look for media labeled config-2. If found,
the network information from it will be read, and the node's networking
stack will be configured accordingly.
ironic-python-agent-builder -o /output/ramdisk \
debian-minimal -e simple-init
Warning
Ramdisks based on distributions with NetworkManager require Glean 1.19.0 or newer
to work.
Note
If desired, some interfaces can still be configured to use DHCP.
Hardware type support
This feature is known to work with the following hardware types:
- Redfish with redfish-virtual-media boot
- iLO with ilo-virtual-media boot
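For instance, assuming the baremetal CLI is available, switching a Redfish-managed node to virtual media boot could look like this (the node name is illustrative):

```shell
# Enable virtual media boot on a Redfish-managed node
baremetal node set node-0 --boot-interface redfish-virtual-media
```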
Configuring network data
When the Bare Metal service is running within OpenStack, no
additional configuration is required – the network configuration will be
fetched from the Network service.
Alternatively, the user can build and pass network configuration in the
form of a network_data JSON to a node via its network_data field.
Node-based configuration takes precedence over the configuration
generated by the Network service and also works in standalone mode.
An example of network data:
{
    "links": [
        {
            "id": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "type": "phy",
            "ethernet_mac_address": "52:54:00:d3:6a:71"
        }
    ],
    "networks": [
        {
            "id": "network0",
            "type": "ipv4",
            "link": "port-92750f6c-60a9-4897-9cd1-090c5f361e18",
            "ip_address": "192.168.122.42",
            "netmask": "255.255.255.0",
            "network_id": "network0",
            "routes": []
        }
    ],
    "services": []
}
Note
Some fields are redundant with the port information. We’re looking
into simplifying the format, but currently all these fields are
mandatory.
You’ll need the deployed image to support network data, e.g. by
pre-installing cloud-init or Glean on it (most
cloud images have the former). Then you can provide the network data
when deploying, for example:
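Assuming the baremetal CLI is available, attaching the network data and deploying might look like this (the node name and file path are illustrative):

```shell
# Store the network data JSON in the node's network_data field
baremetal node set node-0 --network-data ~/network_data.json

# Deploy the node as usual; the generated virtual media ISO
# will carry the network data as a configdrive
baremetal node deploy node-0
```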
Some first-boot services, such as Ignition, don’t support
network data. You can provide their configuration as part of user data
instead:
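As a sketch (the exact configdrive layout accepted by your deployment may differ), a serialized Ignition configuration could be passed as the user_data portion of a config drive at deploy time; the node name and Ignition payload here are illustrative:

```shell
# Provide a minimal serialized Ignition config as user data
baremetal node deploy node-0 \
    --config-drive '{"user_data": "{\"ignition\": {\"version\": \"3.0.0\"}}"}'
```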
Deploying outside of the provisioning network
If you need to combine traditional deployments using a provisioning
network with virtual media deployments over L3, you may need to provide
an alternative IP address for the remote nodes to connect to:
[deploy]
http_url = <HTTP server URL internal to the provisioning network>
external_http_url = <HTTP server URL with a routable IP address>
You may also need to override the callback URL, which is normally
fetched from the service catalog or configured in the
[service_catalog]
section:
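A sketch of such an override, assuming the standard keystoneauth endpoint_override option is used to point nodes at a routable Bare Metal API URL:

```ini
[service_catalog]
endpoint_override = <Bare Metal API URL with a routable IP address>
```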
In case you need specific URLs for each node, you can use the
driver_info[external_http_url] node property. When set, it overrides the
[deploy]http_url and [deploy]external_http_url settings in the
configuration file.
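Assuming the baremetal CLI, a per-node override could be set like this (the node name and URL are illustrative):

```shell
# Point a single node at a routable HTTP server URL,
# overriding [deploy]http_url and [deploy]external_http_url
baremetal node set node-0 \
    --driver-info external_http_url=http://198.51.100.10:8080
```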