
Networking with neutron

While nova uses the OpenStack Networking service (neutron)
to provide network connectivity for instances, nova itself provides some
additional features not possible with neutron alone. These are described
below.



SR-IOV

The feature described below was first introduced in the Juno release.

The SR-IOV specification defines a standardized mechanism to
virtualize PCIe devices. This mechanism can virtualize a single PCIe
Ethernet controller to appear as multiple PCIe devices. Each device can
be directly assigned to an instance, bypassing the hypervisor and
virtual switch layer. As a result, users are able to achieve low latency
and near-line wire speed.

A full guide on configuring and using SR-IOV is provided in the OpenStack Networking service documentation.
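
As a minimal illustration of the usual workflow, which the Networking service
documentation covers in full, an SR-IOV port uses the direct VNIC type and is
attached to the server at boot; the network, flavor, image, and port names
below are placeholders:

$ openstack port create --network $NETWORK --vnic-type direct sriov-port
$ openstack server create --flavor $FLAVOR --image $IMAGE \
    --nic port-id=$PORT_ID sriov-server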


Nova only supports PCI addresses where the fields are restricted to
the following maximum values:

  • domain – 0xFFFF
  • bus – 0xFF
  • slot – 0x1F
  • function – 0x7

Nova will ignore PCI devices reported by the hypervisor if the
address is outside of these ranges.
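
For illustration, a full PCI address has the form domain:bus:slot.function,
which is what lspci -D prints. The hypothetical entry below is within all of
the limits above: domain 0x0000, bus 0x82, slot 0x00, function 0x1:

$ lspci -D
0000:82:00.1 Ethernet controller: ... (example entry; output truncated)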


For information on creating servers with remotely-managed SR-IOV
network interfaces of SmartNIC DPUs, refer to the relevant section of the
Networking Guide (admin/ovn/smartnic_dpu).


The following limitations apply to remotely-managed ports:

  • Only VFs are supported and they must be tagged in the Nova Compute
    configuration in the pci.device_spec option as
    remote_managed: "true" (see the sketch after this list). There is no
    auto-discovery of this based on vendor and product IDs;
  • Either the VF or its respective PF must expose a PCI VPD capability with
    a unique card serial number according to the PCI/PCIe specifications
    (see the Libvirt documentation for an example of how VPD data is
    represented and what to expect). If this is not the case, those
    devices will not appear in allocation pools;
  • Only the Libvirt driver is capable of supporting this feature at the
    time of writing;
  • Support for VPD capability handling was added in Libvirt release
    7.9.0 – older versions are not supported by this feature;
  • All compute nodes must be upgraded to the Yoga release in order for
    scheduling of nodes with VNIC_TYPE_REMOTE_MANAGED ports to work;
  • The same limitations apply to operations like live migration as with
    regular SR-IOV ports;
  • Clearing a VLAN by programming VLAN 0 must not result in errors in
    the VF kernel driver at the compute host. Before v8.1.0, Libvirt clears a
    VLAN by programming VLAN 0 before passing a VF through to the guest,
    which may result in an error depending on your driver and kernel version
    (there is, for example, a known case affecting one driver). As of Libvirt
    v8.1.0, EPERM errors encountered while programming VLAN 0 are ignored if
    VLAN clearing is not explicitly requested in the device XML (i.e. VLAN
    0 is not specified explicitly).
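
A minimal sketch of such tagging in nova.conf on the compute node; the vendor
and product IDs below are hypothetical placeholders for the VFs of your
SmartNIC DPU:

[pci]
device_spec = { "vendor_id": "15b3", "product_id": "101e", "remote_managed": "true" }

A port backed by such a device is then created with the remote-managed VNIC
type (supported by recent OpenStack clients):

$ openstack port create --network $NETWORK --vnic-type remote-managed $PORT_NAME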

NUMA Affinity


The feature described below was first introduced in the Rocky release.


The functionality described below is currently only supported by the
libvirt/KVM driver.

As described in cpu-topologies, NUMA is a computer architecture where
memory accesses to certain regions of system memory can have higher
latencies than other regions, depending on the CPU(s) your process is
running on. This effect extends to devices connected to the PCIe bus, a
concept known as NUMA I/O. Many Network Interface Cards (NICs) connect
using the PCIe interface, meaning they are susceptible to the
ill-effects of poor NUMA affinitization. As a result, NUMA locality must
be considered when creating an instance where high dataplane performance
is a requirement.

Fortunately, nova provides functionality to ensure NUMA
affinitization is provided for instances using neutron. How this works
depends on the type of port you are trying to use.

For SR-IOV ports, virtual functions, which are PCI devices, are
attached to the instance. This means the instance can benefit from the
NUMA affinity guarantees provided for PCI devices. This happens
automatically and is described in detail in pci-numa-affinity-policy.
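
The applied policy can also be tuned per flavor or image; for example, to
require strict NUMA affinity for a flavor's PCI devices (one of the policies
described in pci-numa-affinity-policy; $FLAVOR is a placeholder):

$ openstack flavor set --property hw:pci_numa_affinity_policy=required $FLAVOR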

For all other types of ports, some manual configuration is
required:
  1. Identify the type of network(s) you wish to provide NUMA affinity
     for.

    • If a network is an L2-type network
      (provider:network_type of flat or
      vlan), affinity of the network to given NUMA node(s) can
      vary depending on value of the provider:physical_network
      attribute of the network, commonly referred to as the physnet
      of the network. This is because most neutron drivers map each
      physnet to a different bridge, to which multiple NICs are
      attached, or to a different (logical) NIC.
    • If a network is an L3-type network
      (provider:network_type of vxlan,
      gre or geneve), all traffic will use the
      device to which the endpoint IP is assigned. This means all L3
      networks on a given host will have affinity to the same NUMA node(s).
      Refer to the neutron documentation for more information.
  2. Determine the NUMA affinity of the NICs attached to the given
     network(s).

    How this should be achieved varies depending on the switching
    solution used and whether the network is an L2-type network or an
    L3-type network.
    Consider an L2-type network using the Linux Bridge mechanism driver.
    As noted in the neutron documentation, physnets are
    mapped to interfaces using the
    [linux_bridge] physical_interface_mappings configuration
    option. For example:

    physical_interface_mappings = provider:PROVIDER_INTERFACE

    Once you have the device name, you can query sysfs to
    retrieve the NUMA affinity for this device. For example:

    $ cat /sys/class/net/PROVIDER_INTERFACE/device/numa_node

    For an L3-type network using the Linux Bridge mechanism driver, the
    device used will be configured using a protocol-specific endpoint IP
    configuration option. For VXLAN, this is the
    [vxlan] local_ip option. For example:

    local_ip = OVERLAY_INTERFACE_IP_ADDRESS

    Once you have the IP address in question, you can use ip to identify the device
    that has been assigned this IP address, and from there you can query the NUMA
    affinity using sysfs as above (see the sketch after these steps).


    The example provided above is merely that: an example. How one should
    identify this information can vary massively depending on the driver
    used, whether bonding is used, the type of network used, etc.

  3. Configure NUMA affinity in nova.conf.

    Once you have identified the NUMA affinity of the devices used for
    your networks, you need to configure this in nova.conf. As
    before, how this should be achieved varies depending on the type of
    network.
    For L2-type networks, NUMA affinity is defined based on the
    provider:physical_network attribute of the network. There
    are two configuration options that must be set:

    [neutron] physnets

    This should be set to the list of physnets for which you wish to
    provide NUMA affinity. Refer to the documentation
    for more information.

    [neutron_physnet_{physnet}] numa_nodes

    This should be set to the list of NUMA node(s) that networks with the
    given {physnet} should be affinitized to.

    For L3-type networks, NUMA affinity is defined globally for all
    tunneled networks on a given host. There is only one configuration
    option that must be set:

    [neutron_tunnel] numa_nodes

    This should be set to a list of one or more NUMA nodes to which instances
    using tunneled networks will be affinitized.

  4. Configure a NUMA topology for instance flavor(s)

    For network NUMA affinity to have any effect, the instance must have
    a NUMA topology itself. This can be configured explicitly, using the
    hw:numa_nodes extra spec, or implicitly through the use of
    CPU pinning (hw:cpu_policy=dedicated) or PCI devices. For
    more information, refer to cpu-topologies.
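
As referenced in steps 1 and 2 above, a minimal sketch of identifying a
network's type and physnet and then locating the NUMA node of the backing
device; the network and interface names are placeholders, and a numa_node
value of -1 means the device reports no NUMA affinity:

$ openstack network show $NETWORK -c provider:network_type -c provider:physical_network
$ cat /sys/class/net/$PROVIDER_INTERFACE/device/numa_node

For an L3-type network, first find the device carrying the endpoint IP:

$ ip -br addr show | grep $LOCAL_IP
$ cat /sys/class/net/$OVERLAY_INTERFACE/device/numa_node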


Take an example of a deployment using L2-type networks first.

[neutron]
physnets = foo,bar

[neutron_physnet_foo]
numa_nodes = 0

[neutron_physnet_bar]
numa_nodes = 2, 3

This configuration will ensure instances using one or more L2-type
networks with provider:physical_network=foo must be
scheduled on host cores from NUMA node 0, while instances using one or
more networks with provider:physical_network=bar must be
scheduled on host cores from both NUMA nodes 2 and 3. For the latter
case, it will be necessary to split the guest across two or more host
NUMA nodes using the hw:numa_nodes extra spec, as discussed previously.
Now, take an example for a deployment using L3 networks.

[neutron_tunnel]
numa_nodes = 0

This is much simpler as all tunneled traffic uses the same logical
interface. As with the L2-type networks, this configuration will ensure
instances using one or more L3-type networks must be scheduled on host
cores from NUMA node 0. It is also possible to define more than one NUMA
node, in which case the instance must be split across these nodes.

virtio-net Multiqueue

New in version 12.0.0 (Liberty).

Changed in version 25.0.0 (Yoga): support for configuring multiqueue via
the hw:vif_multiqueue_enabled flavor extra spec was introduced.


The functionality described below is currently only supported by the
libvirt/KVM driver.

Virtual NICs using the virtio-net driver support the multiqueue
feature. By default, these vNICs will only use a single virtio-net TX/RX
queue pair, meaning guests will not transmit or receive packets in
parallel. As a result, the scale of the protocol stack in a guest may be
restricted as the network performance will not scale as the number of
vCPUs increases and per-queue data processing limits in the underlying
vSwitch are encountered. The solution to this issue is to enable
virtio-net multiqueue, which can allow the guest instances to increase
the total network throughput by scaling the number of receive and
transmit queue pairs with CPU count.

Multiqueue virtio-net isn’t always necessary, but it can provide a
significant performance benefit when:

  • Traffic packets are relatively large.
  • The guest is active on many connections at the same time, with
    traffic running between guests, guest to host, or guest to an external
    system.
  • The number of queues is equal to the number of vCPUs. This is
    because multi-queue support optimizes RX interrupt affinity and TX queue
    selection in order to make a specific queue private to a specific
    vCPU.
However, while the virtio-net multiqueue feature will often provide a
welcome performance benefit, it has some limitations and therefore
should not be unconditionally enabled:

  • Enabling virtio-net multiqueue increases the total network
    throughput, but in parallel it also increases the CPU consumption.
  • Enabling virtio-net multiqueue in the host QEMU config does not
    enable the functionality in the guest OS. The guest OS administrator
    needs to manually turn it on for each guest NIC that requires this
    feature, using ethtool.
  • In case the number of vNICs in a guest instance is proportional to
    the number of vCPUs, enabling the multiqueue feature is less
    important.
Having considered these points, multiqueue can be enabled or
explicitly disabled using either the hw:vif_multiqueue_enabled flavor extra
spec or equivalent hw_vif_multiqueue_enabled image metadata
property. For example, to enable virtio-net multiqueue for a chosen
flavor:

$ openstack flavor set --property hw:vif_multiqueue_enabled=true $FLAVOR

Alternatively, to explicitly disable multiqueue for a chosen image:

$ openstack image set --property hw_vif_multiqueue_enabled=false $IMAGE


If both the flavor extra spec and image metadata property are
provided, their values must match or an error will be raised.

Once the guest has started, you must enable multiqueue using ethtool. For example:

$ ethtool -L $devname combined $N

where $devname is the name of the network device, and
$N is the number of TX/RX queue pairs to configure
corresponding to the number of instance vCPUs. Alternatively, you can
configure this persistently using udev. For example, to configure four
TX/RX queue pairs for network device eth0:

# cat /etc/udev/rules.d/50-ethtool.rules
ACTION=="add", SUBSYSTEM=="net", NAME=="eth0", RUN+="/sbin/ethtool -L eth0 combined 4"
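
To verify the result from within the guest, ethtool with a lower-case -l
reports the preset maximum and currently configured queue counts; the exact
output varies by driver:

$ ethtool -l eth0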

For more information on this feature, refer to the original design
specification.