Set of bridges managed by the daemon.
SSL used globally by the daemon.
A unique identifier for the Open vSwitch's physical host.
The form of the identifier depends on the type of the host.
On a Citrix XenServer, this will likely be the same as
external-ids:xs-system-uuid.
The Citrix XenServer universally unique identifier for the physical
host, as displayed by xe host-list.
The hostname for the host running Open vSwitch. This is a fully
qualified domain name since version 2.6.2.
In Open vSwitch 2.8 and later, the run directory of the running Open
vSwitch daemon. This directory is used for runtime state such as
control and management sockets. The value of other_config:vhost-sock-dir
is relative to this directory.
Interval for updating statistics to the database, in milliseconds.
This option affects the update of the statistics column in the
following tables: Port, Interface, Mirror.
The default is 5000 ms.
Getting statistics more frequently can be achieved via OpenFlow.
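For example, assuming the standard ovs-vsctl tool, the interval could
be doubled to 10 seconds as follows (the value is illustrative):

    ovs-vsctl set Open_vSwitch . other_config:stats-update-interval=10000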
When ovs-vswitchd starts up, it has an empty flow table and therefore
it handles all arriving packets in its default fashion according to
its configuration, by dropping them or sending them to an OpenFlow
controller or switching them as a standalone switch. This behavior is
ordinarily desirable. However, if ovs-vswitchd is restarting as part
of a ``hot-upgrade,'' then this leads to a relatively long period
during which packets are mishandled.
This option allows for improvement. When ovs-vswitchd starts with
this value set to true, it will neither flush nor expire previously
set datapath flows, nor will it send or receive any packets to or
from the datapath. When this value is later set to false,
ovs-vswitchd will start receiving packets from the datapath and
reinstalling the flows. Thus, with this option, the procedure for a
hot-upgrade of ovs-vswitchd becomes roughly the following:
1. Stop ovs-vswitchd.
2. Set other_config:flow-restore-wait to true.
3. Start ovs-vswitchd.
4. Use ovs-ofctl (or some other program, such as an OpenFlow
controller) to restore the OpenFlow flow table to the desired state.
5. Set other_config:flow-restore-wait to false (or remove it entirely
from the database).
ovs-ctl's ``restart'' and ``force-reload-kmod'' functions use the
above config option during hot upgrades.
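A rough sketch of the procedure with standard tools follows; ``br0''
and ``flows.txt'' are illustrative names, the service commands vary
by distribution, and ovs-ctl's real implementation handles OpenFlow
versions and flow statistics more carefully:

    # Save the flow table (drop the NXST_FLOW reply header line).
    ovs-ofctl dump-flows br0 | sed '/NXST_FLOW/d' > flows.txt
    systemctl stop ovs-vswitchd
    ovs-vsctl --no-wait set Open_vSwitch . \
        other_config:flow-restore-wait=true
    systemctl start ovs-vswitchd
    # Restore the saved OpenFlow flow table.
    ovs-ofctl add-flows br0 flows.txt
    # Resume normal datapath processing.
    ovs-vsctl remove Open_vSwitch . other_config flow-restore-wait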
The maximum number of flows allowed in the datapath flow table.
Internally OVS will choose a flow limit which will likely be lower
than this number, based on real-time network conditions. Tweaking
this value is discouraged unless you know exactly what you're doing.
The default is 200000.
The maximum time (in ms) that idle flows will remain cached in the
datapath. Internally OVS will check the validity and activity of
datapath flows regularly and may expire flows sooner than this value,
based on real-time network conditions. Tweaking this value is
discouraged unless you know exactly what you're doing.
The default is 10000.
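Both limits live in the other_config column of the Open_vSwitch
table; an illustrative sketch (the numbers are arbitrary):

    ovs-vsctl set Open_vSwitch . other_config:flow-limit=150000
    ovs-vsctl set Open_vSwitch . other_config:max-idle=5000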
Set this value to true to enable runtime support for DPDK ports. The
vswitch must have compile-time support for DPDK as well.
The default value is false. Changing this value requires restarting
the daemon.
If this value is false at startup, any dpdk ports which are
configured in the bridge will fail due to memory errors.
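For example, to enable DPDK support in a DPDK-enabled build (the
daemon must be restarted afterwards):

    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true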
Specifies the CPU cores where dpdk lcore threads should be spawned.
The DPDK lcore threads are used for DPDK library tasks, such as
library internal message processing, logging, etc. Value should be in
the form of a hex string (so '0x123') similar to the 'taskset' mask
input.
The lowest order bit corresponds to the first CPU core. A set bit
means the corresponding core is available and an lcore thread will be
created and pinned to it. If the input does not cover all cores,
those uncovered cores are considered not set.
For performance reasons, it is best to set this to a single core on
the system, rather than allow lcore threads to float.
If not specified, the value will be determined by choosing the lowest
CPU core from the initial CPU affinity list. Otherwise, the value
will be passed directly to the DPDK library.
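As a sketch, pinning the lcore thread to CPU core 0 (a restart is
required, as with the other dpdk-* options):

    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1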
Specifies the CPU mask for setting the CPU affinity of PMD (Poll
Mode Driver) threads. Value should be in the form of a hex string,
similar to the DPDK EAL '-c COREMASK' option input or the 'taskset'
mask input.
The lowest order bit corresponds to the first CPU core. A set bit
means the corresponding core is available and a pmd thread will be
created and pinned to it. If the input does not cover all cores,
those uncovered cores are considered not set.
If not specified, one pmd thread will be created for each NUMA node
and pinned to any available core on that NUMA node by default.
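For example, to create PMD threads pinned to cores 1 and 2 (bits 1
and 2 of the mask; the choice of cores is illustrative):

    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6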
Specifies the amount of memory to preallocate from the hugepage pool,
regardless of socket. It is recommended that dpdk-socket-mem is used
instead.
Specifies the amount of memory to preallocate from the hugepage pool,
on a per-socket basis.
The specifier is a comma-separated string, in ascending order of CPU
socket, e.g. on a four-socket system 1024,0,2048 would set socket 0
to preallocate 1024MB, socket 1 to preallocate 0MB, socket 2 to
preallocate 2048MB and socket 3 (no value given) to preallocate 0MB.
If neither dpdk-socket-mem nor dpdk-alloc-mem is specified,
dpdk-socket-mem will be used with the default value 1024,0. If
dpdk-socket-mem and dpdk-alloc-mem are specified at the same time,
dpdk-socket-mem takes precedence. Changing this value requires
restarting the daemon.
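A minimal sketch for a hypothetical two-socket host, preallocating
1024MB on socket 0 and 2048MB on socket 1 (restart required):

    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,2048"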
Specifies the path to the hugetlbfs mount point.
If not specified, this will be guessed by the DPDK library (default
is /dev/hugepages). Changing this value requires restarting the
daemon.
Specifies additional EAL command line arguments for DPDK.
The default is empty. Changing this value requires restarting the
daemon.
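An illustrative sketch covering the two options above (the mount
point and the extra EAL argument are examples only; both require a
daemon restart):

    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir=/mnt/huge
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="--legacy-mem"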
Specifies a relative path from the daemon's run directory (described
above) to the vhost-user unix domain socket files. If this value is
unset, the sockets are put directly in the run directory.
Changing this value requires restarting the daemon.
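For instance, to place the sockets in a subdirectory of the run
directory (the directory name is illustrative):

    ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir=dpdkvhost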
Specifies the number of threads for software datapaths to use for
handling new flows. The default is the number of online CPU cores
minus the number of revalidators.
This configuration is per datapath. If you have more than one
software datapath (e.g. some system bridges and some netdev bridges),
then the total number of threads is n-handler-threads times the
number of software datapaths.
Specifies the number of threads for software datapaths to use for
revalidating flows in the datapath. Typically, there is a direct
correlation between the number of revalidator threads and the number
of flows allowed in the datapath. The default is the number of CPU
cores divided by four, plus one. If n-handler-threads is set, the
default changes to the number of CPU cores minus the number of
handler threads.
This configuration is per datapath. If you have more than one
software datapath (e.g. some system bridges and some netdev bridges),
then the total number of threads is n-revalidator-threads times the
number of software datapaths.
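A sketch of tuning both thread counts (the values are arbitrary;
remember they apply per software datapath):

    ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=4
    ovs-vsctl set Open_vSwitch . other_config:n-revalidator-threads=2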
Specifies the inverse probability (1/emc-insert-inv-prob) of a flow
being inserted into the Exact Match Cache (EMC). On average, one in
every emc-insert-inv-prob packets that generate a unique flow will
cause an insertion into the EMC.
A value of 1 will result in an insertion for every flow (1/1 = 100%),
whereas a value of zero will result in no insertions and essentially
disables the EMC.
Defaults to 100, i.e. there is a (1/100 =) 1% chance of EMC insertion.
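For example, to insert on average one in every 20 unique flows (a 5%
insertion probability; the value is illustrative):

    ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=20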
Limits the number of VLAN headers that can be matched to the
specified number. Further VLAN headers will be treated as payload,
e.g. a packet with more 802.1q headers than the limit will match
Ethernet type 0x8100.
Value 0 means unlimited. The actual number of supported VLAN headers
is the smallest of vlan-limit, the number of VLANs supported by Open
vSwitch userspace (currently 2), and the number supported by the
datapath.
If this value is absent, the default is currently 1. This maintains
backward compatibility with controllers that were designed for use
with Open vSwitch versions earlier than 2.8, which only supported one
VLAN.
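As a sketch, matching on two VLAN tags (e.g. for QinQ), subject to
the userspace and datapath limits described above:

    ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2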