This document aims to comprehensively document all of the fields, both standard and non-standard, supported by OpenFlow or Open vSwitch, regardless of origin.
A field is a property of a packet. Most familiarly, data fields are fields that can be extracted from a packet. Most data fields are copied directly from protocol headers, e.g. at layer 2, the Ethernet source and destination addresses, or the VLAN ID; at layer 3, the IPv4 or IPv6 source and destination; and at layer 4, the TCP or UDP ports. Other data fields are computed, e.g. the IP fragmentation field describes whether a packet is a fragment, but it is not copied directly from the IP header.
Data fields that are always present as a consequence of the basic networking technology in use are called root fields. Open vSwitch 2.7 and earlier considered Ethernet fields to be root fields, and this remains the default mode of operation for Open vSwitch bridges. When a packet is received from a non-Ethernet interface, such as a layer-3 LISP tunnel, Open vSwitch 2.7 and earlier force-fit the packet to this Ethernet-centric point of view by pretending that an Ethernet header is present whose Ethernet type indicates the packet's actual type (and whose source and destination addresses are all-zero).
Open vSwitch 2.8 and later implement the ``packet type-aware pipeline'' concept introduced in OpenFlow 1.5. Such a pipeline does not have any root fields. Instead, a new metadata field, packet_type, indicates the basic type of the packet, which can be Ethernet, IPv4, IPv6, or another type. For backward compatibility, by default Open vSwitch 2.8 imitates the behavior of Open vSwitch 2.7 and earlier. Later versions of Open vSwitch may change the default, and in the meantime controllers can turn off this legacy behavior, on a port-by-port basis, by setting options:packet_type to ptap in the Interface table. This is significant only for ports that can handle non-Ethernet packets, which is currently just LISP, VXLAN-GPE, and GRE tunnel ports. See ovs-vswitchd.conf.db(5) for more information.
Non-root data fields are not always present. A packet contains ARP fields, for example, only when its packet type is ARP or when it is an Ethernet packet whose Ethernet header indicates the Ethertype for ARP, 0x0806. In this documentation, we say that a field is applicable when it is present in a packet, and inapplicable when it is not. (These are not standard terms.) We refer to the conditions that determine whether a field is applicable as prerequisites. Some VLAN-related fields are a special case: these fields are always applicable for Ethernet packets, but have a designated value or bit that indicates whether a VLAN header is present, with the remaining values or bits indicating the VLAN header's content (if it is present).
An inapplicable field does not have a value, not even a nominal ``value'' such as all-zero-bits. In many circumstances, OpenFlow and Open vSwitch allow references only to applicable fields. For example, one may match (see Matching, below) a given field only if the match includes the field's prerequisite, e.g. matching an ARP field is only allowed if one also matches on Ethertype 0x0806 or, in a packet type-aware bridge, on the packet_type for ARP.
Sometimes a packet may contain multiple instances of a header. For example, a packet may contain multiple VLAN or MPLS headers, and tunnels can cause any data field to recur. OpenFlow and Open vSwitch do not address these cases uniformly. For VLAN and MPLS headers, only the outermost header is accessible, so that inner headers may be accessed only by ``popping'' (removing) the outer header. (Open vSwitch supports only a single VLAN header in any case.) For tunnels, e.g. GRE or VXLAN, the outer header and inner headers are treated as different data fields.
Many network protocols are built in layers as a stack of concatenated headers. Each header typically contains a ``next type'' field that indicates the type of the protocol header that follows, e.g. Ethernet contains an Ethertype and IPv4 contains a IP protocol type. The exceptional cases, where protocols are layered but an outer layer does not indicate the protocol type for the inner layer, or gives only an ambiguous indication, are troublesome. An MPLS header, for example, only indicates whether another MPLS header or some other protocol follows, and in the latter case the inner protocol must be known from the context. In these exceptional cases, OpenFlow and Open vSwitch cannot provide insight into the inner protocol data fields without additional context, and thus they treat all later data fields as inapplicable until an OpenFlow action explicitly specifies what protocol follows. In the case of MPLS, the OpenFlow ``pop MPLS'' action that removes the last MPLS header from a packet provides this context, as the Ethertype of the payload. See Layer 2.5: MPLS for more information.
OpenFlow and Open vSwitch support some fields other than data fields. Metadata fields relate to the origin or treatment of a packet, but they are not extracted from the packet data itself. One example is the physical port on which a packet arrived at the switch. Register fields act like variables: they give an OpenFlow switch space for temporary storage while processing a packet. Existing metadata and register fields have no prerequisites.
A field's value consists of an integral number of bytes. For data fields, sometimes those bytes are taken directly from the packet. Other data fields are copied from a packet with padding (usually with zeros and in the most significant positions). The remaining data fields are transformed in other ways as they are copied from the packets, to make them more useful for matching.
The most important use of fields in OpenFlow is matching, to determine whether particular field values agree with a set of constraints called a match. A match consists of zero or more constraints on individual fields, all of which must be met to satisfy the match. (A match that contains no constraints is always satisfied.) OpenFlow and Open vSwitch support a number of forms of matching on individual fields:
Exact match, e.g. ``nw_src=10.1.2.3''
Only a particular value of the field is matched; for example, only one particular source IP address. Exact matches are written as field=value. The forms accepted for value depend on the field. All fields support exact matches.
Bitwise match, e.g. ``nw_src=10.1.0.0/255.255.0.0''
Specific bits in the field must have specified values; for example, only source IP addresses in a particular subnet. Bitwise matches are written as field=value/mask, where value and mask take one of the forms accepted for an exact match on field. Some fields accept other forms for bitwise matches; for example, nw_src=10.1.0.0/255.255.0.0 may also be written nw_src=10.1.0.0/16.
Most OpenFlow switches do not allow every bitwise match on every field (and before OpenFlow 1.2, the protocol did not even provide for the possibility for most fields). Even switches that do allow bitwise matching on a given field may restrict the masks that are allowed, e.g. by allowing matches only on contiguous sets of bits starting from the most significant bit, that is, ``CIDR'' masks [RFC 4632]. Open vSwitch does not allow bitwise matching on every field, but it allows arbitrary bitwise masks on any field that does support bitwise matching. (Older versions had some restrictions, as documented in the descriptions of individual fields.)
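The relationship between CIDR prefix lengths and bitwise masks can be sketched with a small Python helper (illustrative only, not part of Open vSwitch): a CIDR mask is simply a mask whose 1-bits form a contiguous run starting at the most significant bit.

```python
def cidr_to_mask(prefix_len, width=32):
    """Return the bitwise mask equivalent to a CIDR /prefix_len:
    prefix_len one-bits starting from the most significant bit."""
    return ((1 << prefix_len) - 1) << (width - prefix_len)

def is_cidr_mask(mask, width=32):
    """True if mask equals cidr_to_mask(n) for some n, i.e. its one-bits
    are contiguous from the top; such masks are the only ones some
    switches accept, while Open vSwitch accepts arbitrary masks."""
    return any(mask == cidr_to_mask(n, width) for n in range(width + 1))
```

For example, cidr_to_mask(16) yields 0xffff0000, the integer form of 255.255.0.0, so nw_src=10.1.0.0/16 and nw_src=10.1.0.0/255.255.0.0 denote the same match.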
Wildcard, e.g. ``nw_src=*''
The value of the field is not constrained. Wildcarded fields may be written as field=*, although it is unusual to mention them at all. (When specifying a wildcard explicitly in a command invocation, be sure to use quoting to protect against shell expansion.)
There is a tiny difference between wildcarding a field and not specifying any match on a field: wildcarding a field requires satisfying the field's prerequisites.
Some types of matches on individual fields cannot be expressed directly with OpenFlow and Open vSwitch. These can be expressed indirectly:
Set match, e.g. ``tcp_dst ∈ {80, 443, 8080}''
The value of the field is one of a specified set of values; for example, the TCP destination port is 80, 443, or 8080. For matches used in flows (see Flows, below), multiple flows can simulate set matches.
Range match, e.g. ``1000 ≤ tcp_dst ≤ 1999''
The value of the field must lie within a numerical range, for example, TCP destination ports between 1000 and 1999.

Range matches can be expressed as a collection of bitwise matches. For example, suppose that the goal is to match TCP source ports 1000 to 1999, inclusive. The binary representations of 1000 and 1999 are:

01111101000
11111001111

The following series of bitwise matches will match 1000 and 1999 and all the values in between:

01111101xxx
0111111xxxx
10xxxxxxxxx
110xxxxxxxx
1110xxxxxxx
11110xxxxxx
1111100xxxx

which can be written as the following matches:

tcp,tp_src=0x03e8/0xfff8
tcp,tp_src=0x03f0/0xfff0
tcp,tp_src=0x0400/0xfe00
tcp,tp_src=0x0600/0xff00
tcp,tp_src=0x0700/0xff80
tcp,tp_src=0x0780/0xffc0
tcp,tp_src=0x07c0/0xfff0
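The decomposition above can be computed mechanically: repeatedly carve off the largest aligned power-of-two block that starts at the low end of the remaining range. A sketch in Python (a hypothetical helper, not an Open vSwitch API):

```python
def range_to_masks(lo, hi, width=16):
    """Decompose the inclusive integer range [lo, hi] into a list of
    (value, mask) bitwise matches.  Each match covers a block of 2**k
    values aligned on a 2**k boundary, so together they cover the
    range exactly."""
    full = (1 << width) - 1
    matches = []
    while lo <= hi:
        # Largest power-of-two block aligned at lo (lo & -lo isolates
        # the lowest set bit), shrunk until it fits in what remains.
        size = lo & -lo if lo else 1 << width
        while size > hi - lo + 1:
            size //= 2
        matches.append((lo, full & ~(size - 1)))
        lo += size
    return matches
```

Applied to the range 1000 to 1999, this produces the seven value/mask pairs listed above, starting with (0x03e8, 0xfff8) and ending with (0x07c0, 0xfff0).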
Inequality match, e.g. ``tcp_dst ≠ 80''
The value of the field differs from a specified value, for example, all TCP destination ports except 80.

An inequality match on an n-bit field can be expressed as a disjunction of n 1-bit matches. For example, the inequality match ``vlan_pcp ≠ 5'' can be expressed as ``vlan_pcp = 0/4 or vlan_pcp = 2/2 or vlan_pcp = 0/1.'' For matches used in flows (see Flows, below), sometimes one can more compactly express inequality as a higher-priority flow that matches the exceptional case paired with a lower-priority flow that matches the general case. Alternatively, an inequality match may be converted to a pair of range matches, e.g. ``tcp_src ≠ 80'' may be expressed as ``0 ≤ tcp_src < 80 or 80 < tcp_src ≤ 65535'', and then each range match may in turn be converted to a bitwise match.
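The disjunction of 1-bit matches follows directly from the observation that a value differs from the target exactly when at least one bit position holds the complement of the target's bit there. A sketch (hypothetical helper):

```python
def neq_to_masks(value, width):
    """Express 'field != value' on a width-bit field as a disjunction of
    width 1-bit (value, mask) matches: for each bit position, match the
    complement of value's bit at that position."""
    return [((~value >> b & 1) << b, 1 << b) for b in range(width)]
```

For ``vlan_pcp ≠ 5'' on the 3-bit vlan_pcp field, this yields the matches 0/1, 2/2, and 0/4, the same disjunction given above.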
Conjunctive match, e.g. ``tcp_src ∈ {80, 443, 8080} and tcp_dst ∈ {80, 443, 8080}''
The value of each of two or more fields must fall within its own specified set. A single flow cannot express this, and simply installing the set-match flows for each field yields a disjunction rather than a conjunction; Open vSwitch provides the conjunction action for this purpose (described later in this document).

All of these supported forms of matching are special cases of bitwise matching. In some cases this influences the design of field values. The IP fragmentation field is the most prominent example: it is designed to make all of the practically useful checks for IP fragmentation possible as a single bitwise match.
Some matches are very commonly used, so Open vSwitch accepts shorthand notations. In some cases, Open vSwitch also uses shorthand notations when it displays matches. The following shorthands are defined, with their long forms shown on the right side:
eth      packet_type=(0,0) (Open vSwitch 2.8 and later)
ip       eth_type=0x0800
ipv6     eth_type=0x86dd
icmp     eth_type=0x0800,ip_proto=1
icmp6    eth_type=0x86dd,ip_proto=58
tcp      eth_type=0x0800,ip_proto=6
tcp6     eth_type=0x86dd,ip_proto=6
udp      eth_type=0x0800,ip_proto=17
udp6     eth_type=0x86dd,ip_proto=17
sctp     eth_type=0x0800,ip_proto=132
sctp6    eth_type=0x86dd,ip_proto=132
arp      eth_type=0x0806
rarp     eth_type=0x8035
mpls     eth_type=0x8847
mplsm    eth_type=0x8848
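The table above amounts to a simple textual rewriting, which can be captured in a small Python mapping (an illustrative sketch of the expansion, not an Open vSwitch API):

```python
# Shorthand match notations and their long forms, mirroring the
# ovs-ofctl match syntax table above (illustrative mapping only).
SHORTHANDS = {
    "eth": "packet_type=(0,0)",   # Open vSwitch 2.8 and later
    "ip": "eth_type=0x0800",
    "ipv6": "eth_type=0x86dd",
    "icmp": "eth_type=0x0800,ip_proto=1",
    "icmp6": "eth_type=0x86dd,ip_proto=58",
    "tcp": "eth_type=0x0800,ip_proto=6",
    "tcp6": "eth_type=0x86dd,ip_proto=6",
    "udp": "eth_type=0x0800,ip_proto=17",
    "udp6": "eth_type=0x86dd,ip_proto=17",
    "sctp": "eth_type=0x0800,ip_proto=132",
    "sctp6": "eth_type=0x86dd,ip_proto=132",
    "arp": "eth_type=0x0806",
    "rarp": "eth_type=0x8035",
    "mpls": "eth_type=0x8847",
    "mplsm": "eth_type=0x8848",
}

def expand_match(match):
    """Rewrite any shorthand tokens in a comma-separated match string
    into their long forms, leaving other tokens untouched."""
    return ",".join(SHORTHANDS.get(tok, tok) for tok in match.split(","))
```

For example, the match ``tcp,tp_src=80'' expands to ``eth_type=0x0800,ip_proto=6,tp_src=80''.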
The discussion so far applies to all OpenFlow and Open vSwitch versions. This section starts to draw in specific information by explaining, in broad terms, the treatment of fields and matches in each OpenFlow version.
OpenFlow 1.0 defined the OpenFlow protocol format of a match as a fixed-length data structure that could match on the following fields:
Each supported field corresponded to some member of the data structure. Some members represented multiple fields, as in the case of the TCP, UDP, ICMPv4, and ARP fields, whose presence is mutually exclusive. This also meant that some members were poor fits for their fields: only the low 8 bits of the 16-bit ARP opcode could be represented, and the ICMPv4 type and code were padded with 8 bits of zeros to fit in the 16-bit members primarily meant for TCP and UDP ports. An additional bitmap member indicated, for each member, whether its field should be an ``exact'' or ``wildcarded'' match (see Matching), with additional support for CIDR prefix matching on the IPv4 source and destination fields.
Simplicity was recognized early on as the main virtue of this approach. Obviously, any fixed-length data structure cannot support matching new protocols that do not fit. There was no room, for example, for matching IPv6 fields, which was not a priority at the time. Lack of room to support matching the Ethernet addresses inside ARP packets actually caused more of a design problem later, leading to an Open vSwitch extension action specialized for dropping ``spoofed'' ARP packets in which the frame and ARP Ethernet source addresses differed. (This extension was never standardized. Open vSwitch dropped support for it a few releases after it added support for full ARP matching.)
The design of the OpenFlow fixed-length matches also illustrates compromises, in both directions, between the strengths and weaknesses of software and hardware that have always influenced the design of OpenFlow. Support for matching ARP fields that do fit in the data structure was only added late in the design process (and remained optional in OpenFlow 1.0), for example, because common switch ASICs did not support matching these fields.
The compromises in favor of software occurred for more complicated reasons. The OpenFlow designers did not know how to implement matching in software that was fast, dynamic, and general. (A way was later found [Srinivasan].) Thus, the designers sought to support dynamic, general matching that would be fast in realistic special cases, in particular when all of the matches were microflows, that is, matches that specify every field present in a packet, because such matches can be implemented as a single hash table lookup. Contemporary research supported the feasibility of this approach: the number of microflows in a campus network had been measured to peak at about 10,000 [Casado, section 3.2]. (Calculations show that this can only be true in a lightly loaded network [Pepelnjak].)
As a result, OpenFlow 1.0 required switches to treat microflow matches as the highest possible priority. This let software switches perform the microflow hash table lookup first. Only on failure to match a microflow did the switch need to fall back to checking the more general and presumed slower matches. Also, the OpenFlow 1.0 flow match was minimally flexible, with no support for general bitwise matching, partly on the basis that this seemed more likely amenable to relatively efficient software implementation. (CIDR masking for IPv4 addresses was added relatively late in the OpenFlow 1.0 design process.)
Microflow matching was later discovered to aid some hardware implementations. The TCAM chips used for matching in hardware do not support priority in the same way as OpenFlow but instead tie priority to ordering [Pagiamtzis]. Thus, adding a new match with a priority between the priorities of existing matches can require reordering an arbitrary number of TCAM entries. On the other hand, when microflows are highest priority, they can be managed as a set-aside portion of the TCAM entries.
The emphasis on matching microflows also led designers to carefully consider the bandwidth requirements between switch and controller: to maximize the number of microflow setups per second, one must minimize the size of each flow's description. This favored the fixed-length format in use, because it expressed common TCP and UDP microflows in fewer bytes than more flexible ``type-length-value'' (TLV) formats. (Early versions of OpenFlow also avoided TLVs in general to head off protocol fragmentation.)
OpenFlow 1.0 does not clearly specify how to treat inapplicable fields. The members for inapplicable fields are always present in the match data structure, as are the bits that indicate whether the fields are matched, and the ``correct'' member and bit values for inapplicable fields is unclear. OpenFlow 1.0 implementations changed their behavior over time as priorities shifted. The early OpenFlow reference implementation, motivated to make every flow a microflow to enable hashing, treated inapplicable fields as exact matches on a value of 0. Initially, this behavior was implemented in the reference controller only.
Later, the reference switch was also changed to actually force any wildcarded inapplicable fields into exact matches on 0. The latter behavior sometimes caused problems, because the modified flow was the one reported back to the controller later when it queried the flow table, and the modifications sometimes meant that the controller could not properly recognize the flow that it had added. In retrospect, perhaps this problem should have alerted the designers to a design error, but the ability to use a single hash table was held to be more important than almost every other consideration at the time.
When more flexible match formats were introduced much later, they disallowed any mention of inapplicable fields as part of a match. This raised the question of how to translate between this new format and the OpenFlow 1.0 fixed format. It seemed somewhat inconsistent and backward to treat fields as exact-match in one format and forbid matching them in the other, so instead the treatment of inapplicable fields in the fixed-length format was changed from exact match on 0 to wildcarding. (A better classifier had by now eliminated software performance problems with wildcards.)
The OpenFlow 1.0.1 errata (released only in 2012) added some additional explanation [OpenFlow 1.0.1, section 3.4], but it did not mandate specific behavior because of variation among implementations.
The OpenFlow 1.1 protocol match format was designed as a type/length/value (TLV) format to allow for future flexibility. The specification standardized only a single type, OFPMT_STANDARD (0), with a fixed-size payload, described here. The additional fields and bitwise masks in OpenFlow 1.1 cause this match structure to be over twice as large as in OpenFlow 1.0: 88 bytes versus 40.
OpenFlow 1.1 added support for the following fields:
OpenFlow 1.1 increased the width of the ingress port number field (and all other port numbers in the protocol) from 16 bits to 32 bits.
OpenFlow 1.1 increased matching flexibility by introducing arbitrary bitwise matching on Ethernet and IPv4 address fields and on the new ``metadata'' register field. Switches were not required to support all possible masks [OpenFlow 1.1, section 4.3].
By a strict reading of the specification, OpenFlow 1.1 removed support for matching ICMPv4 type and code [OpenFlow 1.1, section A.2.3], but this is likely an editing error because ICMP matching is described elsewhere [OpenFlow 1.1, Table 3, Table 4, Figure 4]. Open vSwitch does support ICMPv4 type and code matching with OpenFlow 1.1.
OpenFlow 1.1 avoided the pitfalls of inapplicable fields that OpenFlow 1.0 encountered, by requiring the switch to ignore the specified field values [OpenFlow 1.1, section A.2.3]. It also implied that the switch should ignore the bits that indicate whether to match inapplicable fields.
OpenFlow 1.1 introduced a new pseudo-field, the physical ingress port. The physical ingress port is only a pseudo-field because it cannot be used for matching. It appears only one place in the protocol, in the ``packet-in'' message that passes a packet received at the switch to an OpenFlow controller.
A packet's ingress port and physical ingress port are identical except for packets processed by a switch feature such as bonding or tunneling that makes a packet appear to arrive on a ``virtual'' port associated with the bond or the tunnel. For such packets, the ingress port is the virtual port and the physical ingress port is, naturally, the physical port. Open vSwitch implements both bonding and tunneling, but its bonding implementation does not use virtual ports and its tunnels are typically not on the same OpenFlow switch as their physical ingress ports (which need not be part of any switch), so the ingress port and physical ingress port are always the same in Open vSwitch.
OpenFlow 1.2 abandoned the fixed-length approach to matching. One reason was size, since adding support for IPv6 address matching (now seen as important), with bitwise masks, would have added 64 bytes to the match length, increasing it from 88 bytes in OpenFlow 1.1 to over 150 bytes. Extensibility had also become important as controller writers increasingly wanted support for new fields without having to change messages throughout the OpenFlow protocol. The challenges of carefully defining fixed-length matches to avoid problems with inapplicable fields had also become clear over time.
Therefore, OpenFlow 1.2 adopted a flow format using a flexible type-length-value (TLV) representation, in which each TLV expresses a match on one field. These TLVs were in turn encapsulated inside the outer TLV wrapper introduced in OpenFlow 1.1, with the new identifier OFPMT_OXM (1). (This wrapper fulfilled its intended purpose of reducing the amount of churn in the protocol when changing match formats; some messages that included matches remained unchanged from OpenFlow 1.1 to 1.2 and later versions.)
OpenFlow 1.2 added support for the following fields:
The OpenFlow 1.2 format, called OXM (OpenFlow Extensible Match), was modeled closely on an extension to OpenFlow 1.0 introduced in Open vSwitch 1.1 called NXM (Nicira Extended Match). Each OXM or NXM TLV has the following format:
The most significant 16 bits of the NXM or OXM header, called vendor by NXM and class by OXM, identify an organization permitted to allocate identifiers for fields. NXM allocates only two vendors, 0x0000 for fields supported by OpenFlow 1.0 and 0x0001 for fields implemented as an Open vSwitch extension. OXM assigns classes as follows:

0x0000 (OFPXMC_NXM_0)
Fields from the NXM 0x0000 vendor, for compatibility.

0x0001 (OFPXMC_NXM_1)
Fields from the NXM 0x0001 vendor, for compatibility.

0x8000 (OFPXMC_OPENFLOW_BASIC)
Fields standardized by OpenFlow.

0x8001 (OFPXMC_PACKET_REGS)
Packet register fields.

0xffff (OFPXMC_EXPERIMENTER)
Experimenter-defined fields.
When class is 0xffff, the OXM header is extended to 64 bits by using the first 32 bits of the body as an experimenter field whose most significant byte is zero and whose remaining bytes are an Organizationally Unique Identifier (OUI) assigned by the IEEE [IEEE OUI].
OpenFlow says that support for experimenter fields is optional. Open vSwitch 2.4 and later does support them, so that it can support the following experimenter classes: the ONF extension class (ONFOXM_ET) and the Nicira NSH extension class (NXOXM_NSH).
Taken as a unit, class (or vendor), field, and experimenter (when present) uniquely identify a particular field.
When hasmask (abbreviated HM) is 0, the OXM is an exact match on an entire field. In this case, the body (excluding the experimenter field, if present) is a single value to be matched.

When hasmask is 1, the OXM is a bitwise match. The body (excluding the experimenter field) consists of a value to match, followed by the bitwise mask to apply. A 1-bit in the mask indicates that the corresponding bit in the value should be matched and a 0-bit that it should be ignored. For example, for an IP address field, a value of 192.168.0.0 followed by a mask of 255.255.0.0 would match addresses in the 192.168.0.0/16 subnet.
A bitwise match whose mask is all zero bits constrains nothing and is therefore prohibited; Open vSwitch rejects such an OXM with the error OFPBMC_BAD_WILDCARDS, as required by OpenFlow 1.3 and later.
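The value/mask comparison described above can be sketched in Python, treating field values as integers (a hypothetical helper, not an Open vSwitch API):

```python
def ipv4_int(s):
    """Parse a dotted-quad IPv4 address into a 32-bit integer
    (helper for the demonstration below)."""
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def masked_match(value, mask, field_bits):
    """A value/mask bitwise match succeeds when, at every 1-bit
    position in mask, the packet's field agrees with value; 0-bit
    positions are ignored."""
    return (field_bits & mask) == (value & mask)
```

With value 192.168.0.0 and mask 255.255.0.0, any address in the 192.168.0.0/16 subnet matches, while addresses outside it do not.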
The length identifies the number of bytes in the body, including the 4-byte experimenter header, if it is present. Each OXM TLV has a fixed length; that is, given class, field, experimenter (if present), and hasmask, length is a constant. The length is included explicitly to allow software to minimally parse OXM TLVs of unknown types.
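The 32-bit OXM header layout described in this section (16-bit class, 7-bit field number, 1-bit hasmask, 8-bit body length, per the OpenFlow 1.2+ specifications) can be sketched as a pair of Python helpers (illustrative, not part of Open vSwitch):

```python
def oxm_header(oxm_class, field, hasmask, length):
    """Pack a 32-bit OXM header: class in the top 16 bits, then a
    7-bit field number, a 1-bit hasmask flag, and the 8-bit body
    length in bytes (the body doubles in size when a mask is present)."""
    assert 0 <= field < 0x80 and 0 <= length < 0x100
    return (oxm_class << 16) | (field << 9) | (int(hasmask) << 8) | length

def parse_oxm_header(header):
    """Split a 32-bit OXM header back into (class, field, hasmask,
    length); this minimal parse works even for unknown field types."""
    return (header >> 16, (header >> 9) & 0x7F, (header >> 8) & 1,
            header & 0xFF)
```

For example, the standard Ethertype field (class OFPXMC_OPENFLOW_BASIC = 0x8000, field 5, no mask, 2-byte body) packs to the well-known header constant 0x80000a02.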
OXM TLVs must be ordered so that a field's prerequisites are satisfied before it is parsed. For example, an OXM TLV that matches on the IPv4 source address field is only allowed following an OXM TLV that matches on the Ethertype for IPv4. Similarly, an OXM TLV that matches on the TCP source port must follow a TLV that matches an Ethertype of IPv4 or IPv6 and one that matches an IP protocol of TCP (in that order). The order of OXM TLVs is not otherwise restricted; no canonical ordering is defined.
A given field may be matched only once in a series of OXM TLVs.
OpenFlow 1.3 showed OXM to be largely successful, by adding new fields without making any changes to how flow matches otherwise worked. It added OXMs for the following fields supported by Open vSwitch:
OpenFlow 1.3 also added OXMs for the following fields not documented here and not yet implemented by Open vSwitch:
OpenFlow 1.4 added OXMs for the following fields not documented here and not yet implemented by Open vSwitch:
OpenFlow 1.5 added OXMs for the following fields supported by Open vSwitch:
The following sections document the fields that Open vSwitch supports. Each section provides introductory material on a group of related fields, followed by information on each individual field. In addition to field-specific information, each field begins with a table with entries for the following important properties:
Name
The name used for the field in ovs-ofctl commands. For historical reasons, some fields have an additional name that is accepted as an alternative in parsing. This name, when there is one, is listed as well, e.g. ``tun (aka tunnel_id).''
Format
How a value for the field is formatted or parsed by, e.g., ovs-ofctl. Some possibilities are generic:

hexadecimal
A hexadecimal number prefixed by 0x.

decimal
A decimal number. On input, also accepts hexadecimal numbers prefixed by 0x. (The default for parsing is not hexadecimal: only a 0x prefix causes input to be treated as hexadecimal.)

Ethernet
An Ethernet address in the format xx:xx:xx:xx:xx:xx.

IPv4
An IPv4 address in dotted-quad format, a.b.c.d. For bitwise matches, formats and accepts address/length CIDR notation in addition to address/mask.

OpenFlow port
A port number, or a standard OpenFlow port name (e.g. IN_PORT) in uppercase or lowercase.

Other, field-specific formats are explained along with their fields.
Prerequisites
Requirements that must be met to match on this field. For example, nw_src has IPv4 as a prerequisite, meaning that a match must include eth_type=0x0800 to match on the IPv4 source address. The following prerequisites, with their requirements, are currently in use:

VLAN VID
vlan_tci=0x1000/0x1000 (i.e. a VLAN header is present)

ARP
eth_type=0x0806 (ARP) or eth_type=0x8035 (RARP)

IPv4
eth_type=0x0800

IPv6
eth_type=0x86dd

MPLS
eth_type=0x8847 or eth_type=0x8848

TCP
ip_proto=6 (in addition to IPv4 or IPv6)

UDP
ip_proto=17 (in addition to IPv4 or IPv6)

SCTP
ip_proto=132 (in addition to IPv4 or IPv6)

ICMPv4
ip_proto=1 (in addition to IPv4)

ICMPv6
ip_proto=58 (in addition to IPv6)

ND solicit
icmp_type=135 and icmp_code=0 (in addition to ICMPv6)

ND advert
icmp_type=136 and icmp_code=0 (in addition to ICMPv6)

The TCP, UDP, and SCTP prerequisites also have the special requirement that nw_frag is not being used to select ``later fragments.'' This is because only the first fragment of a fragmented IPv4 or IPv6 datagram contains the TCP or UDP header.
Access
Whether the field is ``read/write,'' meaning that general-purpose OpenFlow actions such as set_field can modify it. Fields that are ``read-only'' cannot be modified in these general-purpose ways, although there may be other ways that actions can modify them.
These rows report the OXM and NXM code points that correspond to a given field. Either or both may be ``none.''
A field that has only an OXM code point is usually one that was standardized before it was added to Open vSwitch. A field that has only an NXM code point is usually one that is not yet standardized. When a field has both OXM and NXM code points, it usually indicates that it was introduced as an Open vSwitch extension under the NXM code point, then later standardized under the OXM code point. A field can have more than one OXM code point if it was standardized in OpenFlow 1.4 or later and additionally introduced as an official ONF extension for OpenFlow 1.3. (A field that has neither OXM nor NXM code point is typically an obsolete field that is supported in some other form using OXM or NXM.)
Each code point in these rows is described in the form ``NAME (number) since OpenFlow spec and Open vSwitch version,'' e.g. ``OXM_OF_ETH_TYPE (5) since OpenFlow 1.2 and Open vSwitch 1.7.'' First, NAME, which specifies a name for the code point, starts with a prefix that designates a class and, in some cases, a vendor, as listed in the following table:
For more information on OXM/NXM classes and vendors, refer back to OpenFlow 1.2 under Evolution of OpenFlow Fields. The number is the field number within the class and vendor. The OpenFlow spec is the version of OpenFlow that standardized the code point. It is omitted for NXM code points because they are nonstandard. The version is the version of Open vSwitch that first supported the code point.
An individual OpenFlow flow can match only a single value for each field. However, situations often arise where one wants to match one of a set of values within a field or fields. For matching a single field against a set, it is straightforward and efficient to add multiple flows to the flow table, one for each value in the set. For example, one might use the following flows to send packets with IP source address a, b, c, or d to the OpenFlow controller:
ip,ip_src=a actions=controller
ip,ip_src=b actions=controller
ip,ip_src=c actions=controller
ip,ip_src=d actions=controller
Similarly, these flows send packets with IP destination address e, f, g, or h to the OpenFlow controller:
ip,ip_dst=e actions=controller
ip,ip_dst=f actions=controller
ip,ip_dst=g actions=controller
ip,ip_dst=h actions=controller
Installing all of the above flows in a single flow table yields a disjunctive effect: a packet is sent to the controller if ip_src ∈ {a,b,c,d} or ip_dst ∈ {e,f,g,h} (or both). (Pedantically, if both of the above sets of flows are present in the flow table, they should have different priorities, because OpenFlow says that the results are undefined when two flows with the same priority can both match a single packet.)
Suppose, on the other hand, one wishes to match conjunctively, that is, to send a packet to the controller only if both ip_src ∈ {a,b,c,d} and ip_dst ∈ {e,f,g,h}. This requires 4 × 4 = 16 flows, one for each possible pairing of ip_src and ip_dst. That is acceptable for our small example, but it does not gracefully extend to larger sets or greater numbers of dimensions.
The conjunction action is a solution for conjunctive matches that is built into Open vSwitch. A conjunction action ties groups of individual OpenFlow flows into higher-level ``conjunctive flows''. Each group corresponds to one dimension, and each flow within the group matches one possible value for the dimension. A packet that matches one flow from each group matches the conjunctive flow.
To implement a conjunctive flow with conjunction, assign the conjunctive flow a 32-bit id, which must be unique within an OpenFlow table. Assign each of the n ≥ 2 dimensions a unique number from 1 to n; the ordering is unimportant. Add one flow to the OpenFlow flow table for each possible value of each dimension with conjunction(id, k/n) as the flow's actions, where k is the number assigned to the flow's dimension. Together, these flows specify the conjunctive flow's match condition. When the conjunctive match condition is met, Open vSwitch looks up one more flow that specifies the conjunctive flow's actions and receives its statistics. This flow is found by setting conj_id to the specified id and then again searching the flow table.
The following flows provide an example. Whenever the IP source is one of the values in the flows that match on the IP source (dimension 1 of 2), and the IP destination is one of the values in the flows that match on IP destination (dimension 2 of 2), Open vSwitch searches for a flow that matches conj_id against the conjunction ID (1234), finding the first flow listed below.
conj_id=1234 actions=controller
ip,ip_src=10.0.0.1 actions=conjunction(1234, 1/2)
ip,ip_src=10.0.0.4 actions=conjunction(1234, 1/2)
ip,ip_src=10.0.0.6 actions=conjunction(1234, 1/2)
ip,ip_src=10.0.0.7 actions=conjunction(1234, 1/2)
ip,ip_dst=10.0.0.2 actions=conjunction(1234, 2/2)
ip,ip_dst=10.0.0.5 actions=conjunction(1234, 2/2)
ip,ip_dst=10.0.0.7 actions=conjunction(1234, 2/2)
ip,ip_dst=10.0.0.8 actions=conjunction(1234, 2/2)
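Flow tables like the one above follow a regular pattern, so a controller can generate them mechanically. A sketch in Python (a hypothetical generator of ovs-ofctl-style flow strings, not an Open vSwitch API):

```python
def conjunction_flows(conj_id, dims, actions="controller"):
    """Generate ovs-ofctl-style flow strings for one conjunctive flow.
    dims is a list of (match, values) pairs, one per dimension; the
    dimension numbers k/n are assigned from the list order (1-based).
    The first flow carries the conjunctive flow's real actions, looked
    up via conj_id when every dimension is satisfied."""
    n = len(dims)
    flows = [f"conj_id={conj_id} actions={actions}"]
    for k, (match, values) in enumerate(dims, 1):
        for v in values:
            flows.append(f"{match}={v} actions=conjunction({conj_id}, {k}/{n})")
    return flows
```

Note the economy: matching m values in each of d dimensions takes m × d + 1 flows with conjunction, versus m**d flows for the explicit cross product.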
Many subtleties exist:
In the example above, every flow in dimension 1 matches on ip_src and every flow in dimension 2 matches on ip_dst, but this is not a requirement. Different flows within a dimension may match on different bits within a field (e.g. IP network prefixes of different lengths, or TCP/UDP port ranges as bitwise matches), or even on entirely different fields (e.g. to match packets for TCP source port 80 or TCP destination port 80).
A flow may have multiple conjunction
actions, with different id
values. This is useful for multiple conjunctive flows with
overlapping sets. If one conjunctive flow matches packets with both
ip_src ∈ {a,b} and ip_dst ∈
{d,e} and a second conjunctive flow matches ip_src
∈ {b,c} and ip_dst ∈ {f,g}, for
example, then the flow that matches ip_src=b would have two
conjunction actions, one for each conjunctive flow. The order
of conjunction actions within a list of actions is not
significant.
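The overlapping-set case can be sketched like this (the IDs and the address standing in for b are illustrative):

```shell
# ip_src=b belongs to dimension 1 of both conjunctive flows 1234 and 5678.
ovs-ofctl add-flow br0 \
    'ip,ip_src=10.0.0.11 actions=conjunction(1234, 1/2),conjunction(5678, 1/2)'
```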
A flow with conjunction
actions may also include note
actions for annotations, but not any other kind of actions. (They
would not be useful because they would never be executed.)
The flow table search for the flow with conj_id=
id is done in the same general-purpose way as
other flow table searches, so one can use flows with
conj_id=
id to act differently depending on
circumstances. (One exception is that the search for the
conj_id=
id flow itself ignores conjunctive flows, to
avoid recursion.) If the search with conj_id=
id fails,
Open vSwitch acts as if the conjunctive flow had not matched at all, and
continues searching the flow table for other matching flows.
OpenFlow prerequisite checking occurs for the flow with
conj_id=
id in the same way as any other flow, e.g. in
an OpenFlow 1.1+ context, putting a mod_nw_src
action into the example
above would require adding an ip
match, like this:
conj_id=1234,ip actions=mod_nw_src:1.2.3.4,controller
Sometimes there is a choice of which flows include a particular match.
For example, suppose that we added an extra constraint to our example,
to match on ip_src
∈
{a,b,c,d} and
ip_dst
∈
{e,f,g,h} and
tcp_dst
= i. One way to implement this is to
add the new constraint to the conj_id
flow, like this:
conj_id=1234,tcp,tcp_dst=i actions=mod_nw_src:1.2.3.4,controller
but this is not recommended because of the cost of the extra flow table lookup. Instead, add the constraint to the individual flows, either in one of the dimensions or (slightly better) all of them.
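Under that recommendation, the extra tcp_dst constraint moves into the dimension flows themselves. A sketch with illustrative values (port 80 standing in for i, one flow shown per dimension):

```shell
# Add tcp_dst to every flow in each dimension instead of to the conj_id flow.
ovs-ofctl add-flow br0 'tcp,ip_src=10.0.0.1,tcp_dst=80 actions=conjunction(1234, 1/2)'
ovs-ofctl add-flow br0 'tcp,ip_dst=10.0.0.2,tcp_dst=80 actions=conjunction(1234, 2/2)'
# The conj_id flow no longer needs the tcp_dst match.
ovs-ofctl add-flow br0 'conj_id=1234,ip actions=mod_nw_src:1.2.3.4,controller'
```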
The fields in this group relate to tunnels, which Open vSwitch supports in several forms (GRE, VXLAN, and so on). Most of these fields do appear in the wire format of a packet, so they are data fields from that point of view, but they are metadata from an OpenFlow flow table point of view because they do not appear in packets that are forwarded to the controller or to ordinary (non-tunnel) output ports.
Open vSwitch supports a spectrum of usage models for mapping tunnels to OpenFlow ports:
In this model, an OpenFlow port represents one tunnel: it matches a particular type of tunnel traffic between two IP endpoints, with a particular tunnel key (if keys are in use). In this situation, the ingress port suffices to distinguish one tunnel from another, so the tunnel header fields have little importance for OpenFlow processing. (They are still populated and may be used if it is convenient.) The tunnel header fields play no role in sending packets out such an OpenFlow port, either, because the OpenFlow port itself fully specifies the tunnel headers.
The following Open vSwitch commands create a bridge
br-int
, add port tap0
to the bridge as
OpenFlow port 1, establish a port-based GRE tunnel between the local
host and remote IP 192.168.1.1 using GRE key 5001 as OpenFlow port 2,
and arrange to forward all traffic from tap0
to the
tunnel and vice versa:
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int tap0 -- set interface tap0 ofport_request=1
ovs-vsctl add-port br-int gre0 -- set interface gre0 ofport_request=2 type=gre \
    options:remote_ip=192.168.1.1 options:key=5001
ovs-ofctl add-flow br-int in_port=1,actions=2
ovs-ofctl add-flow br-int in_port=2,actions=1
In this model, one OpenFlow port represents all possible tunnels of a given type with an endpoint on the current host, for example, all GRE tunnels. In this situation, the ingress port only indicates that traffic was received on the particular kind of tunnel. This is where the tunnel header fields are most important: they allow the OpenFlow tables to discriminate among tunnels based on their IP endpoints or keys. Tunnel header fields also determine the IP endpoints and keys of packets sent out such a tunnel port.
The following Open vSwitch commands create a bridge
br-int
, add port tap0
to the
bridge as OpenFlow port 1, establish a flow-based GRE tunnel as OpenFlow
port 3, and arrange to forward all traffic from
tap0
to remote IP 192.168.1.1 over a GRE tunnel
with key 5001 and vice versa:
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int tap0 -- set interface tap0 ofport_request=1
ovs-vsctl add-port br-int allgre -- set interface allgre ofport_request=3 type=gre \
    options:remote_ip=flow options:key=flow
ovs-ofctl add-flow br-int \
    'in_port=1 actions=set_tunnel:5001,set_field:192.168.1.1->tun_dst,3'
ovs-ofctl add-flow br-int 'in_port=3,tun_src=192.168.1.1,tun_id=5001 actions=1'
One may define both flow-based and port-based tunnels at the
same time. For example, it is valid and possibly useful to
create and configure both gre0
and
allgre
tunnel ports described above.
Traffic is attributed on ingress to the most specific
matching tunnel. For example, gre0
is more
specific than allgre
. Therefore, if both
exist, then gre0
will be the ingress port for any
GRE traffic received from 192.168.1.1 with key 5001.
On egress, traffic may be directed to any appropriate tunnel
port. If both gre0
and allgre
are
configured as already described, then the actions
2
and
set_tunnel:5001,set_field:192.168.1.1->tun_dst,3
send the same tunnel traffic.
ovs-vswitchd.conf.db
(5) describes all the details of tunnel
configuration.
These fields do not have any prerequisites, which means that a flow may match on any or all of them, in any combination.
These fields are zeros for packets that did not arrive on a tunnel.
Many kinds of tunnels support a tunnel ID; for example, GRE has an optional 32-bit key, VXLAN and Geneve carry 24-bit VNIs, and LISP has a 24-bit instance ID.
When a packet is received from a tunnel, this field holds the tunnel ID in its least significant bits, zero-extended to fit. This field is zero if the tunnel does not support an ID, or if no ID is in use for a tunnel type that has an optional ID, or if an ID of zero was received, or if the packet was not received over a tunnel.
When a packet is output to a tunnel port, the tunnel configuration determines whether the tunnel ID is taken from this field or bound to a fixed value. See the earlier description of ``port-based'' and ``flow-based'' tunnels for more information.
The following diagram shows the origin of this field in a typical keyed GRE tunnel:
When a packet is received from a tunnel, this field is the source address in the outer IP header of the tunneled packet. This field is zero if the packet was not received over a tunnel.
When a packet is output to a flow-based tunnel port, this field influences the IPv4 source address used to send the packet. If it is zero, then the kernel chooses an appropriate IP address using the routing table.
The following diagram shows the origin of this field in a typical keyed GRE tunnel:
When a packet is received from a tunnel, this field is the destination address in the outer IP header of the tunneled packet. This field is zero if the packet was not received over a tunnel.
When a packet is output to a flow-based tunnel port, this field specifies the destination to which the tunnel packet is sent.
The following diagram shows the origin of this field in a typical keyed GRE tunnel:
The VXLAN header is defined as follows [RFC 7348], where the
I
bit must be set to 1, unlabeled bits or those labeled
reserved
must be set to 0, and Open vSwitch makes the VNI
available via tun_id:
VXLAN Group-Based Policy [VXLAN Group Policy Option] adds new interpretations to existing bits in the VXLAN header, reinterpreting it as follows, with changes highlighted:
Open vSwitch makes GBP fields and flags available through the following fields. Only packets that arrive over a VXLAN tunnel with the GBP extension enabled have these fields set. In other packets they are zero on receive and ignored on transmit.
For a packet tunneled over VXLAN with the Group-Based Policy (GBP) extension, this field represents the GBP policy ID, as shown above.
For a packet tunneled over VXLAN with the Group-Based Policy (GBP) extension, this field represents the GBP policy flags, as shown above.
The field has the format shown below:
Unlabeled bits are reserved and must be transmitted as 0. The VXLAN GBP draft defines the other bits' meanings as:
D (Don't Learn): when set, the egress VTEP must not learn the source address of the encapsulated frame.
A (Applied): when set, indicates that the group policy has already been applied to this packet.
These fields provide access to additional features in the Geneve tunneling protocol [Geneve]. Their names are somewhat generic in the hope that the same fields could be reused for other protocols in the future; for example, the NSH protocol [NSH] supports TLV options whose form is identical to that for Geneve options.
The above information specifically covers generic tunnel option 0, but Open vSwitch supports 64 options, numbered 0 through 63, whose NXM field numbers are 40 through 103.
These fields provide OpenFlow access to the generic type-length-value options defined by the Geneve tunneling protocol or other protocols with options in the same TLV format as Geneve options. Each of these options has the following wire format:
Taken together, the class
and type
in the
option format mean that there are about 16 million distinct kinds of
TLV options, too many to give individual OXM code points. Thus, Open
vSwitch requires the user to define the TLV options of interest, by
binding up to 64 TLV options to generic tunnel option NXM code points.
Each option may have up to 124 bytes in its body, the maximum allowed
by the TLV format, but bound options may total at most 252 bytes of
body.
Open vSwitch extensions to the OpenFlow protocol bind TLV options to
NXM code points. The ovs-ofctl
(8) program offers one way
to use these extensions, e.g. to configure a mapping from a TLV option
with class
0xffff
, type
0
, and a body length of 4 bytes:
ovs-ofctl add-tlv-map br0 "{class=0xffff,type=0,len=4}->tun_metadata0"
Once a TLV option is properly bound, it can be accessed and modified like any other field, e.g. to send packets that have value 1234 for the option described above to the controller:
ovs-ofctl add-flow br0 tun_metadata0=1234,actions=controller
An option not received or not bound is matched as all zeros.
Flags indicating various aspects of the tunnel encapsulation.
Matches on this field are most conveniently written in terms of
symbolic names (given in the diagram below), each preceded by either
+
for a flag that must be set, or -
for a
flag that must be unset, without any other delimiters between the
flags. Flags not mentioned are wildcarded. For example,
tun_flags=+oam
matches only OAM packets. Matches can also
be written as flags/mask
, where
flags and mask are 16-bit numbers in decimal or
in hexadecimal prefixed by 0x
.
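As a sketch, a pair of flows that separates tunnel OAM control packets from ordinary tunnel traffic (the port numbers are illustrative):

```shell
# Steer tunnel OAM packets to the controller.
ovs-ofctl add-flow br0 'tun_flags=+oam actions=controller'
# Non-OAM traffic arriving on tunnel port 3 proceeds normally.
ovs-ofctl add-flow br0 'tun_flags=-oam,in_port=3 actions=1'
```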
Currently, only one flag is defined: oam, which marks a tunnel OAM (Operations and Maintenance) control packet.
The switch may reject matches against unknown flags.
Newer versions of Open vSwitch may introduce additional flags with new meanings. An exact match on this field is therefore not recommended, because flows that use one would unintentionally match on any flags introduced later.
For non-tunneled packets, the value is 0.
These fields relate to the origin or treatment of a packet, but they are not extracted from the packet data itself.
The OpenFlow port on which the packet being processed arrived. This is a 16-bit field that holds an OpenFlow 1.0 port number. For receiving a packet, the only values that appear in this field are:
Ordinary OpenFlow ports, whose numbers range from 1 to 0xfeff (65,279), inclusive.
OFPP_LOCAL (0xfffe or 65,534): the ``local'' port, which in Open vSwitch is always named the same as the bridge itself. This represents a connection between the switch and the local TCP/IP stack. This port is where an IP address is most commonly configured on an Open vSwitch switch.
OpenFlow does not require a switch to have a local port, but all existing versions of Open vSwitch have always included a local port. Future Directions: Future versions of Open vSwitch might be able to optionally omit the local port, if someone submits code to implement such a feature.
OFPP_NONE (OpenFlow 1.0) or OFPP_ANY (OpenFlow 1.1+) (0xffff or 65,535), and OFPP_CONTROLLER (0xfffd or 65,533): when a controller injects a packet into an OpenFlow switch with a ``packet-out'' request, it can specify one of these ingress ports to indicate that the packet was generated internally rather than having been received on some port.
OpenFlow 1.0 specified OFPP_NONE
for this
purpose. Despite that, some controllers used
OFPP_CONTROLLER
, and some switches only
accepted OFPP_CONTROLLER
, so OpenFlow 1.0.2
required support for both ports. OpenFlow 1.1 and later
were more clearly drafted to allow only
OFPP_CONTROLLER
. For maximum compatibility,
Open vSwitch allows both ports with all OpenFlow versions.
Values not mentioned above will never appear when receiving a packet, including the following notable values:
OFPP_MAX (0xff00 or 65,280): Open vSwitch does not allow OFPP_MAX to be used as a port number, so packets will never be received on this port. (Other OpenFlow switches, of course, might use it.)
OFPP_UNSET (0xfff7 or 65,527), OFPP_IN_PORT (0xfff8 or 65,528), OFPP_TABLE (0xfff9 or 65,529), OFPP_NORMAL (0xfffa or 65,530), OFPP_FLOOD (0xfffb or 65,531), and OFPP_ALL (0xfffc or 65,532): these port numbers are used only in output actions and never appear as ingress ports.
Most of these port numbers were defined in OpenFlow 1.0, but
OFPP_UNSET
was only introduced in OpenFlow 1.5.
Values that will never appear when receiving a packet may still be matched against in the flow table. There are still circumstances in which those flows can be matched:
The resubmit Open vSwitch extension action allows a flow table lookup with an arbitrary ingress port.
An action that modifies the ingress port field, such as load or set_field, followed by an action or instruction that performs another flow table lookup, such as resubmit or goto_table.
This field is heavily used for matching in OpenFlow tables, but for packet egress, it has only very limited roles:
OpenFlow requires suppressing output actions to the ingress port. That is, the following two flows both drop all packets that arrive on port 1:
in_port=1,actions=1
in_port=1,actions=drop
(This behavior is occasionally useful for flooding to a
subset of ports. Specifying actions=1,2,3,4
,
for example, outputs to ports 1, 2, 3, and 4, omitting the
ingress port.)
OpenFlow has a special port, OFPP_IN_PORT (with
value 0xfff8), that outputs to the ingress port. For example,
in a switch that has four ports numbered 1 through 4,
actions=1,2,3,4,in_port
outputs to ports 1, 2,
3, and 4, including the ingress port.
Because the ingress port field has so little influence on packet
processing, it does not ordinarily make sense to modify the
ingress port field. The field is writable only to support the
occasional use case where the ingress port's roles in packet
egress, described above, become troublesome. For example,
actions=load:0->NXM_OF_IN_PORT[],output:123
will output to port 123 regardless of whether it is the
ingress port. If the ingress port is important, then one may save
and restore it on the stack:
actions=push:NXM_OF_IN_PORT[],load:0->NXM_OF_IN_PORT[],output:123,pop:NXM_OF_IN_PORT[]
or, in Open vSwitch 2.7 or later, use the clone
action to
save and restore it:
actions=clone(load:0->NXM_OF_IN_PORT[],output:123)
The ability to modify the ingress port is an Open vSwitch extension to OpenFlow.
OpenFlow 1.1 and later use a 32-bit port number, so this field supplies a 32-bit view of the ingress port. Current versions of Open vSwitch support only a 16-bit range of ports:
0x0000
to
0xfeff
, inclusive, map to OpenFlow 1.1
port numbers with the same values.
0xff00
to
0xffff
, inclusive, map to OpenFlow 1.1 port
numbers 0xffffff00
to 0xffffffff
.
OpenFlow 1.1 port numbers 0x0000ff00
to 0xfffffeff
are not mapped and not supported.
in_port and in_port_oxm are two views of the same information, so all of the comments on in_port apply to in_port_oxm too. Modifying in_port changes in_port_oxm, and vice versa.
Setting in_port_oxm to an unsupported value yields unspecified behavior.
Future Directions: Open vSwitch implements the output queue as a
field, but does not currently expose it through OXM or NXM for matching
purposes. If this turns out to be a useful feature, it could be
implemented in future versions. Only the set_queue
,
enqueue
, and pop_queue
actions currently
influence the output queue.
This field influences how packets in the flow will be queued, for quality of service (QoS) purposes, when they egress the switch. Its range of meaningful values, and their meanings, varies greatly from one OpenFlow implementation to another. Even within a single implementation, there is no guarantee that all OpenFlow ports have the same queues configured or that all OpenFlow ports in an implementation can be configured the same way queue-wise.
Configuring queues on OpenFlow is not well standardized. On
Linux, Open vSwitch supports queue configuration via OVSDB,
specifically the QoS
and Queue
tables (see ovs-vswitchd.conf.db(5)
for details).
Ports of Open vSwitch to other platforms might require queue
configuration through some separate protocol (such as a CLI).
Even on Linux, Open vSwitch exposes only a fraction of the
kernel's queuing features through OVSDB, so advanced or
unusual uses might require use of separate utilities
(e.g. tc
). OpenFlow switches other than Open
vSwitch might use OF-CONFIG or any of the configuration
methods mentioned above. Finally, some OpenFlow switches have
a fixed number of fixed-function queues (e.g. eight queues
with strictly defined priorities) and others do not support
any control over queuing.
The only output queue that all OpenFlow implementations must support is zero, to identify a default queue, whose properties are implementation-defined. Outputting a packet to a queue that does not exist on the output port yields unpredictable behavior: among the possibilities are that the packet might be dropped or transmitted with a very high or very low priority.
OpenFlow 1.0 only allowed output queues to be specified as part of an
enqueue
action that specified both a queue and an output
port. That is, OpenFlow 1.0 treats the queue as an argument to an
action, not as a field.
To increase flexibility, OpenFlow 1.1 added an action to set the output queue. This model was carried forward, without change, through OpenFlow 1.5.
Open vSwitch implements the native queuing model of each OpenFlow version it supports. Open vSwitch also includes an extension for setting the output queue as an action in OpenFlow 1.0.
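As a sketch, the same queue assignment in the two models might look like this (the queue and port numbers are illustrative and assume queues configured via the QoS and Queue tables):

```shell
# OpenFlow 1.1+ model: set the output queue as a field, then output.
ovs-ofctl add-flow br0 'in_port=1 actions=set_queue:1,output:2'
# OpenFlow 1.0 model: queue and port together in one enqueue action.
ovs-ofctl add-flow br0 'in_port=1 actions=enqueue(2,1)'
```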
When a packet ingresses into an OpenFlow switch, the output queue is ordinarily set to 0, indicating the default queue. However, Open vSwitch supports various ways to forward a packet from one OpenFlow switch to another within a single host. In these cases, Open vSwitch maintains the output queue across the forwarding step. For example:
When a flow sets the output queue then outputs to an OpenFlow tunnel port, the encapsulation preserves the output queue. If the kernel TCP/IP stack routes the encapsulated packet directly to a physical interface, then that output honors the output queue. Alternatively, if the kernel routes the encapsulated packet to another Open vSwitch bridge, then the output queue set previously becomes the initial output queue on ingress to the second bridge and will thus be used for further output actions (unless overridden by a new ``set queue'' action).
(This description reflects the current behavior of Open vSwitch on Linux. This behavior relies on details of the Linux TCP/IP stack. It could be difficult to make ports to other operating systems behave the same way.)
Packet mark comes to Open vSwitch from the Linux kernel, in
which the sk_buff
data structure that represents
a packet contains a 32-bit member named skb_mark
.
The value of skb_mark
propagates along with the
packet it accompanies wherever the packet goes in the kernel.
It has no predefined semantics but various kernel-user
interfaces can set and match on it, which makes it suitable
for ``marking'' packets at one point in their handling and
then acting on the mark later. With iptables
,
for example, one can mark some traffic specially at ingress
and then handle that traffic differently at egress based on
the marked value.
Packet mark is an attempt at a generalization of the
skb_mark
concept beyond Linux, at least through more
generic naming. Like the output queue, packet mark is
preserved across forwarding steps within a machine. Unlike the output queue, packet mark has no direct effect on packet
forwarding: the value set in packet mark does not matter unless some
later OpenFlow table or switch matches on packet mark, or unless the
packet passes through some other kernel subsystem that has been
configured to interpret packet mark in specific ways, e.g. through
iptables
configuration mentioned above.
Preserving packet mark across kernel forwarding steps relies heavily on kernel support, which ports to non-Linux operating systems may not have. Regardless of operating system support, Open vSwitch supports packet mark within a single bridge and across patch ports.
The value of packet mark when a packet ingresses into the first Open vSwitch bridge is typically zero, but it could be nonzero if its value was previously set by some kernel subsystem.
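As a sketch, one flow can mark traffic and a later lookup (or another kernel subsystem such as iptables) can act on the mark (the mark value 42 and bridge names are illustrative):

```shell
# Mark web traffic on ingress to the first bridge...
ovs-ofctl add-flow br0 'tcp,tcp_dst=80 actions=set_field:42->pkt_mark,normal'
# ...and act on the preserved mark elsewhere, e.g. on a second bridge.
ovs-ofctl add-flow br1 'pkt_mark=42 actions=drop'
```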
Holds the output port currently in the OpenFlow action set (i.e. from
an output
action within a write_actions
instruction). Its value is an OpenFlow port number. If there is no
output port in the OpenFlow action set, or if the output port will be
ignored (e.g. because there is an output group in the OpenFlow action
set), then the value will be OFPP_UNSET
.
Open vSwitch allows any table to match this field. OpenFlow, however, only requires this field to be matchable from within an OpenFlow egress table (a feature that Open vSwitch does not yet implement).
The type of the packet in the format specified in OpenFlow 1.5:
The upper 16 bits, ns, are a namespace. The meaning of
ns_type depends on the namespace. The packet type field is
specified and displayed in the format
(ns,ns_type)
.
Open vSwitch currently supports the following classes of packet types for matching:
(0,0): a packet that begins with an Ethernet header.
(1,ethertype): a packet of the specified ethertype. Open vSwitch can forward packets with any ethertype, but it can only match on and process data fields for the following supported packet types:
(1,0x800): IPv4
(1,0x806): ARP
(1,0x86dd): IPv6
(1,0x8847): MPLS
(1,0x8848): MPLS multicast
(1,0x8035): RARP
(1,0x894f): NSH
Consider the distinction between a packet with packet_type=(0,0),
dl_type=0x800
and one with packet_type=(1,0x800)
.
The former is an Ethernet frame that contains an IPv4 packet, like
this:
The latter is an IPv4 packet not encapsulated inside any outer frame, like this:
Matching on packet_type is a prerequisite for matching
on any data field, but for backward compatibility, when a match on a
data field is present without a packet_type match, Open
vSwitch acts as though a match on (0,0) (Ethernet) had
(Ethernet) had
been supplied. Similarly, when Open vSwitch sends flow match
information to a controller, e.g. in a reply to a request to dump the
flow table, Open vSwitch omits a match on packet type (0,0) if it would
be implied by a data field match.
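As a sketch, matching the two cases above looks like this (the address and output port are illustrative; the second flow assumes a port configured with options:packet_type=ptap so that bare IPv4 packets can appear):

```shell
# An Ethernet frame containing IPv4 (the packet_type match could be omitted):
ovs-ofctl add-flow br0 'packet_type=(0,0),ip,ip_dst=10.0.0.1 actions=2'
# A bare IPv4 packet, e.g. from a layer-3 tunnel port:
ovs-ofctl add-flow br0 'packet_type=(1,0x800),ip_dst=10.0.0.1 actions=2'
```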
Open vSwitch 2.5 and later support ``connection tracking,'' which allows bidirectional streams of packets to be statefully grouped into connections. Open vSwitch connection tracking, for example, identifies the patterns of TCP packets that indicate a successfully initiated connection, as well as those that indicate that a connection has been torn down. Open vSwitch connection tracking can also identify related connections, such as FTP data connections spawned from FTP control connections.
An individual packet passing through the pipeline may be in one of two
states, ``untracked'' or ``tracked,'' which may be distinguished via the
``trk'' flag in ct_state. A packet is
untracked at the beginning of the Open vSwitch pipeline and
continues to be untracked until the pipeline invokes the ct
action. The connection tracking fields are all zeroes in an untracked
packet. When a flow in the Open vSwitch pipeline invokes the
ct
action, the action initializes the connection tracking
fields and the packet becomes tracked for the remainder of its
processing.
The connection tracker stores connection state in an internal table, but
it only adds a new entry to this table when a ct
action for
a new connection invokes ct
with the commit
parameter. For a given connection, when a pipeline has executed
ct
, but not yet with commit
, the connection is
said to be uncommitted. State for an uncommitted connection
is ephemeral and does not persist past the end of the pipeline, so some
features are only available to committed connections. A connection would
typically be left uncommitted as a way to drop its packets.
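A minimal stateful-firewall sketch using these ideas (the table and port numbers are illustrative): untracked packets are run through the tracker, new connections from port 1 are committed, reply and established traffic is allowed, and everything else stays uncommitted and is dropped by the table's default behavior:

```shell
# Send untracked IP traffic through the connection tracker, then to table 1.
ovs-ofctl add-flow br0 'table=0,priority=10,ip,ct_state=-trk actions=ct(table=1)'
# Commit new connections arriving on port 1 and forward them.
ovs-ofctl add-flow br0 'table=1,in_port=1,ip,ct_state=+trk+new actions=ct(commit),2'
# Allow traffic belonging to established connections in either direction.
ovs-ofctl add-flow br0 'table=1,ip,ct_state=+trk+est actions=normal'
```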
Connection tracking is an Open vSwitch extension to OpenFlow.
This field holds several flags that can be used to determine the state of the connection to which the packet belongs.
Matches on this field are most conveniently written in terms of
symbolic names (listed below), each preceded by either +
for a flag that must be set, or -
for a flag that must be
unset, without any other delimiters between the flags. Flags not
mentioned are wildcarded. For example,
tcp,ct_state=+trk-new
matches TCP packets that have been
run through the connection tracker and do not establish a new
connection. Matches can also be written as
flags/mask
, where flags
and mask are 32-bit numbers in decimal or in hexadecimal
prefixed by 0x
.
The following flags are defined:
new (0x01): this is the beginning of a new connection.
est (0x02): this is part of an already-existing connection.
rel (0x04): related to an existing connection, e.g. an ICMP ``destination unreachable'' message or an FTP data connection. This flag will only be 1 if the connection to which this one is related is committed.
Connections identified as rel
are separate from the
originating connection and must be committed separately. All
packets for a related connection will have the rel
flag set, not just the initial packet.
rpl (0x08): this packet is in the reply direction, i.e. the direction opposite to the packet that initiated the connection.
inv (0x10): the state is invalid, meaning that the connection tracker couldn't identify the connection. This flag is a catch-all for problems in the connection or the connection tracker, such as:
An L3/L4 protocol handler is not loaded or unavailable. With the Linux kernel datapath, this means that the nf_conntrack_ipv4 or nf_conntrack_ipv6 modules are not loaded.
The L3/L4 protocol handler determines that the packet is malformed.
trk (0x20): this packet is tracked, meaning that it has previously been run through the connection tracker by a ct action.
snat (0x40): this packet was transformed by source address/port translation by a preceding ct action. Open vSwitch 2.6 added this flag.
dnat (0x80): this packet was transformed by destination address/port translation by a preceding ct action. Open vSwitch 2.6 added this flag.
There are additional constraints on these flags, listed in decreasing order of precedence below:
If trk is unset, no other flags are set.
If trk is set, one or more other flags may be set.
If inv is set, only the trk flag is also set.
new and est are mutually exclusive.
new and rpl are mutually exclusive.
rel may be set in conjunction with any other flags.
Future versions of Open vSwitch may define new flags.
A connection tracking zone, the zone value passed to the most recent
ct action. Each zone is an independent connection tracking
context, so tracking the same packet in multiple contexts requires using
the ct action multiple times.
The connection tracking mark (ct_mark) committed, by an action within the
exec parameter to the ct
action, to the connection to which the current packet belongs.
The connection tracking label (ct_label) committed, by an action within the
exec parameter to the ct
action, to the connection to which the current packet belongs.
Open vSwitch 2.8 introduced matching support for the connection tracker original direction 5-tuple fields.
For non-committed non-related connections the conntrack original
direction tuple fields always have the same values as the
corresponding headers in the packet itself. For any other packets of
a committed connection the conntrack original direction tuple fields
reflect the values from that initial non-committed non-related packet,
and thus may be different from the actual packet headers, as the
actual packet headers may be in reverse direction (for reply packets),
transformed by NAT (when nat
option was applied to the
connection), or be of a different protocol (i.e., when an ICMP response
is sent to a UDP packet). In case of related connections, e.g., an
FTP data connection, the original direction tuple contains the
original direction headers from the master connection, e.g., an FTP
control connection.
The following fields are populated by the ct action, and require a
match to a valid connection tracking state as a prerequisite, in
addition to the IP or IPv6 ethertype match. Examples of valid
connection tracking state matches include ct_state=+new
,
ct_state=+est
, ct_state=+rel
, and
ct_state=+trk-inv
.
MFF_CT_NW_PROTO
has value 6 for TCP, 17 for UDP, or
132 for SCTP. When MFF_CT_NW_PROTO
has value 1 for
ICMP, or 58 for ICMPv6, the lower 8 bits of
MFF_CT_TP_SRC
matches the conntrack original
direction ICMP type. See the paragraphs above for a general
description of the conntrack original direction
tuple. Introduced in Open vSwitch 2.8.
MFF_CT_NW_PROTO
has value 6 for TCP, 17 for UDP, or
132 for SCTP. When MFF_CT_NW_PROTO
has value 1 for
ICMP, or 58 for ICMPv6, the lower 8 bits of
MFF_CT_TP_DST
matches the conntrack original
direction ICMP code. See the paragraphs above for a general
description of the conntrack original direction
tuple. Introduced in Open vSwitch 2.8.
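As a sketch, these fields let one match the original direction of a tracked connection regardless of the headers on the current (possibly reply or NATed) packet (port 22 here is illustrative):

```shell
# Match tracked packets whose connection was originally TCP to port 22,
# even if this particular packet is a reply or was transformed by NAT.
ovs-ofctl add-flow br0 'tcp,ct_state=+trk,ct_nw_proto=6,ct_tp_dst=22 actions=controller'
```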
These fields give an OpenFlow switch space for temporary storage while the pipeline is running. Whereas metadata fields can have a meaningful initial value and can persist across some hops across OpenFlow switches, registers are always initially 0 and their values never persist across inter-switch hops (not even across patch ports).
This field is the oldest standardized OpenFlow register field, introduced in OpenFlow 1.1. It was introduced to model the limited number of user-defined bits that some ASIC-based switches can carry through their pipelines. Because of hardware limitations, OpenFlow allows switches to support writing and masking only an implementation-defined subset of bits, or even no bits at all. The Open vSwitch software switch always supports all 64 bits, but of course an Open vSwitch port to an ASIC would have the same restriction as the ASIC itself.
This field has an OXM code point, but OpenFlow 1.4 and earlier allow it to be modified only with a specialized instruction, not with a ``set-field'' action. OpenFlow 1.5 removes this restriction. Open vSwitch does not enforce this restriction, regardless of OpenFlow version.
This is the first of the registers introduced in OpenFlow 1.5. OpenFlow 1.5 calls these fields just the ``packet registers,'' but Open vSwitch already had 32-bit registers by that name, so Open vSwitch uses the name ``extended registers'' in an attempt to reduce confusion. The standard allows for up to 128 registers, each 64 bits wide, but Open vSwitch only implements 4 (in versions 2.4 and 2.5) or 8 (in version 2.6 and later).
Each of the 64-bit extended registers overlays two of the 32-bit registers: xreg0 overlays reg0 and reg1, with reg0 supplying the most-significant bits of xreg0 and reg1 the least-significant. Similarly, xreg1 overlays reg2 and reg3, and so on.
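The overlay means that a match on the two 32-bit registers can be written equivalently as one 64-bit match; for example (values illustrative), the following two matches select the same packets:

```shell
# reg0 supplies the upper 32 bits of xreg0, reg1 the lower 32 bits.
ovs-ofctl add-flow br0 'reg0=0x1,reg1=0x2 actions=drop'
# Equivalent match expressed on the extended register:
ovs-ofctl add-flow br0 'xreg0=0x0000000100000002 actions=drop'
```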
The OpenFlow specification says, ``In most cases, the packet registers can not be matched in tables, i.e. they usually can not be used in the flow entry match structure'' [OpenFlow 1.5, section 7.2.3.10], but there is no reason for a software switch to impose such a restriction, and Open vSwitch does not.
This is the first of the double-extended registers introduced in Open vSwitch 2.6. Each of the 128-bit extended registers overlays four of the 32-bit registers: xxreg0 overlays reg0 through reg3, with reg0 supplying the most-significant bits of xxreg0 and reg3 the least-significant. xxreg1 similarly overlays reg4 through reg7, and so on.
Ethernet is the only layer-2 protocol that Open vSwitch supports. As with most software, Open vSwitch and OpenFlow regard an Ethernet frame to begin with the 14-byte header and end with the final byte of the payload; that is, the frame check sequence is not considered part of the frame.
The Ethernet source address:
The Ethernet destination address:
Open vSwitch 1.8 and later support arbitrary masks for source and/or destination. Earlier versions only support masking the destination with the following masks:
01:00:00:00:00:00: match only the multicast bit. Thus, dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 matches all multicast (including broadcast) Ethernet packets, and dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 matches all unicast Ethernet packets.
fe:ff:ff:ff:ff:ff: match all bits except the multicast bit.
ff:ff:ff:ff:ff:ff: exact match.
00:00:00:00:00:00: wildcard all bits (equivalent to dl_dst=*).
The most commonly seen Ethernet frames today use a format called ``Ethernet II,'' in which the last two bytes of the Ethernet header specify the Ethertype. For such a frame, this field is copied from those bytes of the header, like so:
Every Ethernet type has a value of 0x600 (1,536) or greater. When the last two bytes of the Ethernet header have a value too small to be an Ethernet type, the value found there is the total length of the frame in bytes, excluding the Ethernet header. An 802.2 LLC header typically follows the Ethernet header. OpenFlow and Open vSwitch only support LLC headers with DSAP and SSAP 0xaa and control byte 0x03, which indicate that a SNAP header follows the LLC header. In turn, OpenFlow and Open vSwitch only support a SNAP header with organization 0x000000. In such a case, this field is copied from the type field in the SNAP header, like this:
When an 802.1Q header is inserted after the Ethernet source and destination, this field is populated with the encapsulated Ethertype, not the 802.1Q Ethertype. With an Ethernet II inner frame, the result looks like this:
LLC and SNAP encapsulation look like this with an 802.1Q header:
When a packet doesn't match any of the header formats described above, Open vSwitch and OpenFlow set this field to 0x5ff (OFP_DL_TYPE_NOT_ETH_TYPE).
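The dl_type extraction rules above (Ethernet II, an optional 802.1Q tag, LLC/SNAP, and the 0x5ff fallback) can be sketched as follows. This is an illustrative parser, not Open vSwitch's actual implementation:

```python
OFP_DL_TYPE_NOT_ETH_TYPE = 0x05FF

def dl_type(frame):
    """Derive dl_type from a raw Ethernet frame (bytes), following the
    rules described in the text above (illustrative)."""
    off = 12  # skip the destination and source MAC addresses
    ethertype = int.from_bytes(frame[off:off + 2], 'big')
    if ethertype == 0x8100:  # 802.1Q tag: dl_type is the inner type
        off += 4
        ethertype = int.from_bytes(frame[off:off + 2], 'big')
    if ethertype >= 0x600:   # Ethernet II
        return ethertype
    # Otherwise the two bytes are a length; expect LLC + SNAP.
    llc = frame[off + 2:off + 5]
    snap_org = frame[off + 5:off + 8]
    if llc == b'\xaa\xaa\x03' and snap_org == b'\x00\x00\x00':
        return int.from_bytes(frame[off + 8:off + 10], 'big')
    return OFP_DL_TYPE_NOT_ETH_TYPE
```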
The 802.1Q VLAN header causes more trouble than any other 4 bytes in networking. OpenFlow 1.0, 1.1, and 1.2+ all treat VLANs differently. Open vSwitch extensions add another variant to the mix. Open vSwitch reconciles all four treatments as best it can.
An 802.1Q VLAN header consists of two 16-bit fields:
The first 16 bits of the VLAN header, the TPID (Tag Protocol IDentifier), is an Ethertype. When the VLAN header is inserted just after the source and destination MAC addresses in an Ethernet frame, the TPID serves to identify the presence of the VLAN. The standard TPID, the only one that Open vSwitch supports, is 0x8100. OpenFlow 1.0 explicitly supports only TPID 0x8100. OpenFlow 1.1, but not earlier or later versions, also requires support for TPID 0x88a8 (Open vSwitch does not support this). OpenFlow 1.2 through 1.5 do not require support for specific TPIDs (the ``push vlan header'' action does say that only 0x8100 and 0x88a8 should be pushed). No version of OpenFlow provides a way to distinguish or match on the TPID.
The remaining 16 bits of the VLAN header, the TCI (Tag Control Information), is subdivided into three subfields:
The CFI (Canonical Format Indicator) is a 1-bit field. On an Ethernet network, its value is always 0. This led to it later being repurposed under the name DEI (Drop Eligibility Indicator). By either name, OpenFlow and Open vSwitch don't provide any way to match or set this bit.
0xfff (4,095) is reserved.
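The TCI subfield layout can be illustrated with a small decoding helper. The function name is invented for illustration; this is not OVS code:

```python
def tci_subfields(tci):
    """Split a 16-bit 802.1Q TCI into (PCP, CFI, VID) per the layout
    described above (illustrative)."""
    pcp = (tci >> 13) & 0x7  # 3-bit Priority Code Point
    cfi = (tci >> 12) & 0x1  # CFI/DEI bit
    vid = tci & 0xFFF        # 12-bit VLAN ID
    return pcp, cfi, vid
```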
See for illustrations of a complete Ethernet frame with 802.1Q tag included.
Open vSwitch can match only a single VLAN header. If more than one VLAN header is present, then holds the TPID of the inner VLAN header. Open vSwitch stops parsing the packet after the inner TPID, so matching further into the packet (e.g. on the inner TCI or L3 fields) is not possible.
OpenFlow only directly supports matching a single VLAN header. In OpenFlow 1.1 or later, one OpenFlow table can match on the outermost VLAN header and pop it off, and a later OpenFlow table can match on the next outermost header. Open vSwitch does not support this.
The four variants have three different levels of expressiveness: OpenFlow 1.0 and 1.1 VLAN matching are less powerful than OpenFlow 1.2+ VLAN matching, which is less powerful than Open vSwitch extension VLAN matching.
OpenFlow 1.0 uses two fields, called dl_vlan and dl_vlan_pcp, each of which can be either exact-matched or wildcarded, to specify VLAN matches:
When dl_vlan and dl_vlan_pcp are both wildcarded, the flow matches packets without an 802.1Q header or with any 802.1Q header.
The match dl_vlan=0xffff causes a flow to match only packets without an 802.1Q header. Such a flow should also wildcard dl_vlan_pcp, since a packet without an 802.1Q header does not have a PCP. OpenFlow does not specify what to do if a match on PCP is actually present, but Open vSwitch ignores it.
Otherwise, the flow matches only packets with an 802.1Q header. If dl_vlan is not wildcarded, then the flow only matches packets with the VLAN ID specified in dl_vlan's low 12 bits. If dl_vlan_pcp is not wildcarded, then the flow only matches packets with the priority specified in dl_vlan_pcp's low 3 bits.
OpenFlow does not specify how to interpret the high 4 bits of dl_vlan or the high 5 bits of dl_vlan_pcp. Open vSwitch ignores them.
VLAN matching in OpenFlow 1.1 is similar to OpenFlow 1.0. The one refinement is that when dl_vlan matches on 0xfffe (OFPVID_ANY), the flow matches only packets with an 802.1Q header, with any VLAN ID. If dl_vlan_pcp is wildcarded, the flow matches any packet with an 802.1Q header, regardless of VLAN ID or priority. If dl_vlan_pcp is not wildcarded, then the flow only matches packets with the priority specified in dl_vlan_pcp's low 3 bits.
OpenFlow 1.1 uses the name OFPVID_NONE, instead of OFP_VLAN_NONE, for a dl_vlan of 0xffff, but it has the same meaning.
In OpenFlow 1.1, Open vSwitch reports error OFPBMC_BAD_VALUE for an attempt to match on dl_vlan between 4,096 and 0xfffd, inclusive, or dl_vlan_pcp greater than 7.
The OpenFlow standard describes this field as consisting of ``12+1'' bits. On ingress, its value is 0 if no 802.1Q header is present, and otherwise it holds the VLAN VID in its least significant 12 bits, with bit 12 (0x1000, aka OFPVID_PRESENT) also set to 1. The three most significant bits are always zero:
As a consequence of this field's format, one may use it to match the VLAN ID in all of the ways available with the OpenFlow 1.0 and 1.1 formats, and a few new ways:
Value 0x0000 (OFPVID_NONE), mask 0xffff (or no mask): matches only packets without an 802.1Q header.
Value 0x1000, mask 0x1000: matches any packet with an 802.1Q header, regardless of VLAN ID.
Value 0x1009, mask 0xffff (or no mask): matches only packets with an 802.1Q header and VLAN ID 9.
Value 0x1001, mask 0x1001: matches only packets with an 802.1Q header whose VLAN ID has its least significant bit set to 1, a kind of match not possible with the earlier formats.
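Assuming the ``12+1''-bit format above, the value/mask examples can be verified with a short sketch. The helper names are invented for illustration:

```python
OFPVID_PRESENT = 0x1000

def oxm_vlan_vid(tagged, vid=0):
    """OpenFlow 1.2+ VLAN VID field value for a packet: 0 when
    untagged, else OFPVID_PRESENT | VID (illustrative)."""
    return (OFPVID_PRESENT | vid) if tagged else 0

def vid_match(value, match, mask=0xFFFF):
    """Evaluate a value/mask match against the field value."""
    return (value & mask) == (match & mask)
```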
The 3 least significant bits may be used to match the PCP bits in an 802.1Q header. Other bits are always zero:
This field may only be used when is not wildcarded and does not exact match on 0 (which only matches when there is no 802.1Q header).
See VLAN Comparison Chart, below, for some examples.
The extension can describe more kinds of VLAN matches than the other variants. It is also simpler than the other variants.
For a packet without an 802.1Q header, this field is zero. For a packet with an 802.1Q header, this field is the TCI with the bit in CFI's position (marked P for ``present'' below) forced to 1. Thus, for a packet in VLAN 9 with priority 7, it has the value 0xf009:
Usage examples:
vlan_tci=0: match only packets without an 802.1Q header.
vlan_tci=0x1000/0x1000: match any packet with an 802.1Q header, regardless of VLAN ID or priority.
vlan_tci=0xf123: match packets tagged with priority 7 in VLAN 0x123.
vlan_tci=0x1123/0x1fff: match packets tagged with VLAN 0x123 (and any priority).
vlan_tci=0x5000/0xf000: match packets tagged with priority 2 (in any VLAN).
vlan_tci=0/0xfff: match packets with no 802.1Q header or tagged with VLAN 0 (and any priority).
vlan_tci=0x5000/0xe000: match packets tagged with priority 2 in any VLAN (the 0x1000 bit of the value lies outside the mask and is ignored).
vlan_tci=0/0xefff: match packets with no 802.1Q header or tagged with VLAN 0 and priority 0.
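Assuming the TCI format described above, a hypothetical helper can compute vlan_tci values and check them against value/mask matches:

```python
def vlan_tci(tagged, pcp=0, vid=0):
    """NXM vlan_tci value: zero for an untagged packet, otherwise the
    TCI with the CFI-position bit forced to 1 (illustrative)."""
    return ((pcp << 13) | 0x1000 | vid) if tagged else 0

def tci_match(value, match, mask=0xFFFF):
    """Evaluate a value/mask match against a computed vlan_tci."""
    return (value & mask) == (match & mask)
```

For example, a packet in VLAN 9 with priority 7 yields 0xf009, matching the value given in the text above.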
See VLAN Comparison Chart, below, for more examples.
The following table describes how each of several possible matching criteria on the 802.1Q header may be expressed with each variant of the VLAN matching fields:
All numbers in the table are expressed in hexadecimal. The columns in the table are interpreted as follows:
The matching criteria described by the table are:
Matches only packets without an 802.1Q header.
OpenFlow 1.0 doesn't define the behavior if is set to 0xffff and is not wildcarded. (Open vSwitch always ignores when is set to 0xffff.) OpenFlow 1.1 says explicitly to ignore when is set to 0xffff.
OpenFlow 1.2 doesn't say how to interpret a match with value 0 and a mask with OFPVID_PRESENT (0x1000) set to 1 and some other bits in the mask also set to 1. Open vSwitch interprets it the same way as a mask of 0x1000.
Any NXM match with value 0 and the CFI bit set to 1 in the mask is equivalent to the one listed in the table.
Matches only packets that have an 802.1Q header with PCP
OpenFlow 1.0 doesn't clearly define the behavior for this case. Open vSwitch implements it this way.
In the NXM value,
Matches only packets that have an 802.1Q header with VID
In the NXM value,
One or more MPLS headers (more commonly called MPLS labels) follow an Ethernet type field that specifies an MPLS Ethernet type [RFC 3032]. Ethertype 0x8847 is used for all unicast. Multicast MPLS is divided into two specific classes, one of which uses Ethertype 0x8847 and the other 0x8848 [RFC 5332].
The most common overall packet format is Ethernet II, shown below (SNAP encapsulation may be used but is not ordinarily seen in Ethernet networks):
MPLS can be encapsulated inside an 802.1Q header, in which case the combination looks like this:
The fields within an MPLS label are:
0 indicates that another MPLS label follows this one.
1 indicates that this MPLS label is the last one in the stack, so that some other protocol follows this one.
Each hop across an MPLS network decrements the TTL by 1. If it reaches 0, the packet is discarded.
OpenFlow does not make the MPLS TTL available as a match field, but actions are available to set and decrement the TTL. Open vSwitch 2.6 and later make the MPLS TTL available as an extension.
Unlike the other encapsulations supported by OpenFlow and Open vSwitch, MPLS labels are routinely used in ``stacks'' two or three deep and sometimes even deeper. Open vSwitch currently supports up to three labels.
The OpenFlow specification only supports matching on the outermost MPLS label at any given time. To match on the second label, one must first ``pop'' the outer label and advance to another OpenFlow table, where the inner label may be matched. To match on the third label, one must pop the two outer labels, and so on. The Open Networking Foundation is considering support for directly matching on multiple MPLS labels for OpenFlow 1.6.
Unlike all other forms of encapsulation that Open vSwitch and OpenFlow support, an MPLS label does not indicate what inner protocol it encapsulates. Different deployments determine the inner protocol in different ways [RFC 3032]:
4 or 6. OpenFlow and Open vSwitch do not currently support these cases.
Open vSwitch and OpenFlow do not infer the inner protocol, even if reserved label values are in use. Instead, the flow table must specify the inner protocol at the time it pops the bottommost MPLS label, using the Ethertype argument to the pop_mpls action.
The least significant 20 bits hold the ``label'' field from the MPLS label. Other bits are zero:
Most label values are available for any use by deployments. Values under 16 are reserved.
The least significant 3 bits hold the TC field from the MPLS label. Other bits are zero:
This field is intended for use for Quality of Service (QoS) and Explicit Congestion Notification purposes, but its particular interpretation is deployment specific.
Before 2009, this field was named EXP and reserved for experimental use [RFC 5462].
The least significant bit holds the BOS field from the MPLS label. Other bits are zero:
This field is useful as part of processing a series of incoming MPLS labels. A flow that includes a pop_mpls action should generally match on :
If the bottom-of-stack bit is 0, another MPLS label follows the one being popped, so the Ethertype passed to pop_mpls should be an MPLS Ethertype. For example: table=0, dl_type=0x8847, mpls_bos=0, actions=pop_mpls:0x8847, goto_table:1
If the bottom-of-stack bit is 1, the label being popped is the last one in the stack, so the Ethertype passed to pop_mpls should be a non-MPLS Ethertype such as IPv4. For example: table=1, dl_type=0x8847, mpls_bos=1, actions=pop_mpls:0x0800, goto_table:2
Holds the 8-bit time-to-live field from the MPLS label:
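Taken together, the four MPLS subfields above can be extracted from a 32-bit label stack entry with straightforward shifts and masks. The helper below is illustrative, not OVS code:

```python
def mpls_fields(entry):
    """Split a 32-bit MPLS label stack entry into the four match
    fields described above (illustrative)."""
    return {
        'mpls_label': (entry >> 12) & 0xFFFFF,  # 20-bit label value
        'mpls_tc':    (entry >> 9) & 0x7,       # 3-bit traffic class
        'mpls_bos':   (entry >> 8) & 0x1,       # bottom-of-stack bit
        'mpls_ttl':   entry & 0xFF,             # 8-bit TTL
    }
```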
These fields are applicable only to IPv4 flows, that is, flows that match on the IPv4 Ethertype 0x0800.
The source address from the IPv4 header:
For historical reasons, in an ARP or RARP flow, Open vSwitch interprets matches on nw_src as actually referring to the ARP SPA.
The destination address from the IPv4 header:
For historical reasons, in an ARP or RARP flow, Open vSwitch interprets matches on nw_dst as actually referring to the ARP TPA.
These fields apply only to IPv6 flows, that is, flows that match on the IPv6 Ethertype 0x86dd.
The source address from the IPv6 header:
Open vSwitch 1.8 added support for bitwise matching; earlier versions supported only CIDR masks.
The destination address from the IPv6 header:
Open vSwitch 1.8 added support for bitwise matching; earlier versions supported only CIDR masks.
The least significant 20 bits hold the flow label field from the IPv6 header. Other bits are zero:
These fields exist with at least approximately the same meaning in both IPv4 and IPv6, so they are treated as a single field for matching purposes. Any flow that matches on the IPv4 Ethertype 0x0800 or the IPv6 Ethertype 0x86dd may match on these fields.
Matches the IPv4 or IPv6 protocol type.
For historical reasons, in an ARP or RARP flow, Open vSwitch interprets matches on nw_proto as actually referring to the ARP opcode. The ARP opcode is a 16-bit field, so for matching purposes ARP opcodes greater than 255 are treated as 0; this works adequately because in practice ARP and RARP only use opcodes 1 through 4.
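The opcode folding described above amounts to a one-line rule. The helper name here is invented for illustration:

```python
def arp_nw_proto(opcode):
    """Fold the 16-bit ARP opcode into the 8-bit nw_proto match field:
    opcodes greater than 255 are treated as 0 (illustrative)."""
    return opcode if opcode <= 255 else 0
```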
Matching on this field can be used, for example, to detect packets whose dec_ttl action will fail due to a TTL exceeded error. Another way that a controller can detect TTL exceeded is to listen for OFPR_INVALID_TTL ``packet-in'' messages via OpenFlow.
Specifies what kinds of IP fragments or non-fragments to match. The value for this field is most conveniently specified as one of the following:
no: a nonfragmented packet.
yes: an IP fragment with any offset.
first: an IP fragment with offset 0.
later: an IP fragment with a nonzero offset.
not_later: a nonfragmented packet or a fragment with offset 0.
The field is internally formatted as 2 bits: bit 0 is 1 for an IP fragment with any offset (and otherwise 0), and bit 1 is 1 for an IP fragment with nonzero offset (and otherwise 0), like so:
Even though 2 bits have 4 possible values, this field only uses 3 of them:
The switch may reject matches against values that can never appear.
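The 2-bit encoding and the five symbolic values can be sketched as follows. These helpers are illustrative, not OVS internals:

```python
def ip_frag_bits(is_fragment, frag_offset):
    """Encode the 2-bit ip_frag field: bit 0 set for any fragment,
    bit 1 set for a fragment with nonzero offset (illustrative).
    Note that value 2 can never occur."""
    any_frag = 1 if is_fragment else 0
    later = 1 if is_fragment and frag_offset > 0 else 0
    return (later << 1) | any_frag

def ip_frag_matches(bits, spec):
    """Whether a packet's ip_frag bits satisfy one of the symbolic
    match values listed above (illustrative)."""
    return {
        'no':        bits == 0,
        'yes':       bits & 1 == 1,
        'first':     bits == 1,
        'later':     bits == 3,
        'not_later': bits & 2 == 0,
    }[spec]
```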
It is important to understand how this field interacts with the OpenFlow fragment handling mode:
In OFPC_FRAG_DROP mode, the OpenFlow switch drops all IP fragments before they reach the flow table, so every packet that is available for matching will have value 0 in this field.
Open vSwitch does not implement OFPC_FRAG_REASM mode, but if it did then IP fragments would be reassembled before they reached the flow table and again every packet available for matching would always have value 0.
In OFPC_FRAG_NORMAL mode, all three values are possible, but OpenFlow 1.0 says that fragments' transport ports are always 0, even for the first fragment, so this does not provide much extra information.
In OFPC_FRAG_NX_MATCH mode, all three values are possible. For fragments with offset 0, Open vSwitch makes L4 header information available.
Thus, this field is likely to be most useful for an Open vSwitch switch configured in OFPC_FRAG_NX_MATCH mode. See the description of the set-frags command in ovs-ofctl(8) for more details.
IPv4 and IPv6 contain a one-byte ``type of service'' or TOS field that has the following format:
This field is the TOS byte with the two ECN bits cleared to 0:
This field is the TOS byte shifted right to put the DSCP bits in the 6 least-significant bits:
This field is the TOS byte with the DSCP bits cleared to 0:
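The three derived fields above can be computed from the TOS byte with shifts and masks. This is an illustrative sketch; the dictionary keys mirror the field names in this documentation:

```python
def tos_fields(tos):
    """Split the IPv4/IPv6 TOS (traffic class) byte into the three
    derived match fields described above (illustrative)."""
    return {
        'nw_tos':  tos & 0xFC,  # TOS byte with the two ECN bits cleared
        'ip_dscp': tos >> 2,    # DSCP shifted into the low 6 bits
        'nw_ecn':  tos & 0x03,  # TOS byte with the DSCP bits cleared
    }
```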
In theory, Address Resolution Protocol, or ARP, is a generic protocol that can be used to obtain the hardware address that corresponds to any higher-level protocol address. In contemporary usage, ARP is used only in Ethernet networks to obtain the Ethernet address for a given IPv4 address. OpenFlow and Open vSwitch only support this usage of ARP. For this use case, an ARP packet has the following format, with the ARP fields exposed as Open vSwitch fields highlighted:
The ARP fields are also used for RARP, the Reverse Address Resolution Protocol, which shares ARP's wire format.
Service functions are widely deployed and essential in many networks. These service functions provide a range of features such as security, WAN acceleration, and server load balancing. Service functions may be instantiated at different points in the network infrastructure such as the wide area network, data center, and so forth.
Prior to development of the SFC architecture [RFC 7665] and the protocol specified in this document, current service function deployment models have been relatively static and bound to topology for insertion and policy selection. Furthermore, they do not adapt well to elastic service environments enabled by virtualization.
New data center network and cloud architectures require more flexible service function deployment models. Additionally, the transition to virtual platforms demands an agile service insertion model that supports dynamic and elastic service delivery. Specifically, the following functions are necessary:
1. The movement of service functions and application workloads in the network.
2. The ability to easily bind service policy to granular information, such as per-subscriber state.
3. The capability to steer traffic to the requisite service function(s).
The Network Service Header (NSH) specification defines a new data plane protocol, which is an encapsulation for service function chains. The NSH is designed to encapsulate an original packet or frame, and in turn be encapsulated by an outer transport encapsulation (which is used to deliver the NSH to NSH-aware network elements), as shown in Figure 1:
+------------------------------+
| Transport Encapsulation      |
+------------------------------+
| Network Service Header (NSH) |
+------------------------------+
| Original Packet / Frame      |
+------------------------------+

Figure 1: Network Service Header Encapsulation
The NSH is composed of the following elements:
1. Service Function Path identification.
2. Indication of location within a Service Function Path.
3. Optional, per packet metadata (fixed length or variable).
[RFC 7665] provides an overview of a service chaining architecture that clearly defines the roles of the various elements and the scope of a service function chaining encapsulation. Figure 3 of [RFC 7665] depicts the SFC architectural components after classification. The NSH is the SFC encapsulation referenced in [RFC 7665].
For matching purposes, no distinction is made whether these protocols are encapsulated within IPv4 or IPv6.
The following diagram shows TCP within IPv4. Open vSwitch also supports TCP in IPv6. Only TCP fields implemented as Open vSwitch fields are shown:
This field holds the TCP flags. TCP currently defines 9 flag bits. An additional 3 bits are reserved. For more information, see [RFC 793], [RFC 3168], and [RFC 3540].
Matches on this field are most conveniently written in terms of symbolic names (given in the diagram below), each preceded by either + for a flag that must be set, or - for a flag that must be unset, without any other delimiters between the flags. Flags not mentioned are wildcarded. For example, tcp,tcp_flags=+syn-ack matches TCP SYNs that are not ACKs, and tcp,tcp_flags=+[200] matches TCP packets with the reserved [200] flag set. Matches can also be written as flags/mask, where flags and mask are 16-bit numbers in decimal or in hexadecimal prefixed by 0x.
The flag bits are:
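A minimal parser for the +flag/-flag syntax, using the standard TCP flag bit positions, might look like this. It is illustrative only, and the numeric [200]-style flags are not handled:

```python
import re

# Flag name -> bit position, per the TCP header layout (RFC 793 flags
# plus ECE/CWR from RFC 3168 and NS from RFC 3540).
TCP_FLAGS = {'fin': 0, 'syn': 1, 'rst': 2, 'psh': 3, 'ack': 4,
             'urg': 5, 'ece': 6, 'cwr': 7, 'ns': 8}

def parse_tcp_flags(spec):
    """Parse a match like '+syn-ack' into a (flags, mask) pair;
    a packet matches when packet_flags & mask == flags (illustrative)."""
    flags = mask = 0
    for sign, name in re.findall(r'([+-])([a-z]+)', spec):
        bit = 1 << TCP_FLAGS[name]
        mask |= bit
        if sign == '+':
            flags |= bit
    return flags, mask
```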
The following diagram shows UDP within IPv4. Open vSwitch also supports UDP in IPv6. Only UDP fields that Open vSwitch exposes as fields are shown:
The following diagram shows SCTP within IPv4. Open vSwitch also supports SCTP in IPv6. Only SCTP fields that Open vSwitch exposes as fields are shown:
For historical reasons, in an ICMPv4 flow, Open vSwitch interprets matches on tp_src as actually referring to the ICMP type.
For historical reasons, in an ICMPv4 flow, Open vSwitch interprets matches on tp_dst as actually referring to the ICMP code.
Ben Pfaff, with advice from Justin Pettit and Jean Tourrilhes.