Name

ovn-northd -- Open Virtual Network central control daemon

Synopsis

ovn-northd [options]

Description

ovn-northd is a centralized daemon responsible for translating the high-level OVN configuration into logical configuration consumable by daemons such as ovn-controller. It translates the logical network configuration in terms of conventional network concepts, taken from the OVN Northbound Database (see ovn-nb(5)), into logical datapath flows in the OVN Southbound Database (see ovn-sb(5)) below it.

Configuration

ovn-northd requires a connection to the Northbound and Southbound databases. The defaults are ovnnb_db.sock and ovnsb_db.sock, respectively, in the local Open vSwitch's "run" directory. This may be overridden with the following options:

--ovnnb-db=database
The OVSDB database containing the OVN Northbound Database.

--ovnsb-db=database
The OVSDB database containing the OVN Southbound Database.

The database argument must take one of the following forms:

unix:file
The Unix domain server socket named file.

tcp:ip:port
The TCP port port on ip.

ssl:ip:port
The SSL port port on ip.
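For example, to point ovn-northd at databases reachable over TCP (the addresses and ports below are illustrative; 6641 and 6642 are the conventional Northbound and Southbound ports):

```shell
# Start ovn-northd against explicitly specified database locations.
ovn-northd --ovnnb-db=tcp:192.168.0.1:6641 \
           --ovnsb-db=tcp:192.168.0.1:6642
```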

Runtime Management Commands

ovs-appctl can send commands to a running ovn-northd process. The currently supported commands are described below.

exit
Causes ovn-northd to gracefully terminate.
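For example, assuming the default control socket location:

```shell
# Tell the running ovn-northd to terminate gracefully.  If the control
# socket is elsewhere, pass its path with -t instead of the name.
ovs-appctl -t ovn-northd exit
```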

Logical Flow Table Structure

One of the main purposes of ovn-northd is to populate the Logical_Flow table in the OVN_Southbound database. This section describes how ovn-northd does this for switch and router logical datapaths.

Logical Switch Datapaths

Ingress Table 0: Admission Control and Ingress Port Security - L2

Ingress table 0 contains these logical flows:

There are no flows for disabled logical ports because the default-drop behavior of logical flow tables causes packets that ingress from them to be dropped.

Ingress Table 1: Ingress Port Security - IP

Ingress table 1 contains these logical flows:

Ingress Table 2: Ingress Port Security - Neighbor discovery

Ingress table 2 contains these logical flows:

Ingress Table 3: from-lport Pre-ACLs

This table prepares flows for possible stateful ACL processing in ingress table ACLs. It contains a priority-0 flow that simply moves traffic to the next table. If stateful ACLs are used in the logical datapath, a priority-100 flow is added that sets a hint (with reg0[0] = 1; next;) for table Pre-stateful to send IP packets to the connection tracker before eventually advancing to ingress table ACLs.

Ingress Table 4: Pre-LB

This table prepares flows for possible stateful load balancing processing in ingress table LB and Stateful. It contains a priority-0 flow that simply moves traffic to the next table. If load balancing rules with virtual IP addresses (and ports) are configured in the OVN_Northbound database for a logical datapath, a priority-100 flow is added for each configured virtual IP address VIP with a match ip && ip4.dst == VIP that sets an action reg0[0] = 1; next; to act as a hint for table Pre-stateful to send IP packets to the connection tracker for packet de-fragmentation before eventually advancing to ingress table LB.
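The Pre-LB flow generation above can be sketched as follows. This is a minimal illustration, not ovn-northd's actual code; the dict-based flow representation is invented here:

```python
def build_pre_lb_flows(vips):
    """Return ingress Pre-LB flows for a list of virtual IP addresses."""
    # Default: a priority-0 flow that simply advances to the next table.
    flows = [{"priority": 0, "match": "1", "actions": "next;"}]
    for vip in vips:
        # Hint for the Pre-stateful table: send this packet to the
        # connection tracker for de-fragmentation.
        flows.append({
            "priority": 100,
            "match": "ip && ip4.dst == " + vip,
            "actions": "reg0[0] = 1; next;",
        })
    return flows

flows = build_pre_lb_flows(["10.0.0.10"])
```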

Ingress Table 5: Pre-stateful

This table prepares flows for all possible stateful processing in the next tables. It contains a priority-0 flow that simply moves traffic to the next table. A priority-100 flow sends packets to the connection tracker, based on the hint provided by the previous tables (a match on reg0[0] == 1), using the ct_next; action.

Ingress table 6: from-lport ACLs

Logical flows in this table closely reproduce those in the ACL table in the OVN_Northbound database for the from-lport direction. allow ACLs translate into logical flows with the next; action, allow-related ACLs translate into logical flows with the reg0[1] = 1; next; actions (which acts as a hint for the next tables to commit the connection to conntrack), other ACLs translate to drop;. The priority values from the ACL table have a limited range and have 1000 added to them to leave room for OVN default flows at both higher and lower priorities.

This table also contains a priority-0 flow with action next;, so that ACLs allow packets by default. If the logical datapath has a stateful ACL, the following flows will also be added:
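The ACL-to-flow translation described above can be sketched like this (the helper name and flow representation are invented for illustration; this is not ovn-northd source):

```python
ACL_PRIORITY_OFFSET = 1000  # leaves room for OVN defaults above and below

def acl_to_logical_flow(acl_priority, match, verdict):
    """Translate one from-lport ACL row into a logical flow."""
    if verdict == "allow":
        actions = "next;"
    elif verdict == "allow-related":
        # Hint for later tables to commit the connection to conntrack.
        actions = "reg0[1] = 1; next;"
    else:
        actions = "drop;"
    return {"priority": acl_priority + ACL_PRIORITY_OFFSET,
            "match": match, "actions": actions}

flow = acl_to_logical_flow(10, "tcp.dst == 22", "allow-related")
```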

Ingress Table 7: LB

It contains a priority-0 flow that simply moves traffic to the next table. For established connections, a priority-100 flow matches on ct.est && !ct.rel && !ct.new && !ct.inv and sets an action reg0[2] = 1; next; to act as a hint for table Stateful to send packets through the connection tracker to NAT the packets. (The packet will automatically get DNATted to the same IP address as the first packet in that connection.)

Ingress Table 8: Stateful

Ingress Table 9: ARP responder

This table implements an ARP responder for known IPs. It contains these logical flows:

Ingress Table 10: Destination Lookup

This table implements switching behavior. It contains these logical flows:

Egress Table 0: Pre-LB

This table is similar to ingress table Pre-LB. It contains a priority-0 flow that simply moves traffic to the next table. If any load balancing rules exist for the datapath, a priority-100 flow is added with a match of ip and action of reg0[0] = 1; next; to act as a hint for table Pre-stateful to send IP packets to the connection tracker for packet de-fragmentation.

Egress Table 1: to-lport Pre-ACLs

This is similar to ingress table Pre-ACLs except for to-lport traffic.

Egress Table 2: Pre-stateful

This is similar to ingress table Pre-stateful.

Egress Table 3: LB

This is similar to ingress table LB.

Egress Table 4: to-lport ACLs

This is similar to ingress table ACLs except for to-lport ACLs.

Egress Table 5: Stateful

This is similar to ingress table Stateful except that there are no rules added for load balancing new connections.

Egress Table 6: Egress Port Security - IP

This is similar to the port security logic in table Ingress Port Security - IP except that outport, eth.dst, ip4.dst and ip6.dst are checked instead of inport, eth.src, ip4.src and ip6.src.

Egress Table 7: Egress Port Security - L2

This is similar to the ingress port security logic in ingress table Admission Control and Ingress Port Security - L2, but with important differences. Most obviously, outport and eth.dst are checked instead of inport and eth.src. Second, packets directed to broadcast or multicast eth.dst are always accepted instead of being subject to the port security rules; this is implemented through a priority-100 flow that matches on eth.mcast with action output;. Finally, to ensure that even broadcast and multicast packets are not delivered to disabled logical ports, a priority-150 flow for each disabled logical outport overrides the priority-100 flow with a drop; action.
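The interaction of the priorities above can be illustrated with a toy first-match evaluator (structures and port names invented here; real logical flow matching is done by ovn-controller, not Python):

```python
def evaluate(flows, pkt):
    """Apply the highest-priority matching flow; default is drop."""
    for flow in sorted(flows, key=lambda f: -f["priority"]):
        if flow["match"](pkt):
            return flow["action"]
    return "drop;"  # default-drop behavior of logical flow tables

flows = [
    # Disabled logical port "lp-down": drop everything, even multicast.
    {"priority": 150, "match": lambda p: p["outport"] == "lp-down",
     "action": "drop;"},
    # Broadcast/multicast destinations bypass the port security rules.
    {"priority": 100, "match": lambda p: p["eth_mcast"],
     "action": "output;"},
]

mcast_to_enabled = {"outport": "lp-up", "eth_mcast": True}
mcast_to_disabled = {"outport": "lp-down", "eth_mcast": True}
```

The priority-150 flow wins over the priority-100 flow for the disabled port, so multicast traffic to it is still dropped.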

Logical Router Datapaths

Logical router datapaths will only exist for Logical_Router rows in the OVN_Northbound database that do not have enabled set to false.

Ingress Table 0: L2 Admission Control

This table drops packets that the router shouldn't see at all based on their Ethernet headers. It contains the following flows:

Other packets are implicitly dropped.

Ingress Table 1: IP Input

This table is the core of the logical router datapath functionality. It contains the following flows to implement very basic IP host functionality.

The flows above handle all of the traffic that might be directed to the router itself. The following flows (with lower priorities) handle the remaining traffic, potentially for forwarding:

Ingress Table 2: UNSNAT

This is for reverse traffic of already established connections; that is, SNAT has already been performed in the egress pipeline, and the packet has now entered the ingress pipeline as part of a reply. It is unSNATted here.

Ingress Table 3: DNAT

Packets enter the pipeline with a destination IP address that needs to be DNATted from a virtual IP address to a real IP address. Packets in the reverse direction need to be unDNATted.

Ingress Table 4: IP Routing

A packet that arrives at this table is an IP packet that should be routed to the address in ip4.dst. This table implements IP routing, setting reg0 to the next-hop IP address (leaving ip4.dst, the packet's final destination, unchanged) and advances to the next table for ARP resolution. It also sets reg1 to the IP address owned by the selected router port (which is used later in table 6 as the IP source address for an ARP request, if needed).
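The route selection described above can be sketched as a longest-prefix match (a minimal model with invented structures, not ovn-northd's implementation):

```python
import ipaddress

def route(routes, ip4_dst):
    """routes: list of (prefix, next_hop_or_None, port_ip) tuples."""
    best = None
    for prefix, next_hop, port_ip in routes:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(ip4_dst) in net:
            if (best is None
                    or net.prefixlen > ipaddress.ip_network(best[0]).prefixlen):
                best = (prefix, next_hop, port_ip)
    if best is None:
        return None
    prefix, next_hop, port_ip = best
    # Directly attached network: the next hop is the destination itself.
    reg0 = next_hop if next_hop is not None else ip4_dst
    reg1 = port_ip  # used later as the ARP request source, if needed
    return {"reg0": reg0, "reg1": reg1}

routes = [
    ("192.168.1.0/24", None, "192.168.1.1"),   # connected network
    ("0.0.0.0/0", "10.0.0.254", "10.0.0.1"),   # default route
]
```

Note that ip4.dst itself is never rewritten by this step; only reg0 and reg1 are set.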

This table contains the following logical flows:

Ingress Table 5: ARP Resolution

Any packet that reaches this table is an IP packet whose next-hop IP address is in reg0. (ip4.dst is the final destination.) This table resolves the IP address in reg0 into an output port in outport and an Ethernet address in eth.dst, using the following flows:
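The resolution step can be sketched like so (the binding table and packet representation are invented for illustration):

```python
known_bindings = {
    # next-hop IP -> (logical output port, Ethernet address)
    "192.168.1.5": ("lp1", "00:00:00:00:01:05"),
}

def arp_resolve(pkt):
    """Fill in outport and eth_dst for the next-hop IP held in reg0."""
    binding = known_bindings.get(pkt["reg0"])
    if binding is None:
        return pkt  # unresolved: falls through to the ARP request table
    pkt["outport"], pkt["eth_dst"] = binding
    return pkt
```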

Ingress Table 6: ARP Request

In the common case where the Ethernet destination has been resolved, this table outputs the packet. Otherwise, it composes and sends an ARP request. It holds the following flows:

Egress Table 0: SNAT

Packets that are configured to be SNATed get their source IP address changed based on the configuration in the OVN Northbound database.

Egress Table 1: Delivery

Packets that reach this table are ready for delivery. It contains priority-100 logical flows that match packets on each enabled logical router port, with action output;.