author     Babu Shanmugam <bschanmu@redhat.com>    2016-10-05 13:57:47 +0530
committer  Ben Pfaff <blp@ovn.org>                 2016-10-05 17:28:16 -0700
commit     1a03fc7da32e74aacbf638d3617a290ddffaa069 (patch)
tree       5713962862930cca7ed35afd2ef6abfc70daa9f7 /ovn
parent     644ecb10a661c822270660c1b4358213f5d48399 (diff)
download   openvswitch-1a03fc7da32e74aacbf638d3617a290ddffaa069.tar.gz
DSCP marking on packets
This patch adds support for marking qos on IP packets based on arbitrary
match criteria for a logical switch.

Signed-off-by: Babu Shanmugam <bschanmu@redhat.com>
Suggested-by: Mickey Spiegel <mickeys.dev@gmail.com>
Acked-by: Mickey Spiegel <mickeys.dev@gmail.com>
[blp@ovn.org fixes races in the test and added ovn-trace checks]
Signed-off-by: Ben Pfaff <blp@ovn.org>
Diffstat (limited to 'ovn')
-rw-r--r--  ovn/lib/logical-fields.c     |  2
-rw-r--r--  ovn/northd/ovn-northd.8.xml  | 47
-rw-r--r--  ovn/northd/ovn-northd.c      | 80
-rw-r--r--  ovn/ovn-nb.ovsschema         | 26
-rw-r--r--  ovn/ovn-nb.xml               | 57
-rw-r--r--  ovn/utilities/ovn-nbctl.c    |  5
6 files changed, 177 insertions(+), 40 deletions(-)
diff --git a/ovn/lib/logical-fields.c b/ovn/lib/logical-fields.c
index 5229be37e..2d3e2178e 100644
--- a/ovn/lib/logical-fields.c
+++ b/ovn/lib/logical-fields.c
@@ -137,7 +137,7 @@ ovn_init_symtab(struct shash *symtab)
expr_symtab_add_predicate(symtab, "ip6", "eth.type == 0x86dd");
expr_symtab_add_predicate(symtab, "ip", "ip4 || ip6");
expr_symtab_add_field(symtab, "ip.proto", MFF_IP_PROTO, "ip", true);
- expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP, "ip", false);
+ expr_symtab_add_field(symtab, "ip.dscp", MFF_IP_DSCP_SHIFTED, "ip", false);
expr_symtab_add_field(symtab, "ip.ecn", MFF_IP_ECN, "ip", false);
expr_symtab_add_field(symtab, "ip.ttl", MFF_IP_TTL, "ip", false);
diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
index 77eb3d184..df53d4c7f 100644
--- a/ovn/northd/ovn-northd.8.xml
+++ b/ovn/northd/ovn-northd.8.xml
@@ -362,7 +362,27 @@
</li>
</ul>
- <h3>Ingress Table 7: LB</h3>
+ <h3>Ingress Table 7: <code>from-lport</code> QoS marking</h3>
+
+ <p>
+ Logical flows in this table closely reproduce those in the
+ <code>QoS</code> table in the <code>OVN_Northbound</code> database
+ for the <code>from-lport</code> direction.
+ </p>
+
+ <ul>
+ <li>
+ For each <code>qos_rules</code> entry of a logical switch, a flow is
+ added at the priority specified in the <code>QoS</code> table.
+ </li>
+
+ <li>
+ One priority-0 fallback flow that matches all packets and advances to
+ the next table.
+ </li>
+ </ul>
+
+ <h3>Ingress Table 8: LB</h3>
<p>
It contains a priority-0 flow that simply moves traffic to the next
@@ -375,7 +395,7 @@
connection.)
</p>
- <h3>Ingress Table 8: Stateful</h3>
+ <h3>Ingress Table 9: Stateful</h3>
<ul>
<li>
@@ -412,7 +432,7 @@
</li>
</ul>
- <h3>Ingress Table 9: ARP/ND responder</h3>
+ <h3>Ingress Table 10: ARP/ND responder</h3>
<p>
This table implements ARP/ND responder for known IPs. It contains these
@@ -507,7 +527,7 @@ nd_na {
</li>
</ul>
- <h3>Ingress Table 10: DHCP option processing</h3>
+ <h3>Ingress Table 11: DHCP option processing</h3>
<p>
This table adds the DHCPv4 options to a DHCPv4 packet from the
@@ -567,7 +587,7 @@ next;
</li>
</ul>
- <h3>Ingress Table 11: DHCP responses</h3>
+ <h3>Ingress Table 12: DHCP responses</h3>
<p>
This table implements DHCP responder for the DHCP replies generated by
@@ -649,7 +669,7 @@ output;
</li>
</ul>
- <h3>Ingress Table 12: Destination Lookup</h3>
+ <h3>Ingress Table 13: Destination Lookup</h3>
<p>
This table implements switching behavior. It contains these logical
@@ -716,7 +736,14 @@ output;
<code>to-lport</code> ACLs.
</p>
- <h3>Egress Table 5: Stateful</h3>
+ <h3>Egress Table 5: <code>to-lport</code> QoS marking</h3>
+
+ <p>
+ This is similar to ingress table <code>QoS marking</code> except that it
+ applies to <code>to-lport</code> QoS rules.
+ </p>
+
+ <h3>Egress Table 6: Stateful</h3>
<p>
This is similar to ingress table <code>Stateful</code> except that
@@ -727,10 +754,10 @@ output;
Also a priority 34000 logical flow is added for each logical port which
has DHCPv4 options defined to allow the DHCPv4 reply packet and which has
DHCPv6 options defined to allow the DHCPv6 reply packet from the
- <code>Ingress Table 11: DHCP responses</code>.
+ <code>Ingress Table 12: DHCP responses</code>.
</p>
- <h3>Egress Table 6: Egress Port Security - IP</h3>
+ <h3>Egress Table 7: Egress Port Security - IP</h3>
<p>
This is similar to the port security logic in table
@@ -740,7 +767,7 @@ output;
<code>ip4.src</code> and <code>ip6.src</code>
</p>
- <h3>Egress Table 7: Egress Port Security - L2</h3>
+ <h3>Egress Table 8: Egress Port Security - L2</h3>
<p>
This is similar to the ingress port security logic in ingress table
diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
index 0150a8cde..ad4e38da9 100644
--- a/ovn/northd/ovn-northd.c
+++ b/ovn/northd/ovn-northd.c
@@ -96,21 +96,22 @@ enum ovn_datapath_type {
* form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
* S_ROUTER_OUT_DELIVERY. */
enum ovn_stage {
-#define PIPELINE_STAGES \
- /* Logical switch ingress stages. */ \
- PIPELINE_STAGE(SWITCH, IN, PORT_SEC_L2, 0, "ls_in_port_sec_l2") \
- PIPELINE_STAGE(SWITCH, IN, PORT_SEC_IP, 1, "ls_in_port_sec_ip") \
- PIPELINE_STAGE(SWITCH, IN, PORT_SEC_ND, 2, "ls_in_port_sec_nd") \
- PIPELINE_STAGE(SWITCH, IN, PRE_ACL, 3, "ls_in_pre_acl") \
- PIPELINE_STAGE(SWITCH, IN, PRE_LB, 4, "ls_in_pre_lb") \
- PIPELINE_STAGE(SWITCH, IN, PRE_STATEFUL, 5, "ls_in_pre_stateful") \
- PIPELINE_STAGE(SWITCH, IN, ACL, 6, "ls_in_acl") \
- PIPELINE_STAGE(SWITCH, IN, LB, 7, "ls_in_lb") \
- PIPELINE_STAGE(SWITCH, IN, STATEFUL, 8, "ls_in_stateful") \
- PIPELINE_STAGE(SWITCH, IN, ARP_ND_RSP, 9, "ls_in_arp_rsp") \
- PIPELINE_STAGE(SWITCH, IN, DHCP_OPTIONS, 10, "ls_in_dhcp_options") \
- PIPELINE_STAGE(SWITCH, IN, DHCP_RESPONSE, 11, "ls_in_dhcp_response") \
- PIPELINE_STAGE(SWITCH, IN, L2_LKUP, 12, "ls_in_l2_lkup") \
+#define PIPELINE_STAGES \
+ /* Logical switch ingress stages. */ \
+ PIPELINE_STAGE(SWITCH, IN, PORT_SEC_L2, 0, "ls_in_port_sec_l2") \
+ PIPELINE_STAGE(SWITCH, IN, PORT_SEC_IP, 1, "ls_in_port_sec_ip") \
+ PIPELINE_STAGE(SWITCH, IN, PORT_SEC_ND, 2, "ls_in_port_sec_nd") \
+ PIPELINE_STAGE(SWITCH, IN, PRE_ACL, 3, "ls_in_pre_acl") \
+ PIPELINE_STAGE(SWITCH, IN, PRE_LB, 4, "ls_in_pre_lb") \
+ PIPELINE_STAGE(SWITCH, IN, PRE_STATEFUL, 5, "ls_in_pre_stateful") \
+ PIPELINE_STAGE(SWITCH, IN, ACL, 6, "ls_in_acl") \
+ PIPELINE_STAGE(SWITCH, IN, QOS_MARK, 7, "ls_in_qos_mark") \
+ PIPELINE_STAGE(SWITCH, IN, LB, 8, "ls_in_lb") \
+ PIPELINE_STAGE(SWITCH, IN, STATEFUL, 9, "ls_in_stateful") \
+ PIPELINE_STAGE(SWITCH, IN, ARP_ND_RSP, 10, "ls_in_arp_rsp") \
+ PIPELINE_STAGE(SWITCH, IN, DHCP_OPTIONS, 11, "ls_in_dhcp_options") \
+ PIPELINE_STAGE(SWITCH, IN, DHCP_RESPONSE, 12, "ls_in_dhcp_response") \
+ PIPELINE_STAGE(SWITCH, IN, L2_LKUP, 13, "ls_in_l2_lkup") \
\
/* Logical switch egress stages. */ \
PIPELINE_STAGE(SWITCH, OUT, PRE_LB, 0, "ls_out_pre_lb") \
@@ -118,9 +119,10 @@ enum ovn_stage {
PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful") \
PIPELINE_STAGE(SWITCH, OUT, LB, 3, "ls_out_lb") \
PIPELINE_STAGE(SWITCH, OUT, ACL, 4, "ls_out_acl") \
- PIPELINE_STAGE(SWITCH, OUT, STATEFUL, 5, "ls_out_stateful") \
- PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP, 6, "ls_out_port_sec_ip") \
- PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2, 7, "ls_out_port_sec_l2") \
+ PIPELINE_STAGE(SWITCH, OUT, QOS_MARK, 5, "ls_out_qos_mark") \
+ PIPELINE_STAGE(SWITCH, OUT, STATEFUL, 6, "ls_out_stateful") \
+ PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP, 7, "ls_out_port_sec_ip") \
+ PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2, 8, "ls_out_port_sec_l2") \
\
/* Logical router ingress stages. */ \
PIPELINE_STAGE(ROUTER, IN, ADMISSION, 0, "lr_in_admission") \
@@ -2611,6 +2613,29 @@ build_acls(struct ovn_datapath *od, struct hmap *lflows)
}
static void
+build_qos(struct ovn_datapath *od, struct hmap *lflows) {
+ ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
+ ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_MARK, 0, "1", "next;");
+
+ for (size_t i = 0; i < od->nbs->n_qos_rules; i++) {
+ struct nbrec_qos *qos = od->nbs->qos_rules[i];
+ bool ingress = !strcmp(qos->direction, "from-lport");
+ enum ovn_stage stage = ingress ? S_SWITCH_IN_QOS_MARK : S_SWITCH_OUT_QOS_MARK;
+
+ if (!strcmp(qos->key_action, "dscp")) {
+ struct ds dscp_action = DS_EMPTY_INITIALIZER;
+
+ ds_put_format(&dscp_action, "ip.dscp = %d; next;",
+ (uint8_t)qos->value_action);
+ ovn_lflow_add(lflows, od, stage,
+ qos->priority,
+ qos->match, ds_cstr(&dscp_action));
+ ds_destroy(&dscp_action);
+ }
+ }
+}
+
+static void
build_lb(struct ovn_datapath *od, struct hmap *lflows)
{
/* Ingress and Egress LB Table (Priority 0): Packets are allowed by
@@ -2715,7 +2740,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
struct ds actions = DS_EMPTY_INITIALIZER;
/* Build pre-ACL and ACL tables for both ingress and egress.
- * Ingress tables 3 and 4. Egress tables 0 and 1. */
+ * Ingress tables 3 through 9. Egress tables 0 through 6. */
struct ovn_datapath *od;
HMAP_FOR_EACH (od, key_node, datapaths) {
if (!od->nbs) {
@@ -2726,6 +2751,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
build_pre_lb(od, lflows);
build_pre_stateful(od, lflows);
build_acls(od, lflows);
+ build_qos(od, lflows);
build_lb(od, lflows);
build_stateful(od, lflows);
}
@@ -2797,7 +2823,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;");
}
- /* Ingress table 9: ARP/ND responder, skip requests coming from localnet
+ /* Ingress table 10: ARP/ND responder, skip requests coming from localnet
* ports. (priority 100). */
HMAP_FOR_EACH (op, key_node, ports) {
if (!op->nbsp) {
@@ -2812,7 +2838,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
}
}
- /* Ingress table 9: ARP/ND responder, reply for known IPs.
+ /* Ingress table 10: ARP/ND responder, reply for known IPs.
* (priority 50). */
HMAP_FOR_EACH (op, key_node, ports) {
if (!op->nbsp) {
@@ -2905,7 +2931,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
}
}
- /* Ingress table 9: ARP/ND responder, by default goto next.
+ /* Ingress table 10: ARP/ND responder, by default goto next.
* (priority 0)*/
HMAP_FOR_EACH (od, key_node, datapaths) {
if (!od->nbs) {
@@ -2915,7 +2941,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
}
- /* Logical switch ingress table 10 and 11: DHCP options and response
+ /* Logical switch ingress table 11 and 12: DHCP options and response
* priority 100 flows. */
HMAP_FOR_EACH (op, key_node, ports) {
if (!op->nbsp) {
@@ -2994,7 +3020,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
}
}
- /* Ingress table 10 and 11: DHCP options and response, by default goto next.
+ /* Ingress table 11 and 12: DHCP options and response, by default goto next.
* (priority 0). */
HMAP_FOR_EACH (od, key_node, datapaths) {
@@ -3006,7 +3032,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
}
- /* Ingress table 12: Destination lookup, broadcast and multicast handling
+ /* Ingress table 13: Destination lookup, broadcast and multicast handling
* (priority 100). */
HMAP_FOR_EACH (op, key_node, ports) {
if (!op->nbsp) {
@@ -3026,7 +3052,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
"outport = \""MC_FLOOD"\"; output;");
}
- /* Ingress table 12: Destination lookup, unicast handling (priority 50), */
+ /* Ingress table 13: Destination lookup, unicast handling (priority 50), */
HMAP_FOR_EACH (op, key_node, ports) {
if (!op->nbsp) {
continue;
@@ -3073,7 +3099,7 @@ build_lswitch_flows(struct hmap *datapaths, struct hmap *ports,
}
}
- /* Ingress table 12: Destination lookup for unknown MACs (priority 0). */
+ /* Ingress table 13: Destination lookup for unknown MACs (priority 0). */
HMAP_FOR_EACH (od, key_node, datapaths) {
if (!od->nbs) {
continue;
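
To make the effect of build_qos() above concrete, the following self-contained
sketch mimics its two per-rule decisions for one made-up rule: choosing the
ingress or egress QoS stage from the rule's direction, and formatting the
"ip.dscp" action string. Plain snprintf stands in for OVS's dynamic-string
helpers, and none of the sample values come from a real database:

/* Standalone illustration of build_qos()'s per-rule logic; the sample rule
 * below is hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    /* A pretend QoS row. */
    const char *direction = "from-lport";
    const char *key_action = "dscp";
    int64_t value_action = 48;
    int priority = 100;

    /* The direction selects the pipeline stage, as in build_qos(). */
    const char *stage = !strcmp(direction, "from-lport")
                        ? "ls_in_qos_mark" : "ls_out_qos_mark";

    if (!strcmp(key_action, "dscp")) {
        char action[64];

        snprintf(action, sizeof action, "ip.dscp = %d; next;",
                 (uint8_t) value_action);
        printf("stage=%s priority=%d action=\"%s\"\n", stage, priority, action);
        /* -> stage=ls_in_qos_mark priority=100 action="ip.dscp = 48; next;" */
    }
    return 0;
}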
diff --git a/ovn/ovn-nb.ovsschema b/ovn/ovn-nb.ovsschema
index 5f2f2bf22..865dd3491 100644
--- a/ovn/ovn-nb.ovsschema
+++ b/ovn/ovn-nb.ovsschema
@@ -1,7 +1,7 @@
{
"name": "OVN_Northbound",
- "version": "5.3.4",
- "cksum": "1155817817 9975",
+ "version": "5.4.0",
+ "cksum": "4176761817 11225",
"tables": {
"NB_Global": {
"columns": {
@@ -26,6 +26,11 @@
"refType": "strong"},
"min": 0,
"max": "unlimited"}},
+ "qos_rules": {"type": {"key": {"type": "uuid",
+ "refTable": "QoS",
+ "refType": "strong"},
+ "min": 0,
+ "max": "unlimited"}},
"load_balancer": {"type": {"key": {"type": "uuid",
"refTable": "Load_Balancer",
"refType": "strong"},
@@ -124,6 +129,23 @@
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"isRoot": false},
+ "QoS": {
+ "columns": {
+ "priority": {"type": {"key": {"type": "integer",
+ "minInteger": 0,
+ "maxInteger": 32767}}},
+ "direction": {"type": {"key": {"type": "string",
+ "enum": ["set", ["from-lport", "to-lport"]]}}},
+ "match": {"type": "string"},
+ "action": {"type": {"key": {"type": "string",
+ "enum": ["set", ["dscp"]]},
+ "value": {"type": "integer",
+ "minInteger": 0,
+ "maxInteger": 63}}},
+ "external_ids": {
+ "type": {"key": "string", "value": "string",
+ "min": 0, "max": "unlimited"}}},
+ "isRoot": false},
"Logical_Router": {
"columns": {
"name": {"type": "string"},
diff --git a/ovn/ovn-nb.xml b/ovn/ovn-nb.xml
index 32ddb7093..c2a1ebb3c 100644
--- a/ovn/ovn-nb.xml
+++ b/ovn/ovn-nb.xml
@@ -119,6 +119,10 @@
Access control rules that apply to packets within the logical switch.
</column>
+ <column name="qos_rules">
+ QoS marking rules that apply to packets within the logical switch.
+ </column>
+
<group title="other_config">
<p>
Additional configuration options for the logical switch.
@@ -899,6 +903,59 @@
</group>
</table>
+ <table name="QoS" title="QoS table">
+ <p>
+ Each row in this table represents one QoS rule for a logical switch
+ that points to it through its <ref column="qos_rules"/> column. The <ref
+ column="action"/> column for the highest-<ref column="priority"/>
+ matching row in this table determines a packet's QoS marking. If no row
+ matches, packets will not have any QoS marking.
+ </p>
+
+ <column name="priority">
+ <p>
+ The QoS rule's priority. Rules with numerically higher priority
+ take precedence over those with lower. If two QoS rules with
+ the same priority both match, then the one actually applied to a
+ packet is undefined.
+ </p>
+ </column>
+
+ <column name="direction">
+ <p>
+ The value of this field is similar to the <ref column="direction"
+ table="ACL" db="OVN_Northbound"/> column in the OVN Northbound
+ database's <ref table="ACL" db="OVN_Northbound"/> table.
+ </p>
+ </column>
+
+ <column name="match">
+ <p>
+ The packets that the QOS rules should match, in the same expression
+ language used for the <ref column="match" table="Logical_Flow"
+ db="OVN_Southbound"/> column in the OVN Southbound database's
+ <ref table="Logical_Flow" db="OVN_Southbound"/> table. The
+ <code>outport</code> logical port is only available in the
+ <code>to-lport</code> direction (the <code>inport</code> is
+ available in both directions).
+ </p>
+ </column>
+
+ <column name="action">
+ <p>The action to be performed on the matched packet.</p>
+ <ul>
+ <li>
+ <code>dscp</code>: The value of this action should be in the
+ range of 0 to 63 (inclusive).
+ </li>
+ </ul>
+ </column>
+
+ <column name="external_ids">
+ See <em>External IDs</em> at the beginning of this document.
+ </column>
+ </table>
+
<table name="Logical_Router_Port" title="L3 logical router port">
<p>
A port within an L3 logical router.
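
The selection semantics documented for the QoS table above (the action of the
highest-priority matching row wins; a packet with no matching row gets no
marking) can be pictured with a short standalone sketch. The rules below are
invented, and a boolean stands in for evaluating each match expression against
a packet:

/* Illustrative only: pick the highest-priority matching QoS rule, as the
 * documentation above describes.  Ties in priority are undefined, also as
 * documented, so this sketch simply keeps the first of equal-priority rules. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct qos_rule {
    int priority;      /* 0..32767, per the schema. */
    bool matches;      /* Stand-in for "does the match expression hit?" */
    uint8_t dscp;      /* 0..63, per the "action" column constraint. */
};

int
main(void)
{
    const struct qos_rule rules[] = {
        { 100, true, 46 },  /* hypothetical narrow, higher-priority rule */
        {  50, true, 10 },  /* hypothetical broad, lower-priority rule   */
    };
    const struct qos_rule *best = NULL;

    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (rules[i].matches && (!best || rules[i].priority > best->priority)) {
            best = &rules[i];
        }
    }
    if (best) {
        printf("ip.dscp = %d\n", best->dscp);   /* -> ip.dscp = 46 */
    } else {
        printf("no QoS marking\n");
    }
    return 0;
}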
diff --git a/ovn/utilities/ovn-nbctl.c b/ovn/utilities/ovn-nbctl.c
index 7643241b4..fe6f51556 100644
--- a/ovn/utilities/ovn-nbctl.c
+++ b/ovn/utilities/ovn-nbctl.c
@@ -2626,6 +2626,11 @@ static const struct ctl_table_class tables[] = {
NULL},
{NULL, NULL, NULL}}},
+ {&nbrec_table_qos,
+ {{&nbrec_table_qos, NULL,
+ NULL},
+ {NULL, NULL, NULL}}},
+
{NULL, {{NULL, NULL, NULL}, {NULL, NULL, NULL}}}
};