path: root/Documentation/howto
author    Mark Michelson <mmichels@redhat.com>  2019-09-06 10:33:03 -0400
committer Ben Pfaff <blp@ovn.org>               2019-09-06 14:54:58 -0700
commit    f3e24610ea18eb873dc860f1710432e9aacd27fd (patch)
tree      a3bbf718a77f9a85d43b002540b177a887011cc9 /Documentation/howto
parent    9b0064a3cad754e2ef20efe61054ea6ca8dbbbde (diff)
download  openvswitch-f3e24610ea18eb873dc860f1710432e9aacd27fd.tar.gz
Remove OVN.
OVN is separated into its own repo. This commit removes the OVN source, OVN
tests, and OVN documentation. It also removes mentions of OVN from most
documentation. The only place where OVN has been left is in changelogs/NEWS,
since we shouldn't mess with the history of the project.

There is an exception here. The ovsdb-cluster tests rely on ovn-nbctl and
ovn-sbctl to run. Therefore those OVN utilities, as well as their
dependencies, remain in the repo with this commit.

Acked-by: Numan Siddique <nusiddiq@redhat.com>
Signed-off-by: Mark Michelson <mmichels@redhat.com>
Signed-off-by: Ben Pfaff <blp@ovn.org>
Diffstat (limited to 'Documentation/howto')
-rw-r--r--  Documentation/howto/docker.rst                 326
-rw-r--r--  Documentation/howto/firewalld.rst              107
-rw-r--r--  Documentation/howto/index.rst                    9
-rw-r--r--  Documentation/howto/openstack-containers.rst   135
4 files changed, 0 insertions, 577 deletions
diff --git a/Documentation/howto/docker.rst b/Documentation/howto/docker.rst
deleted file mode 100644
index a68b02fdb..000000000
--- a/Documentation/howto/docker.rst
+++ /dev/null
@@ -1,326 +0,0 @@
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
- Convention for heading levels in Open vSwitch documentation:
-
- ======= Heading 0 (reserved for the title in a document)
- ------- Heading 1
- ~~~~~~~ Heading 2
- +++++++ Heading 3
- ''''''' Heading 4
-
- Avoid deeper levels because they do not render well.
-
-===================================
-Open Virtual Networking With Docker
-===================================
-
-This document describes how to use Open Virtual Networking with Docker 1.9.0
-or later.
-
-.. important::
-
- Only Docker 1.9.0 and later come with support for multi-host networking.
- Consult www.docker.com for instructions on how to install Docker.
-
-.. note::
-
- You must build and install Open vSwitch before proceeding with the below
- guide. Refer to :doc:`/intro/install/index` for more information.
-
-Setup
------
-
-For multi-host networking with OVN and Docker, Docker has to be started with a
-distributed key-value store. For example, if you decide to use Consul as your
-distributed key-value store and your host IP address is ``$HOST_IP``, start
-your Docker daemon with::
-
- $ docker daemon --cluster-store=consul://127.0.0.1:8500 \
- --cluster-advertise=$HOST_IP:0
-
-OVN provides network virtualization to containers. OVN's integration with
-Docker currently works in two modes - the "underlay" mode or the "overlay"
-mode.
-
-In the "underlay" mode, OVN requires an OpenStack setup to provide container
-networking. In this mode, one can create logical networks and connect
-containers running inside VMs, standalone VMs (without any containers running
-inside them), and physical machines to the same logical network. This is a
-multi-tenant, multi-host solution.
-
-In the "overlay" mode, OVN can create a logical network amongst containers
-running on multiple hosts. This is a single-tenant (extendable to multi-tenants
-depending on the security characteristics of the workloads), multi-host
-solution. In this mode, you do not need a pre-created OpenStack setup.
-
-For either mode to work, you must install and start Open vSwitch on each
-VM/host where you plan to run your containers.
-
-.. _docker-overlay:
-
-The "overlay" mode
-------------------
-
-.. note::
-
- OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.
-
-1. Start the central components.
-
- The OVN architecture has a central component which stores your networking
- intent in a database. On one of your machines, with an IP address of
- ``$CENTRAL_IP``, where you have installed and started Open vSwitch, you will
- need to start some central components.
-
- Start the ovn-northd daemon. This daemon translates networking intent from
- Docker, stored in the ``OVN_Northbound`` database, into logical flows in the
- ``OVN_Southbound`` database. For example::
-
- $ /usr/share/openvswitch/scripts/ovn-ctl start_northd
-
- With Open vSwitch 2.7 or later, you need to run the following additional
- commands (see the ``ovn-nb`` manpage for finer control over the types of
- connection allowed)::
-
- $ ovn-nbctl set-connection ptcp:6641
- $ ovn-sbctl set-connection ptcp:6642
-
-2. One-time setup
-
- On each host where you plan to spawn your containers, you will need to run
- the below command once. You may need to run it again if your OVS database
- gets cleared. It is harmless to run it again in any case::
-
- $ ovs-vsctl set Open_vSwitch . \
- external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
- external_ids:ovn-nb="tcp:$CENTRAL_IP:6641" \
- external_ids:ovn-encap-ip=$LOCAL_IP \
- external_ids:ovn-encap-type="$ENCAP_TYPE"
-
- where:
-
- ``$LOCAL_IP``
- is the IP address via which other hosts can reach this host. This acts as
- your local tunnel endpoint.
-
- ``$ENCAP_TYPE``
- is the type of tunnel that you would like to use for overlay networking.
- The options are ``geneve`` or ``stt``. Your kernel must have support for
- your chosen ``$ENCAP_TYPE``. Both ``geneve`` and ``stt`` are part of the
- Open vSwitch kernel module that is compiled from this repo. If you use the
- Open vSwitch kernel module from upstream Linux, you will need a minimum
- kernel version of 3.18 for ``geneve``. There is no ``stt`` support in
- upstream Linux. You can verify whether you have the support in your kernel
- as follows::
-
- $ lsmod | grep $ENCAP_TYPE
-
- In addition, each Open vSwitch instance in an OVN deployment needs a unique,
- persistent identifier, called the ``system-id``. If you install OVS from
- distribution packaging for Open vSwitch (e.g. .deb or .rpm packages), or if
- you use the ovs-ctl utility included with Open vSwitch, it automatically
- configures a system-id. If you start Open vSwitch manually, you should set
- one up yourself. For example::
-
- $ id_file=/etc/openvswitch/system-id.conf
- $ test -e $id_file || uuidgen > $id_file
- $ ovs-vsctl set Open_vSwitch . external_ids:system-id=$(cat $id_file)
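 As a quick sanity check (optional, not required by the setup), you can dump
 the ``external_ids`` just configured and confirm that ``ovn-remote``,
 ``ovn-nb``, ``ovn-encap-ip``, ``ovn-encap-type``, and ``system-id`` all
 appear:

```shell
# Show only the external_ids column of the Open_vSwitch table; all of the
# keys set during the one-time setup should be listed in the output.
$ ovs-vsctl --columns=external_ids list Open_vSwitch .
```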
-
-3. Start the ``ovn-controller``.
-
- You need to run the below command on every boot::
-
- $ /usr/share/openvswitch/scripts/ovn-ctl start_controller
-
-4. Start the Open vSwitch network driver.
-
- By default, Docker uses the Linux bridge for networking, but it supports
- external drivers. To use Open vSwitch instead of the Linux bridge, you will
- need to start the Open vSwitch driver.
-
- The Open vSwitch driver uses Python's ``flask`` module to listen for Docker's
- networking API calls. If your host does not have the ``flask`` module,
- install it::
-
- $ sudo pip install Flask
-
- Start the Open vSwitch driver on every host where you plan to create your
- containers. Refer to the note on ``$OVS_PYTHON_LIBS_PATH`` that is used below
- at the end of this document::
-
- $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
-
- .. note::
-
- The ``$OVS_PYTHON_LIBS_PATH`` variable should point to the directory where
- Open vSwitch Python modules are installed. If you installed Open vSwitch
- Python modules via the Debian package of ``python-openvswitch`` or via pip
- by running ``pip install ovs``, you do not need to specify the PATH. If
- you installed it by following the instructions in
- :doc:`/intro/install/general`, then you should specify the PATH. In this
- case, the PATH depends on the options passed to ``./configure``. It is
- usually either ``/usr/share/openvswitch/python`` or
- ``/usr/local/share/openvswitch/python``
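 As an illustration only (the probed paths are the usual defaults, but your
 ``./configure`` options may place the modules elsewhere), the path lookup and
 driver start could be scripted as:

```shell
# Hypothetical helper: probe the common install locations for the Open
# vSwitch Python modules and export OVS_PYTHON_LIBS_PATH accordingly.
for d in /usr/share/openvswitch/python /usr/local/share/openvswitch/python; do
    if [ -d "$d" ]; then
        export OVS_PYTHON_LIBS_PATH="$d"
        break
    fi
done

# If neither directory exists, a pip-installed "ovs" package needs no PATH
# tweak and the variable simply stays empty.
PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
```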
-
-Docker has built-in primitives that closely match OVN's logical switch and
-logical port concepts. Consult Docker's documentation for all the possible
-commands. Here are some examples.
-
-Create a logical switch
-~~~~~~~~~~~~~~~~~~~~~~~
-
-To create a logical switch named ``foo`` on subnet ``192.168.1.0/24``, run::
-
- $ NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
-
-List all logical switches
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- $ docker network ls
-
-You can also look at this logical switch in OVN's northbound database by
-running the following command::
-
- $ ovn-nbctl --db=tcp:$CENTRAL_IP:6641 ls-list
-
-Delete a logical switch
-~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- $ docker network rm bar
-
-
-Create a logical port
-~~~~~~~~~~~~~~~~~~~~~
-
-Docker creates your logical port and attaches it to the logical network in a
-single step. For example, to attach a logical port to network ``foo`` inside a
-container named ``busybox``, run::
-
- $ docker run -itd --net=foo --name=busybox busybox
-
-List all logical ports
-~~~~~~~~~~~~~~~~~~~~~~
-
-Docker does not currently have a CLI command to list all logical ports, but
-you can look at them in the OVN database by running::
-
- $ ovn-nbctl --db=tcp:$CENTRAL_IP:6641 lsp-list $NID
-
-Create and attach a logical port to a running container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- $ docker network create -d openvswitch --subnet=192.168.2.0/24 bar
- $ docker network connect bar busybox
-
-Detach and delete a logical port from a running container
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can delete your logical port and detach it from a running container
-by running:
-
-::
-
- $ docker network disconnect bar busybox
-
-.. _docker-underlay:
-
-The "underlay" mode
--------------------
-
-.. note::
-
- This mode requires that you have an OpenStack setup pre-installed with
- OVN providing the underlay networking.
-
-1. One-time setup
-
- An OpenStack tenant creates a VM with one or more network interfaces that
- belong to management logical networks. The tenant needs to fetch the
- port-id associated with the interface via which they plan to send the
- container traffic inside the spawned VM. This can be obtained by running
- the below command to fetch the 'id' associated with the VM::
-
- $ nova list
-
- and then by running::
-
- $ neutron port-list --device_id=$id
-
- Inside the VM, download the OpenStack RC file that contains the tenant
- information (henceforth referred to as ``openrc.sh``). Edit the file and
- append the following line with the previously obtained port-id::
-
- $ export OS_VIF_ID=$port_id
-
- After this edit, the file will look something like::
-
- #!/bin/bash
- export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
- export OS_TENANT_ID=fab106b215d943c3bad519492278443d
- export OS_TENANT_NAME="demo"
- export OS_USERNAME="demo"
- export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
-
-2. Create the Open vSwitch bridge
-
- If your VM has one ethernet interface (e.g. ``eth0``), you will need to add
- that device as a port to an Open vSwitch bridge ``breth0`` and move its IP
- address and route related information to that bridge. (If it has multiple
- network interfaces, you will need to create and attach an Open vSwitch
- bridge for the interface via which you plan to send your container
- traffic.)
-
- If you use DHCP to obtain an IP address, then you should kill the DHCP
- client that was listening on the physical Ethernet interface (e.g. eth0) and
- start one listening on the Open vSwitch bridge (e.g. breth0).
-
- Depending on your VM, you can make the above step persistent across reboots.
- For example, if your VM is Debian/Ubuntu-based, read
- ``openvswitch-switch.README.Debian`` in the ``debian`` directory. If your VM
- is RHEL-based, refer to :doc:`/intro/install/rhel`.
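 The bridge setup above can be sketched as follows, assuming the interface is
 ``eth0`` with address 10.0.0.5/24 and default gateway 10.0.0.1 (all
 illustrative values; substitute your VM's actual addressing):

```shell
# Create the bridge and move eth0 onto it.
$ ovs-vsctl add-br breth0
$ ovs-vsctl add-port breth0 eth0

# Move the IP address and default route from eth0 to breth0.
$ ip addr flush dev eth0
$ ip addr add 10.0.0.5/24 dev breth0
$ ip link set breth0 up
$ ip route add default via 10.0.0.1 dev breth0
```

 Run these from the VM console rather than over SSH, since connectivity drops
 briefly while the address moves between devices.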
-
-3. Start the Open vSwitch network driver
-
- The Open vSwitch driver uses Python's ``flask`` module to listen for Docker's
- networking API calls. The driver also uses OpenStack's
- ``python-neutronclient`` libraries. If your host does not have Python's
- ``flask`` module or ``python-neutronclient``, you must install them. For
- example::
-
- $ pip install python-neutronclient
- $ pip install Flask
-
- Once installed, source the ``openrc`` file::
-
- $ . ./openrc.sh
-
- Start the network driver and provide your OpenStack tenant password when
- prompted::
-
- $ PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-underlay-driver \
- --bridge breth0 --detach
-
-From here on, you can use the same Docker commands as described in
-`docker-overlay`_.
-
-Refer to the ovn-architecture manpage (``man ovn-architecture``) to
-understand OVN's architecture in detail.
diff --git a/Documentation/howto/firewalld.rst b/Documentation/howto/firewalld.rst
deleted file mode 100644
index 0dc455ea8..000000000
--- a/Documentation/howto/firewalld.rst
+++ /dev/null
@@ -1,107 +0,0 @@
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
- Convention for heading levels in Open vSwitch documentation:
-
- ======= Heading 0 (reserved for the title in a document)
- ------- Heading 1
- ~~~~~~~ Heading 2
- +++++++ Heading 3
- ''''''' Heading 4
-
- Avoid deeper levels because they do not render well.
-
-===================================
-Open Virtual Network With firewalld
-===================================
-
-firewalld is a service that allows for easy administration of firewalls. OVN
-ships with a set of service files that can be used with firewalld to allow
-for remote connections to the northbound and southbound databases.
-
-This guide will describe how you can use these files with your existing
-firewalld setup. Setup and administration of firewalld is outside the scope
-of this document.
-
-Installation
-------------
-
-If you have installed OVN from an RPM, then the service files for firewalld
-will automatically be installed in ``/usr/lib/firewalld/services``. This
-includes installation via the yum or dnf package managers.
-
-If you have installed OVN from source, then from the top level source
-directory, issue the following commands to copy the firewalld service files:
-
-::
-
- $ cp rhel/usr_lib_firewalld_services_ovn-central-firewall-service.xml \
- /etc/firewalld/services/
- $ cp rhel/usr_lib_firewalld_services_ovn-host-firewall-service.xml \
- /etc/firewalld/services/
-
-
-Activation
-----------
-
-Assuming you are already running firewalld, you can issue the following
-commands to enable the OVN services.
-
-On the central server (the one running ``ovn-northd``), issue the following::
-
- $ firewall-cmd --zone=public --add-service=ovn-central-firewall-service
-
-This will open TCP ports 6641 and 6642, allowing for remote connections to the
-northbound and southbound databases.
-
-On the OVN hosts (the ones running ``ovn-controller``), issue the following::
-
- $ firewall-cmd --zone=public --add-service=ovn-host-firewall-service
-
-This will open UDP port 6081, allowing for geneve traffic to flow between the
-controllers.
-
-Variations
-----------
-
-When installing the XML service files, you have the choice of copying them to
-``/etc/firewalld/services`` or ``/usr/lib/firewalld/services``. The former is
-recommended since the latter can be overwritten when firewalld is upgraded.
-
-The above commands assume your underlay network interfaces are in the
-"public" firewalld zone. If your underlay network interfaces are in a separate
-zone, then adjust the above commands accordingly.
-
-The ``--permanent`` option may be passed to the above firewall-cmd invocations
-in order for the services to be permanently added to the firewalld
-configuration. This way it is not necessary to re-issue the commands each
-time the firewalld service restarts.
-
-The ovn-host-firewall-service only opens port 6081. This is because the
-default protocol for OVN tunnels is geneve. If you are using a different
-encapsulation protocol, you will need to modify the XML service file to open
-the appropriate port(s). For VXLAN, open port 4789. For STT, open port 7471.
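As an alternative to editing the XML file by hand, firewall-cmd can amend the
service definition in place. A sketch for the VXLAN case (verify against your
firewalld version)::

```shell
# Permanently add UDP 4789 (VXLAN) to the OVN host service, then reload so
# the change takes effect in the running firewall.
$ firewall-cmd --permanent --service=ovn-host-firewall-service \
    --add-port=4789/udp
$ firewall-cmd --reload
```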
-
-Recommendations
----------------
-
-The firewalld service files included with the OVS repo are meant as a
-convenience for firewalld users. All the service files do is open the common
-ports used by OVN. No additional security is provided. To ensure a more
-secure environment, it is a good idea to do the following:
-
-* Use tools such as iptables or nftables to restrict access to known hosts.
-* Use SSL for all remote connections to OVN databases.
-* Use role-based access control for connections to the OVN southbound
- database.
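
As an illustration of the SSL recommendation above, the OVN databases can be
switched from plain TCP to SSL listeners roughly as follows (the certificate
paths are placeholders; generating the keys and certificates is up to you)::

```shell
# Point each database at its key, certificate, and CA certificate, then
# replace the ptcp listeners with pssl ones on the standard ports.
$ ovn-nbctl set-ssl /path/to/privkey.pem /path/to/cert.pem /path/to/cacert.pem
$ ovn-nbctl set-connection pssl:6641
$ ovn-sbctl set-ssl /path/to/privkey.pem /path/to/cert.pem /path/to/cacert.pem
$ ovn-sbctl set-connection pssl:6642
```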
diff --git a/Documentation/howto/index.rst b/Documentation/howto/index.rst
index 9a3487be3..60fb8a717 100644
--- a/Documentation/howto/index.rst
+++ b/Documentation/howto/index.rst
@@ -50,12 +50,3 @@ OVS
sflow
dpdk
-OVN
----
-
-.. toctree::
- :maxdepth: 1
-
- docker
- openstack-containers
- firewalld
diff --git a/Documentation/howto/openstack-containers.rst b/Documentation/howto/openstack-containers.rst
deleted file mode 100644
index 692fe25e5..000000000
--- a/Documentation/howto/openstack-containers.rst
+++ /dev/null
@@ -1,135 +0,0 @@
-..
- Licensed under the Apache License, Version 2.0 (the "License"); you may
- not use this file except in compliance with the License. You may obtain
- a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- License for the specific language governing permissions and limitations
- under the License.
-
- Convention for heading levels in Open vSwitch documentation:
-
- ======= Heading 0 (reserved for the title in a document)
- ------- Heading 1
- ~~~~~~~ Heading 2
- +++++++ Heading 3
- ''''''' Heading 4
-
- Avoid deeper levels because they do not render well.
-
-================================================
-Integration of Containers with OVN and OpenStack
-================================================
-
-Isolation between containers is weaker than isolation between VMs, so some
-environments deploy containers for different tenants in separate VMs as an
-additional security measure. This document describes how to create containers
-inside VMs and how to securely connect them to logical networks. The created
-logical network can include VMs, containers, and physical machines as
-endpoints. To better understand the proposed integration of containers with
-OVN and OpenStack, this document describes the end-to-end workflow with an
-example.
-
-* An OpenStack tenant creates a VM (say VM-A) with a single network interface
- that belongs to a management logical network. The VM is meant to host
- containers. OpenStack Nova chooses the hypervisor on which VM-A is created.
-
-* A Neutron port may have been created in advance and passed in to Nova with
- the request to create a new VM. If not, Nova will issue a request to Neutron
- to create a new port. The ID of the logical port from Neutron will also be
- used as the vif-id for the virtual network interface (VIF) of VM-A.
-
-* When VM-A is created on a hypervisor, its VIF gets added to the Open vSwitch
- integration bridge. This creates a row in the Interface table of the
- ``Open_vSwitch`` database. As explained in the :doc:`integration guide
- </topics/integration>`, the vif-id associated with the VM network interface
- gets added in the ``external_ids:iface-id`` column of the newly created row
- in the Interface table.
-
-* Since VM-A belongs to a logical network, it gets an IP address. This IP
- address is used to spawn containers (either manually or through container
- orchestration systems) inside that VM and to monitor the health of the
- created containers.
-
-* The vif-id associated with the VM's network interface can be obtained by
- making a call to Neutron using tenant credentials.
-
-* This flow assumes a component called a "container network plugin". If you
- take Docker as an example for containers, you could envision the plugin to be
- either a wrapper around Docker or a feature of Docker itself that understands
- how to perform part of this workflow to get a container connected to a
- logical network managed by Neutron. The rest of the flow refers to this
- logical component that does not yet exist as the "container network plugin".
-
-* All the calls to Neutron will need tenant credentials. These calls can
- either be made from inside the tenant VM as part of a container network
- plugin or from outside the tenant VM (if the tenant is not comfortable using
- temporary Keystone tokens from inside the tenant VMs). For simplicity, this
- document explains the workflow using the former method.
-
-* The container hosting VM will need Open vSwitch installed in it. The only
- work for Open vSwitch inside the VM is to tag network traffic coming from
- containers.
-
-* When a container needs to be created inside the VM with a container network
- interface that is expected to be attached to a particular logical switch, the
- network plugin in that VM chooses any unused VLAN (this VLAN tag only needs
- to be unique inside that VM, which limits the number of container interfaces
- to 4096 inside a single VM). This VLAN tag is stripped out in the hypervisor
- by OVN and is only useful as a context (or metadata) for OVN.
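- The "choose any unused VLAN" step above can be sketched as a tiny shell
- helper (hypothetical; not part of any shipped plugin)::

```shell
# pick_vlan: print the lowest VLAN tag in 1-4095 that is not already in use.
# Arguments: the list of tags currently in use inside this VM.
pick_vlan() {
    used=" $* "
    for tag in $(seq 1 4095); do
        case "$used" in
            *" $tag "*) ;;            # tag already taken, keep looking
            *) echo "$tag"; return 0 ;;
        esac
    done
    return 1                          # all 4095 tags exhausted
}

pick_vlan 1 2 4    # prints 3, the lowest free tag
```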
-
-* The container network plugin then makes a call to Neutron to create a
- logical port. In addition to all the inputs currently needed by a Neutron
- port-creation call, it sends the vif-id and the VLAN tag as inputs.
-
-* Neutron in turn will verify that the vif-id belongs to the tenant in
- question and then use the OVN-specific plugin to create a new row in the
- ``Logical_Switch_Port`` table of the OVN Northbound Database. Neutron
- responds with an IP address and MAC address for that network interface. Thus
- Neutron becomes the IPAM system, providing unique IP and MAC addresses
- across VMs and containers in the same logical network.
-
-* The Neutron API call above to create a logical port for the container could
- add a relatively significant amount of time to container creation. However,
- an optimization is possible here. Logical ports could be created in advance
- and reused by the container system doing container orchestration. Additional
- Neutron API calls would only be needed if the port needs to be attached to a
- different logical network.
-
-* When a container is eventually deleted, the network plugin in that VM may
- make a call to Neutron to delete that port. Neutron in turn will delete the
- entry in the ``Logical_Switch_Port`` table of the OVN Northbound Database.
-
-As an example, consider Docker containers. Since Docker currently does not
-have a network plugin feature, this example uses a hypothetical wrapper around
-Docker to make calls to Neutron.
-
-* Create a logical switch::
-
- $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f create network LS1
-
- The above command will make a call to Neutron with the credentials to create
- a logical switch. The above is optional if the logical switch has already
- been created from outside the VM.
-
-* List networks available to the tenant::
-
- $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f list networks
-
-* Create a container and attach an interface to the previously created switch
- as a logical port::
-
- $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f --vif-id=$VIF_ID \
- --network=LS1 run -d --net=none ubuntu:14.04 /bin/sh -c \
- "while true; do echo hello world; sleep 1; done"
-
- The above command will make a call to Neutron with all the inputs it
- currently needs to create a logical port. In addition, it passes the $VIF_ID
- and an unused VLAN. Neutron will add that information in OVN and return a
- MAC address and IP address for that interface. ovn-docker will then create
- a veth pair, insert one end inside the container as 'eth0', and attach the
- other end to a local OVS bridge as an access port on the chosen VLAN.