Short Description
This is a container image of ovs (Open vSwitch). ovs is built as a userspace switch so it can run inside a container. The ovs version is 2.3.0.
Full Description

ovs is built following the document "INSTALL.userspace" in the openvswitch 2.3.0 source.
The base Linux is Ubuntu 14.04. The kernel of the host has to support the TUN/TAP driver.

Dockerfile

This container is built in 3 steps.

The 1st Step

Create a container with the necessary tools, such as uml-utilities.
Also add the openvswitch source directory, which includes the ovs binaries compiled with the make command.
This ovs is built as a userspace ovs; the ovs kernel driver is not built.
Please see INSTALL.userspace in the openvswitch source package for more information.

FROM ubuntu:14.04
MAINTAINER Toru Okatsu
RUN apt-get update
RUN apt-get install -y uml-utilities
RUN apt-get install -y make
RUN apt-get install -y python
RUN apt-get install -y gcc
ADD openvswitch-2.3.0 /root/openvswitch-2.3.0
ADD scripts /root/scripts
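The first image can be built from this Dockerfile in the usual way; a sketch, assuming the tag 0.1 for the first image (the text only names tags 0.2 and 0.3):

```shell
# Build the first image from the Dockerfile above.
# The openvswitch-2.3.0 and scripts directories must be
# present in the build context.
docker.io build -t tokatsu/ovs:0.1 .
```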

The 2nd Step

Run the first image and execute the following tasks.

  • Install ovs with the "make install" command
  • Initialize the openvswitch database
  • Make sure the TUN/TAP driver is available on the host Linux running Docker
  • Create a device file for the TUN/TAP driver

After that, commit the container as the second image.

cd /root/openvswitch-2.3.0/
make install

mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema

mkdir /dev/net
mknod /dev/net/tun c 10 200
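Committing the container produces the second image; a sketch, assuming the prepared container is named `ovs-build` (the actual container name is not given in the original):

```shell
# Commit the prepared container as the second image.
# Tag 0.2 is the base image the 3rd-step Dockerfile builds FROM.
docker.io commit ovs-build tokatsu/ovs:0.2
```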

The 3rd Step

Add a command to start ovs when the container starts.

FROM tokatsu/ovs:0.2
MAINTAINER Toru Okatsu
CMD /root/scripts/start_ovs.sh ; /bin/bash
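Building this Dockerfile yields the final image; a sketch, with the tag matching the image used in the Examples section below:

```shell
# Build the final image from the 3rd-step Dockerfile.
docker.io build -t tokatsu/ovs:0.3 .
```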

The content of /root/scripts/start_ovs.sh is as follows.

#!/bin/sh
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
                     --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
                     --pidfile --detach

ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach --log-file

Examples

Create a simple topology with one ovs container and two IP host containers.

host1----link1----ovs1----link2----host2
10.32.1.1                       10.32.1.2

docker.io run -ti --name host1 --hostname host1 ubuntu:14.04 /bin/bash
docker.io run -ti --name host2 --hostname host2 ubuntu:14.04 /bin/bash
docker.io run -ti --name ovs1 --hostname ovs1 --privileged=true tokatsu/ovs:0.3

--privileged is needed to run ovs because it uses the TUN/TAP driver.

To create links between the ovs switch and the IP hosts, netns is used. The script to create the links is at the bottom of this description.

After creating the links, run the following commands on the ovs switch to create a bridge connecting the two IP hosts.

ovs-vsctl add-br br0
ovs-vsctl set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 link1
ovs-vsctl add-port br0 link2

The error message "ovs-vsctl: Error detected while setting up 'br0'." may be displayed when executing "ovs-vsctl add-br br0". This is not a problem: ovs does not have a kernel driver, and this command creates a system-type datapath. The next command sets the datapath type of br0 to netdev, which is the userspace datapath.

The "ovs-vsctl show" command displays the configuration.

6a8c265d-e68d-4237-b0ec-5521a4bcda69
    Bridge "br0"
        Port "link1"
            Interface "link1"
        Port "br0"
            Interface "br0"
                type: internal
        Port "link2"
            Interface "link2"
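With the bridge up, connectivity can be verified from one of the hosts; a sketch, using the addresses and interface names assigned by the link script below:

```shell
# On host1 (10.32.1.1 on link1), ping host2 (10.32.1.2 on link2)
# across the userspace bridge.
ping -c 3 10.32.1.2
```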

Network Configuration

To create a virtual network for containers, netns is used. Detailed information is described in "Advanced Networking" on the Docker site.

The script to create links is as follows.

#!/bin/bash

# This script creates links between the ovs switch container
# and the IP node containers.
# It must be executed as root because ip netns needs to access
# kernel network namespaces.

# Create a link between two nodes
# ARGS: nodeA, nodeB, link_id
# nodeA has no ip address
create_link() {
    local PIDA=$1
    local PIDB=$2
    local PREFIX="10.32.1."
    local IPB=${PREFIX}"$3/24"
    local VETH="link"$3

    echo $PIDA $PIDB $PREFIX $IPB

    ip link add A type veth peer name B

    # Put an edge of the link to containers network name spaces
    ip link set A netns $PIDA
    ip netns exec $PIDA ip link set dev A name $VETH
    ip netns exec $PIDA ip link set $VETH up

    ip link set B netns $PIDB
    ip netns exec $PIDB ip link set dev B name $VETH
    ip netns exec $PIDB ip addr add $IPB dev $VETH
    ip netns exec $PIDB ip link set $VETH up
}


# Define nodes
nodes=('ovs1' 'host1' 'host2')

mkdir -p /var/run/netns
for node in "${nodes[@]}"; do
    # Get the process id of each container
    PID1=`docker.io inspect -f '{{.State.Pid}}' $node`

    # Create a symbolic link to connect the container and netns
    if [ ! -e /var/run/netns/$PID1 ]; then
        ln -s /proc/$PID1/ns/net /var/run/netns/$PID1
    fi
done

# Define links
# nodeA
# nodeB     => nodeA---link_id----nodeB
# link_id
# nodeA is a bridge and has no ip address
# nodeB is an ip node
node_A=(ovs1  ovs1 )
node_B=(host1 host2)
linkid=(1     2    )

for (( i=0; i<${#linkid[@]}; i++ )); do
    # Get PID of each container
    PID1=`docker.io inspect -f '{{.State.Pid}}' ${node_A[$i]}`
    PID2=`docker.io inspect -f '{{.State.Pid}}' ${node_B[$i]}`
    LINK=${linkid[$i]}
    create_link $PID1 $PID2 $LINK
done
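The script above can be invoked like this; a sketch, assuming it is saved as create_links.sh (the filename is not given in the original):

```shell
# Run after all three containers are started.
# Root is required: ip netns manipulates kernel network namespaces.
sudo bash create_links.sh
```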