cli bindings and modes of operation #21

Merged · 1 commit · Aug 1, 2015
104 changes: 75 additions & 29 deletions README.md
@@ -3,6 +3,8 @@ docker-ovs-plugin

### QuickStart Instructions

The quickstart instructions describe how to start the plugin in **nat mode**. Flat mode is covered in the Flat Mode section below.

1. Install the Docker experimental binary from the instructions at: [Docker Experimental](https://github.com/docker/docker/tree/master/experimental). (stop other docker instances)
- Quick Experimental Install: `wget -qO- https://experimental.docker.com/ | sh`
1. Install and start Open vSwitch.
@@ -25,14 +27,8 @@ docker-ovs-plugin
```
$ sudo ovs-vsctl set-manager ptcp:6640
```

3. Create the `ovsbr-docker0` bridge by hand:

```
$ sudo ovs-vsctl add-br ovsbr-docker0
```

4. Start Docker with the following:

3. Start Docker with the following:

```
$ sudo docker -d --default-network=ovs:ovsbr-docker0
@@ -44,48 +40,96 @@ docker-ovs-plugin
# echo 'DOCKER_OPTS="--default-network=ovs:ovsbr-docker0"' >> /etc/default/docker
# service docker restart
```
5. Create the socket the plugin uses:

```
$ sudo su
# mkdir -p /usr/share/docker/plugins
# touch /usr/share/docker/plugins/ovs.sock
```

6. Next start the plugin. A pre-compiled x86_64 binary can be downloaded from the [binaries](https://github.com/gopher-net/docker-ovs-plugin/tree/master/binaries) directory. **Note:** Running inside a container is a todo, pop it into issues if you want to help contribute that.
4. Next start the plugin. A pre-compiled x86_64 binary can be downloaded from the [binaries](https://github.com/gopher-net/docker-ovs-plugin/tree/master/binaries) directory. **Note:** Running inside a container is a todo, pop it into issues if you want to help contribute that.

```
$ wget -O ./docker-ovs-plugin https://github.com/gopher-net/docker-ovs-plugin/raw/master/binaries/docker-ovs-plugin-0.1-Linux-x86_64
$ chmod +x docker-ovs-plugin
$ ./docker-ovs-plugin
```


Running the binary with no options is the same as running the following. Any of these fields can be customized; just make sure your gateway is on the same network/subnet as the specified bridge subnet.

```
$ ./docker-ovs-plugin --gateway=172.18.40.1 --bridge-subnet=172.18.40.0/24 -mode=nat
```

If you pass a subnet but not a gateway, the plugin currently assumes the first usable address of that subnet as the gateway. For example, with a /24 subnet, the .1 address on the network will be used.
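Under that assumption, the following two invocations should be equivalent; the second simply spells out the gateway that the first derives:

```
$ ./docker-ovs-plugin --bridge-subnet=172.18.40.0/24 -mode=nat
# should behave the same as explicitly passing the first usable address:
$ ./docker-ovs-plugin --bridge-subnet=172.18.40.0/24 --gateway=172.18.40.1 -mode=nat
```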

For debugging, or just extra logs from the sausage factory, add the debug flag: `./docker-ovs-plugin -d`

6. Run some containers and verify they can ping one another with `docker run -it --rm busybox` or `docker run -it --rm ubuntu` etc, or any other docker images you prefer. Alternatively, paste a few dozen or more containers running in the background and watch the ports provision and de-provision in OVS with `docker run -itd busybox`
5. Run some containers and verify they can ping one another with `docker run -it --rm busybox` or `docker run -it --rm ubuntu` etc., or any other Docker images you prefer. Alternatively, start a few dozen or more containers in the background and watch the ports provision and de-provision in OVS with `docker run -itd busybox` (a convenience loop for this is sketched after the example output below).

```
INFO[0000] OVSDB network driver initialized
INFO[0000] Plugin configuration:
container subnet: [172.18.40.0/24]
container gateway: [172.18.40.1]
bridge name: [ovsbr-docker0]
bridge mode: [nat]
mtu: [1450]
INFO[0000] OVS network driver initialized successfully
INFO[0005] Dynamically allocated container IP is: [ 172.18.40.2 ]
INFO[0005] Attached veth [ ovs-veth0-ac097 ] to bridge [ ovsbr-docker0 ]
INFO[0009] Deleted OVS port [ ovs-veth0-ac097 ] from bridge [ ovsbr-docker0 ]
```
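If you want to exercise the port add/delete handling shown in the log above, one convenient way (just a helper loop, not part of the plugin) is to start a batch of background containers and then remove them:

```
$ for i in $(seq 1 12); do docker run -itd busybox; done
# watch the ports appear in the plugin log or `ovs-vsctl show`, then
# clean up (this removes all running containers on the host):
$ docker rm -f $(docker ps -q)
```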

**Additional Notes**:
- In the argument passed to `--default-network`, the plugin is identified via `ovs`; more specifically, by the socket file, which currently defaults to `/usr/share/docker/plugins/ovs.sock`.
- The default bridge name in the example is `ovsbr-docker0`.
- The bridge name is temporarily hardcoded. That and more will be configurable via flags. (Help us define and code those flags).
### Flat Mode

There are two generic modes, `flat` and `nat`. The default mode is `nat`, since it requires no orchestration with the underlying network: the container address space is hidden behind iptables masquerading.
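To make the NAT behavior concrete, the masquerading boils down to a rule of roughly the following shape on the Docker host. This is only an illustration of the concept for the default subnet; the exact rules, and whether the plugin or the operator manages them, are not covered here.

```
# illustrative only: hide the default container subnet behind the host address
# for traffic leaving via any interface other than the OVS bridge
$ sudo iptables -t nat -A POSTROUTING -s 172.18.40.0/24 ! -o ovsbr-docker0 -j MASQUERADE
```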


- flat is simply an OVS bridge with the container link attached to it. An example would be a Docker host plugged into a data center port on a `192.168.1.0/24` subnet. You would start the plugin like so:

```
$ docker-ovs-plugin --gateway=192.168.1.1 --bridge-subnet=192.168.1.0/24 -mode=flat
```

- Containers now start attached to an OVS bridge. It could be tagged or untagged, but either way it is isolated and unable to communicate with anything outside of its bridge domain. In this case, you either add VXLAN tunnels to other bridges in the same bridge domain, or add an `eth` interface to the bridge to allow access to the underlying network when traffic leaves the Docker host. To do so, you simply add the `eth` interface to the OVS bridge. Neither the bridge nor the eth interface needs an IP address, since traffic from the container is strictly L2. **Warning:** if you are remoted into the physical host, make sure the ethernet interface you attach to the bridge is not also your management interface, since the eth interface no longer uses the IP address it had. The IP would need to be migrated to ovsbr-docker0 in this case (a sketch of that migration follows below). Allowing underlying network access to an OVS bridge can be done like so:

```
ovs-vsctl add-port ovsbr-docker0 eth2
```

Add an address to ovsbr-docker0 if you want an L3 interface on the L2 domain for the Docker host (handy for troubleshooting), but it isn't required, since flat mode cares only about MAC addresses and VLAN IDs like any other L2 domain.
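As a sketch of the migration mentioned in the warning above, moving the host's address from the uplink to the bridge looks roughly like this. The interface name `eth2` and the address `192.168.1.10/24` are placeholders for whatever your host actually uses, and this should be done from a console rather than over the interface being moved:

```
# placeholders: eth2 and 192.168.1.10/24 stand in for your real uplink and address
$ sudo ip addr del 192.168.1.10/24 dev eth2
$ sudo ip addr add 192.168.1.10/24 dev ovsbr-docker0
$ sudo ip link set ovsbr-docker0 up
# then attach the uplink to the bridge as shown above
$ sudo ovs-vsctl add-port ovsbr-docker0 eth2
```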

- Example of OVS with an ethernet interface bound to it for external access to the containers sitting on the same bridge. NAT mode doesn't need the eth interface because iptables is doing NAT/PAT instead of bridging all the way through.


```
$ ovs-vsctl show
e0de2079-66f0-4279-a1c8-46ba0672426e
Manager "ptcp:6640"
is_connected: true
Bridge "ovsbr-docker0"
Port "ovsbr-docker0"
Interface "ovsbr-docker0"
type: internal
Port "ovs-veth0-d33a9"
Interface "ovs-veth0-d33a9"
Port "eth2"
Interface "eth2"
ovs_version: "2.3.1"
```
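With the uplink attached as shown, a quick way to check external reachability from a container is to ping the upstream gateway. The address below is the example gateway from this section; substitute your own:

```
$ docker run -it --rm busybox ping -c 3 192.168.1.1
```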


### Additional Notes:

- In the argument passed to `--default-network`, the plugin is identified via `ovs`; more specifically, by the socket file, which currently defaults to `/run/docker/plugins/ovs.sock`.
- The default bridge name in the example is `ovsbr-docker0`.
- The bridge name is temporarily hardcoded. That and more will be configurable via flags. (Help us define and code those flags).
- Add other flags as desired such as `--dns=8.8.8.8` for DNS etc.
- To view the Open vSwitch configuration, use `ovs-vsctl show`.
- To view the OVSDB tables, run `ovsdb-client dump`. All of the mentioned OVS utils are part of the standard binary installations with very well documented [man pages](http://openvswitch.org/support/dist-docs/).
- The containers are brought up on a flat bridge. This means there is no NATing occurring. A layer 2 adjacency, such as a VLAN or overlay tunnel, is required for multi-host communications. If the traffic needs to be routed, an external process needs to act as a gateway (this is on the TODO list, so dig in if you are interested in multi-host or overlays). An example overlay port is sketched after this list.
- Download a quick video demo [here](https://dl.dropboxusercontent.com/u/51927367/Docker-OVS-Plugin.mp4).
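For the multi-host case mentioned above, one way to create that layer 2 adjacency is a VXLAN port between the `ovsbr-docker0` bridges on two hosts. This is only an illustration, not something the plugin sets up for you, and the remote IP is a placeholder for the other Docker host's address:

```
# on host A, pointing at host B (placeholder address); repeat on host B with host A's IP
$ sudo ovs-vsctl add-port ovsbr-docker0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.1.20
```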

### Hacking and Contributing

Yes!! Please see issues for todos or add todos into [issues](https://github.com/gopher-net/docker-ovs-plugin/issues)! Only rule here is no jerks.

Since this plugin uses netlink for L3 IP assigments, a Linux host that can build [vishvananda/netlink](https://github.com/vishvananda/netlink) library is required.
Since this plugin uses netlink for L3 IP assignments, a Linux host that can build [vishvananda/netlink](https://github.com/vishvananda/netlink) library is required.

1. Install [Go](https://golang.org/doc/install), OVS as listed above, and a kernel >= 3.19.

@@ -94,9 +138,11 @@ Since this plugin uses netlink for L3 IP assigments, a Linux host that can build
```
git clone https://github.com/gopher-net/docker-ovs-plugin.git
cd docker-ovs-plugin/plugin
# Get the Go dependdencies
# Get the Go dependencies
go get ./...
go run main.go
# or using explicit configuration flags:
go run main.go -d --gateway=172.18.40.1 --bridge-subnet=172.18.40.0/24 -mode=nat
```

3. The rest is the same as the Quickstart Section.
2 changes: 1 addition & 1 deletion Vagrantfile
@@ -15,7 +15,7 @@ ovs-vsctl set-manager ptcp:6640
echo DOCKER_OPTS=\\"--default-network=ovs:ovsbr-docker0\\" >> /etc/default/docker
service docker restart
mkdir -p /usr/share/docker/plugins
touch /usr/share/docker/plugins/ovs.sock
touch /run/docker/plugins/ovs.sock
wget -O /home/vagrant/docker-ovs-plugin https://github.com/gopher-net/docker-ovs-plugin/raw/master/binaries/docker-ovs-plugin-0.1-Linux-x86_64
chmod +x /home/vagrant/docker-ovs-plugin
SCRIPT
Empty file modified binaries/docker-ovs-plugin-0.1-Linux-x86_64
100755 → 100644
Empty file.
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -1,7 +1,7 @@
plugin:
build: .
volumes:
- /usr/share/docker/plugins/ovs.sock:/usr/share/docker/plugins/ovs.sock
- /run/docker/plugins/ovs.sock:/run/docker/plugins/ovs.sock
- /var/run/docker.sock:/var/run/docker.sock
net: host
privileged: true
2 changes: 1 addition & 1 deletion install.sh
@@ -1,4 +1,4 @@
#!/bin/sh

touch /usr/share/docker/plugins/ovs.sock
touch /run/docker/plugins/ovs.sock
docker-compose up -d
66 changes: 38 additions & 28 deletions plugin/main.go
@@ -1,21 +1,25 @@
package main

import (
"fmt"
"os"
"path/filepath"

log "github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/gopher-net/docker-ovs-plugin/plugin/ovs"
)

const version = "0.1"
const (
version = "0.1"
ovsSocket = "ovs.sock"
pluginPath = "/run/docker/plugins/"
)

func main() {

var flagSocket = cli.StringFlag{
Name: "socket, s",
Value: "/usr/share/docker/plugins/ovs.sock",
Value: ovsSocket,
Usage: "listening unix socket",
}
var flagDebug = cli.BoolFlag{
@@ -29,16 +33,17 @@ func main() {
app.Flags = []cli.Flag{
flagDebug,
flagSocket,
ovs.FlagBridgeName,
ovs.FlagBridgeIP,
ovs.FlagBridgeSubnet,
ovs.FlagIpVlanMode,
ovs.FlagGateway,
ovs.FlagMtu,
}
app.Action = Run
app.Before = cliInit
app.Before = initEnv
app.Run(os.Args)
}

func cliInit(ctx *cli.Context) error {
func initEnv(ctx *cli.Context) error {
socketFile := ctx.String("socket")
// Default loglevel is Info
if ctx.Bool("debug") {
@@ -47,40 +52,45 @@ func cliInit(ctx *cli.Context) error {
log.SetLevel(log.InfoLevel)
}
log.SetOutput(os.Stderr)
// Verify the plugin socket path and filename were passed
sockDir, fileHandle := filepath.Split(socketFile)
if fileHandle == "" {
log.Fatalf("Socket file path and name are required. Ex. /usr/share/docker/plugins/<plugin_name>.sock")
}
// Make the plugin filepath and parent dir if it does not already exist
if err := os.MkdirAll(sockDir, 0755); err != nil && !os.IsExist(err) {
log.Warnf("Could not create net plugin path directory: [ %s ]", err)
}
// If the plugin socket file already exists, remove it.
if _, err := os.Stat(socketFile); err == nil {
log.Debugf("socket file [ %s ] already exists, deleting..", socketFile)
removeSock(socketFile)
}
log.Debugf("Plugin socket path is [ %s ] with a file handle [ %s ]", sockDir, fileHandle)
initSock(socketFile)
return nil
}

// Run initializes the driver
func Run(ctx *cli.Context) {
var d ovs.Driver
var err error
if d, err = ovs.New(version); err != nil {
if d, err = ovs.New(version, ctx); err != nil {
log.Fatalf("unable to create driver: %s", err)
}
log.Info("OVSDB network driver initialized")
if err := d.Listen(ctx.String("socket")); err != nil {
log.Info("OVS network driver initialized successfully")

// concatenate the absolute path to the spec file handle
absSocket := fmt.Sprint(pluginPath, ctx.String("socket"))
if err := d.Listen(absSocket); err != nil {
log.Fatal(err)
}
}

func removeSock(sockFile string) {
err := os.Remove(sockFile)
// removeSock removes the old file handle if one exists
func removeSock(absFile string) {
err := os.RemoveAll(absFile)
if err != nil {
log.Fatalf("unable to remove old socket file [ %s ] due to: %s", sockFile, err)
log.Fatalf("Unable to remove the old socket file [ %s ] due to: %s", absFile, err)
}
}

// initSock creates the plugin filepath if it does not already exist
func initSock(socketFile string) {
if err := os.MkdirAll(pluginPath, 0755); err != nil && !os.IsExist(err) {
log.Warnf("Could not create net plugin path directory: [ %s ]", err)
}
// concatenate the absolute path to the spec file handle
absFile := fmt.Sprint(pluginPath, socketFile)
// If the plugin socket file already exists, remove it.
if _, err := os.Stat(absFile); err == nil {
log.Debugf("socket file [ %s ] already exists, unlinking the old file handle..", absFile)
removeSock(absFile)
}
log.Debugf("The plugin absolute path and handle is [ %s ]", absFile)
}
20 changes: 11 additions & 9 deletions plugin/ovs/cli.go
@@ -4,17 +4,19 @@ import "github.com/codegangsta/cli"

// Exported variables
var (
// TODO: Values need to be bound to driver. Need to modify the Driver iface. Added brOpts if we want to pass that to Listen(string)
FlagBridgeName = cli.StringFlag{Name: "bridge-name", Value: bridgeName, Usage: "name of the OVS bridge to add containers. If it does not exist, it will be created. default: --bridge-name=ovsbr-docker0"}
FlagBridgeIP = cli.StringFlag{Name: "bridge-net", Value: bridgeIfaceNet, Usage: "IP and netmask of the bridge. default: --bridge-ip=172.18.40.1/24"}
FlagBridgeSubnet = cli.StringFlag{Name: "bridge-subnet", Value: bridgeSubnet, Usage: "subnet for the containers on the bridge to use (currently IPv4 support). default: --bridge-subnet=172.18.40.0/24"}
FlagIpVlanMode = cli.StringFlag{Name: "mode", Value: ovsDriverMode, Usage: "name of the OVS driver mode [nat|flat]. (default: l2)"}
FlagBridgeSubnet = cli.StringFlag{Name: "bridge-subnet", Value: bridgeSubnet, Usage: "(required for flat L2 mode) subnet for the containers on the bridge to use. default only applies to NAT mode: --bridge-subnet=172.18.40.0/24"}
FlagMtu = cli.IntFlag{Name: "mtu", Value: defaultMTU, Usage: "MTU of the container interface (default: 1440 Note: greater than 1500 unsupported atm)"}
FlagGateway = cli.StringFlag{Name: "gateway", Value: gatewayIP, Usage: "(required for flat L2 mode) IP of the default gateway (default NAT mode: 172.18.40.1)"}
// Bridge name currently needs to match the docker -run bridge name. Leaving this unmodifiable until that is sorted
FlagBridgeName = cli.StringFlag{Name: "bridge-name", Value: bridgeName, Usage: "name of the OVS bridge to add containers. (default name: ovsbr-docker0)"}
)

// Unexported variables
var (
// TODO: Temp hardcodes, bind to CLI flags and/or dnet-ctl for bridge properties.
bridgeName = "ovsbr-docker0" // temp until binding via flags
bridgeSubnet = "172.18.40.0/24" // temp until binding via flags
bridgeIfaceNet = "172.18.40.1/24" // temp until binding via flags
gatewayIP = "172.18.40.1" // Bridge vs. GW IPs
bridgeName = "ovsbr-docker0" // TODO: currently immutable
bridgeSubnet = "172.18.40.0/24" // NAT mode can use this addr. Flat (L2) mode requires an IPNet that will overwrite this val.
gatewayIP = "" // NAT mode will use the first usable address of the bridgeSubnet."172.18.40.0/24" would use "172.18.40.1" as a gateway. Flat L2 mode requires an external gateway for L3 routing
ovsDriverMode = "nat" // Default mode is NAT.
defaultMTU = 1450
)