To run the Docker’d Streaming System, all you need is a host running Docker. I’ll go over setting up an image manually, then using the tool I’ve provided.

Manual Setup

Let’s say we want to set up an encoder in the cloud. First we need to ensure that we have a host that is running Docker. Install directions can be found here. We also need Flumotion worker and planet configurations. I’ll link a sample worker and planet. Finally, we need a Dockerfile that will build our image. You can find a sample Dockerfile here.

Now, let’s get building.

  1. Start off by making a build directory. I’ll call this encoder_dir in this guide. We also want a sources directory, where we will place all the resources we’d like our image to have access to.

    mkdir -p encoder_dir/sources/

  2. Let’s place our Dockerfile at encoder_dir/Dockerfile (Dockerfile is the filename where the Docker client will look for build instructions by default) and our worker and planet files at encoder_dir/sources/worker.xml and encoder_dir/sources/planet.xml respectively. We’ll notice in our Dockerfile that we are adding the sources directory to /home/ on our image. This means that all files in encoder_dir/sources are going to be available in /home/ in our image.
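
    For reference, here’s a rough sketch of what such a Dockerfile might look like. The base image, package name, and flumotion commands below are assumptions; the sample Dockerfile linked above is the authoritative version.

    FROM ubuntu:12.04
    # Install Flumotion (package name assumed)
    RUN apt-get update && apt-get install -y flumotion
    # Make everything in sources/ available under /home/ in the image
    ADD sources /home/
    # 15000 is the feed port from planet.xml; 8080 serves the HTTP stream
    EXPOSE 15000 8080
    # Run a manager with our planet file plus a worker that connects to it
    CMD flumotion-manager /home/planet.xml & flumotion-worker /home/worker.xml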
  3. Now we are ready to build!

    cd encoder_dir
    docker build -t test-encoder .

  4. Now our image is built. Let’s start it. The -P option ensures that the ports we expose in our Dockerfile get mapped to ports on the host.

    docker run -P test-encoder

  5. We now have an encoder running, but we don’t yet know what port it is on. Let’s check.

    docker ps

    This should show us which port on the host is mapped to our image’s port 15000; let’s say 49152. Now we should be able to connect a collector, using the settings in our planet.xml, to our host on port 49152. We should also note which port is mapped to 8080, let’s say 49153, since that is where we will find our stream.
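
    Alternatively, rather than scanning the docker ps output, docker port can query a single mapping directly (replace <container-id> with the ID docker ps reports):

    docker ps -q
    docker port <container-id> 15000   # prints something like 0.0.0.0:49152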

  6. Once our collector is connected, we should be able to find our stream on port 49153 of our Docker host!
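
    As a quick sanity check that the streamer is answering, we can probe it with curl. The host IP below is my Docker VM; requesting the mount point defined in planet.xml should return the stream itself.

    curl -I http://192.168.59.103:49153/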

Automatic Setup

Quick’n’Dirty

  1. Fill out config.json/config.private.json
  2. cd to pusher directory
  3. . bin/activate (if Fabric isn’t installed globally)
  4. fab prepare_deploy
  5. fab service_deploy
  6. Wait! The above step takes a while.
  7. fab service_start

Voila!

Not so quick

At a conference, we obviously don’t want to manually build and set up 15 different images! I’ve included a tool that should feel very similar to the current push_configs.py. Things are, however, slightly different. Because (thanks to Docker) we now have the capability to run multiple components on a single host, and we want these all to be highly configurable, we keep track of which components a host runs, as well as the connection details for that host, in ‘host files’. These live in the tools/flumotion-config/pusher/hosts directory and are named after the host they hold information for. Let’s cd to the pusher directory and see how we can use them.

Before we start using the new pusher tools, you need to have the Fabric library installed. Alternatively, you can use the virtualenv located in the pusher directory, which has it installed:

. bin/activate
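
If you’d rather install Fabric system-wide instead of using the bundled virtualenv, the usual pip route should work:

pip install Fabric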

prepare_deploy

Now that we have Fabric accessible, let’s autogenerate our host files from config.json and config.private.json.

fab prepare_deploy

If we look in the hosts directory, it is now populated with a file for each host that is going to be involved in our system. If we look at a particular file, say 192.168.59.103 (my Docker VM), it might look something like:

{
    "services": [
        {
            "planet_template": "encoder.xml",
            "worker_template": "worker.xml",
            "docker_template": "Dockerfile_base",
            "type": "flumotion-encoder",
            "conf": {
                "justintv_user": false,
                "flumotion-mixer": "192.168.59.103",
                "preview": "http://preview.timvideos.us/%23pycon-ab/latest.png",
                "group": "bc",
                "flumotion-password-crypt": "12CsGd8FRcMSM",
                "title": "Grand Ballroom AB",
                "irclog": "http://logs.timvideos.us/%23pycon-ab/preview.log.html",
                "twitter": "@pycon OR #pycon OR #pyconus",
                "flumotion-user": "username",
                "flumotion-salt": "12345",
                "flumotion-collector": "192.168.59.103",
                "channel": "#pycon-ab",
                "flumotion-encoder": "192.168.59.103",
                "link": "https://us.pycon.org/2012/schedule/",
                "flumotion-password": "password",
                "flumotion-encoder-port": 49152,
                "justintv_key": false,
                "logo": "/static/logo/pycon.png"
            }
        },
        {
            "planet_template": "collector.xml",
            "worker_template": "worker.xml",
            "docker_template": "Dockerfile_base",
            "type": "flumotion-collector",
            "conf": {
                "justintv_user": false,
                "flumotion-mixer": "192.168.59.103",
                "preview": "http://preview.timvideos.us/%23pycon-ab/latest.png",
                "group": "bc",
                "flumotion-password-crypt": "12CsGd8FRcMSM",
                "title": "Grand Ballroom AB",
                "irclog": "http://logs.timvideos.us/%23pycon-ab/preview.log.html",
                "twitter": "@pycon OR #pycon OR #pyconus",
                "flumotion-user": "username",
                "flumotion-salt": "12345",
                "flumotion-collector": "192.168.59.103",
                "channel": "#pycon-ab",
                "flumotion-encoder": "192.168.59.103",
                "link": "https://us.pycon.org/2012/schedule/",
                "flumotion-password": "password",
                "flumotion-encoder-port": 49152,
                "justintv_key": false,
                "logo": "/static/logo/pycon.png"
            }
        }
    ],
    "host": "192.168.59.103",
    "connection_details": {
        "password": "tcuser",
        "keyfile": "",
        "user": "docker",
        "port": "22"
    }
}

We can point out a few things. This host is running two ‘services’, or components: an encoder and a collector for the bc group. We are connecting to this host with the username ‘docker’ and the password ‘tcuser’, and we aren’t using a key file. We can also see the template files being used for each service.

If you’d prefer to do things similarly to the old method of deployment, you don’t need to worry about the host files, except for specifying the connection details! They do make very specific configurations possible, though.
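
For example, to connect with an SSH key instead of a password, we could edit that host file’s connection_details by hand (the values here are hypothetical):

"connection_details": {
    "password": "",
    "keyfile": "~/.ssh/id_rsa",
    "user": "docker",
    "port": "22"
}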

service_deploy and host_load

Assuming we specified the connection details correctly, we can now build the components on the host.

fab service_deploy[:group[,component]]

where group and component are optional arguments. If we only want to deploy the bc group’s encoder, we can do

fab service_deploy:bc,flumotion-encoder

We may also want to deploy only encoders, for all groups. That would look like:

fab service_deploy:all,flumotion-encoder

If we want to deploy all components for a group, we specify similarly:

fab service_deploy:bc,all

Default arguments are all groups and all components. We can also use the host_load command if, say, we want to deploy all components for a single host, like so:

fab host_load:hosts='...'
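
For example, to deploy everything defined for my Docker VM (multiple hosts can be listed, separated by semicolons, per Fabric’s per-task host syntax):

fab host_load:hosts=192.168.59.103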

service_start and host_start

If you’ve followed the instructions above, you now have component images on your machine running Docker. We can bring components up in exactly the same way we built them, with service_start or host_start, like so:

fab service_start
fab service_start:bc,flumotion-encoder
fab service_start:all,flumotion-collector
fab host_start:hosts=192.168.59.103

Again, service_start starts components regardless of their host, while host_start starts all components on the given host(s).

service_stop and host_stop

This is how we bring our system down! Same as above, we can bring down individual components. Some examples below:

fab service_stop
fab service_stop:all,flumotion-encoder
fab host_stop:hosts=192.168.59.103

Changing configurations

Let’s say we want our encoder for the bc group to use a different worker template, and that file is located in the pusher directory and named worker_different.xml. We can modify the host file directly OR use the command-line tool provided:

fab set_worker_file:filename[,group[,component]]
fab set_worker_file:worker_different.xml
fab set_worker_file:worker_different.xml,all,flumotion-encoder
fab set_worker_file:worker_different.xml,ab,flumotion-encoder

Now that our system knows we are using a different template, we need to rebuild our images. Let’s say we ran fab set_worker_file:worker_different.xml,all,flumotion-encoder. We now need to stop these containers, rebuild the images, and start them again.

fab service_stop:all,flumotion-encoder
fab service_deploy:all,flumotion-encoder
fab service_start:all,flumotion-encoder

Or, equivalently:

fab service_reload:all,flumotion-encoder

Voila! All of our encoders are now running with worker_different.xml!