Normalizing the Cloud with Docker Machine


Docker Machine creates a virtual machine running Docker in any of an array of locations, which I can then create containers on and ship workloads to. The locations and types you can provision to range from the who's who of cloud computing to workstation resources and the traditional on-prem resources sitting in the average enterprise DC.

  • amazonec2
  • azure
  • digitalocean
  • google
  • openstack
  • rackspace
  • softlayer
  • virtualbox
  • vmwarefusion
  • vmwarevcloudair
  • vmwarevsphere
  • (will list more as they grow)

What I get from using Docker Machine is a cloud catalogue of virtually any provider type I would want (public and private), all in a single client sitting on my laptop. Even cooler is a puzzle piece that has long been missing: what feels like a normalization of how I can use cloud resources, without having to be a ninja in each of the often disparate cloud APIs.

Having a common interface like Docker Machine lets me transcend any one provider and abstracts away virtually all remnants of the heterogeneous public, private, and sandbox implementation details that have been a time-consuming constraint for many of us for years. I also see this as good for the CSPs: making their resources easier to consume will mean faster migrations to the cloud, easier testing between dev/prod, and maybe even some rational hybrid strategies.

What this means to the consumer could be the content of an entire book, but cloud buzzword bingo is starting to shape up to be a reality. Workloads are truly portable between clouds, with a much lower technical barrier to entry via a simple interface. Multi-cloud, hybrid cloud, workload mobility (data issues aside), elasticity, cloud brokering, etc. all suddenly feel very real, with an interface that lends itself to the common enterprise ops and admins. Remove the disjointed, complex nature of multi-cloud and you no longer have to be a superhero to start building infra across clouds.

The mechanics are essentially: create a Docker host (a VM running Docker) on any of the supported cloud or local provider types. Next, register your credentials in your environment variables and choose which provider you want to spin up resources on. Then use `docker run` just as you would on your local machine, except it will spin up resources anywhere you want.

The following diagram outlines the simple process:
[Diagram: Docker Machine]

Enough chit chat, let's get hacking!

Install Docker Machine

Docker's installation instructions, found here, are really good.

In this example I am using Mac OS X, but the docker-machine commands are the same across platforms. Both Linux and Windows are also supported.

See the Docker Machine releases page on GitHub for the latest stable releases and RCs. I also have some links in an installer I use (and even update occasionally) to refresh the toolbox here.
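For reference, a typical OS X install at the time looked roughly like the following (the version number is illustrative; grab the latest from the releases page):

```shell
# Download the docker-machine binary from the GitHub releases page
# (v0.2.0 shown as an example) and make it executable
curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_darwin-amd64 \
  -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine
```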

Take a look at the docker-machine options
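A quick way to see everything the client supports:

```shell
# List all docker-machine subcommands, drivers, and global flags
docker-machine --help
```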

Install the Docker client if it isn't already installed. The following is for Darwin; here are all of the platform installs.
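On Darwin that was a single binary download, something like this (URL reflects the get.docker.com builds of the time):

```shell
# Fetch the standalone Docker client binary for OS X
curl -L https://get.docker.com/builds/Darwin/x86_64/docker-latest \
  -o /usr/local/bin/docker
chmod +x /usr/local/bin/docker
```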

– Verify the binary like so:
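Both binaries report their versions, which is enough to confirm they are on your path:

```shell
docker-machine -v   # docker-machine version
docker -v           # docker client version
```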

Start VirtualBox VM for Containers

Start a VM that will house your containers:
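A minimal create command, using the VirtualBox driver and a machine name of "dev" (the name is my choice; use whatever you like):

```shell
# Provision a boot2docker VM in VirtualBox named "dev"
docker-machine create --driver virtualbox dev
```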

List the new machine:
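The ls subcommand shows each machine's driver, state, and URL:

```shell
docker-machine ls
```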

Setting Docker ENV Variables

To view the ENV variables you need in order to talk to the newly spun-up host, run the following. You will likely need to prepend sudo to the statement.
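Assuming the machine is named "dev" as above:

```shell
# Print the environment variables for the "dev" machine
docker-machine env dev
# or, if your setup requires elevated permissions:
sudo docker-machine env dev
```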

Note: The three standard Docker ENV variables are what tell the Docker client to run commands against a running Docker daemon. Two are for the TLS crypto and certs, and the third points to the Docker host's TCP address and API port.

The following docker env command only prints the key/value pairs of the ENVs. They still need to be exported in order to be used as $FOO-style variables. In new terminal windows/tabs the values need to be re-initialized into the new session.

In the docs, the eval statement is listed without the sudo prefix.

– My environment requires sudo, so the following is what I use.
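Both variants, again assuming the machine name "dev":

```shell
# As listed in the docs (no sudo):
eval "$(docker-machine env dev)"

# What my environment requires:
eval "$(sudo docker-machine env dev)"
```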

– Either way what you are looking for are the variables in your shell like so:
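A quick grep will show them; the values below are examples (your cert path and IP will differ):

```shell
env | grep DOCKER
# example output:
#   DOCKER_TLS_VERIFY=yes
#   DOCKER_CERT_PATH=~/.docker/machine/machines/dev
#   DOCKER_HOST=tcp://192.168.99.100:2376
```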

If those values are not in your environment, you will get some errors like so:

Running Docker Containers in the new Machine

Let's start a container with a quick little test:
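A classic smoke test is a one-shot busybox container:

```shell
# Pulls the busybox image (if needed) and runs a single command in it
docker run busybox echo hello world
```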


Or you can start a bash shell to poke around in:
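For example, an interactive Ubuntu container:

```shell
# -i keeps STDIN open, -t allocates a TTY; exit the shell to stop the container
docker run -it ubuntu bash
```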


Or a web server to serve up pictures of kittahs:
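For instance, nginx published on port 80 (the container name "web" is my choice):

```shell
# Run nginx detached, mapping port 80 on the machine to port 80 in the container
docker run -d -p 80:80 --name web nginx
```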

Then test with:
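docker-machine can hand you the machine's IP, so a curl against it verifies the server (assuming the "dev" machine and port mapping above):

```shell
curl "http://$(docker-machine ip dev)/"
```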

You can also inspect the new container from the base docker command with a few different approaches:
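A few of the usual suspects (using the "web" container name from the earlier example):

```shell
docker ps            # list running containers
docker logs web      # view the container's stdout/stderr
docker inspect web   # full JSON details of the container
```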

If you run into any issues you can remove the docker host and recreate it with:
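Tearing down and rebuilding a machine is just two commands:

```shell
# Destroy the "dev" machine and provision a fresh one
docker-machine rm dev
docker-machine create --driver virtualbox dev
```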

All of the machines you create are stored by default in ~/.docker/machine/, along with a couple of temp state files, so it's easy to back up the configuration of your machine harness since those are read at runtime every time you run a docker-machine command. You can, of course, opt for alternative storage paths. If you want to copy or restore your machine list, you should be able to simply copy the ~/.docker/machine/ directory to another host and see the same remote machines.


The only issue (but not really an issue) you may see: if you close your laptop or reboot the machine where docker-machine is running, the VirtualBox image is probably no longer running. If you get a state of 'error', odds are the VM isn't running in VirtualBox (or any other provider, for that matter).

In most cases I just start the machine and VirtualBox plays nicely:
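Starting it back up is a one-liner:

```shell
docker-machine start dev
```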

If it doesn't, you can troubleshoot further with debugging enabled: docker-machine -D [args]. Debug is your friend since this is beta software.
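For example, a verbose start:

```shell
# -D turns on debug output for any docker-machine subcommand
docker-machine -D start dev
```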

There is an issue I ran into where VirtualBox returns true when starting a VM, but the VM doesn't actually start even though it reports to Docker Machine that it did.

In that case I just destroyed the old image and started a new one. This is the beauty of throwaway infra!

A couple of VirtualBox debugging commands follow:
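These let you see what VirtualBox itself thinks is running, independent of docker-machine:

```shell
VBoxManage list vms                    # all registered VMs
VBoxManage list runningvms             # VMs that are actually running
VBoxManage startvm dev --type headless # start the VM directly, no GUI
```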

If your permissions get a little screwy with sudo, you may run into this error:

That can be solved by dropping the sudo on the docker-machine create command.

If you run into an issue that resembles a name resolution issue with a message like so:

Odds are the root cause is either a proxy issue or an inability to resolve DNS names. If it is a DNS resolution issue, you can specify a DNS server in boot2docker by first SSHing to the underlying VM/Docker host/machine and popping in a DNS server or two.
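A rough sketch of that fix (8.8.8.8 is just an example resolver; note boot2docker may rewrite resolv.conf on reboot):

```shell
# SSH into the underlying boot2docker VM
docker-machine ssh dev

# Inside the VM, append a nameserver to the resolver config
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
```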

OOM Killer and Docker v1.7

If you have ever wondered how you can oversubscribe applications inside an OS as well as you can in Linux, when at runtime an application may try to lock up as much memory as possible even if it doesn't use it, it is because the kernel is a bit sneaky and only allocates much of that memory when the app actually needs it. The downside of the over-commit memory model is that when resources become starved, the kernel has to decide who to kill off with a -9 signal. In the current v1.7 master (still in development) I ran into an issue with the following error because I didn't have cgroup memory swap accounting enabled.

So when I went to spin up a Docker container on a fresh build, I got:

Your kernel does not support oom kill disable

  1. Fix cgroup memory by editing the GRUB config.
  2. Set:
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
  3. Replace GRUB_CMDLINE_LINUX_DEFAULT="quiet" with:
    GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
  4. Update the loader: update-grub
  5. If you don't have the update-grub script (Debian), you can do something like grub-mkconfig -o /boot/grub/grub.cfg
  6. Reboot the host.
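Put together, the procedure looks roughly like this (assuming the GRUB config lives at /etc/default/grub, as it does on most Debian-based systems):

```shell
# Edit the GRUB config and set the cgroup/swap accounting flags:
#   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
#   GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
sudo vi /etc/default/grub

# Regenerate the loader config and reboot to pick up the kernel args
sudo update-grub   # or: sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
```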

The GRUB swap modification will also fix the error: System error: no such directory for memory.swappiness.

Build From Source

If you want to try out the latest dev build but don’t feel like building it yourself, check out Docker Master Binaries

A quick down-and-dirty way to build a binary from source:
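From memory, the workflow of the era looked something like this (the make targets build inside a dev container, so Docker itself must already be installed):

```shell
# Clone the Docker source tree
git clone https://github.com/docker/docker.git
cd docker

# Build the dev container, then produce the static binary in ./bundles/
make build
make binary
```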

See Setup and work in a Docker development container. →

Installing Docker on Debian 8.0: Potential Errors

If you go to install on Jessie, you may run into this. Adding the backport will provide a usable binary.

To fix this, add the following source anywhere in /etc/apt/sources.list:
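The Jessie backports line I used looks like this (your mirror hostname may differ):

```
deb http://http.debian.net/debian jessie-backports main
```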

Now you will have a recent version available to install via apt-get.
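Then update and install from backports (docker.io is the Debian package name of the time; verify with apt-cache search if unsure):

```shell
sudo apt-get update
sudo apt-get install -t jessie-backports docker.io
```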

Next up

I was partly inspired by my buddies Brian and Aaron recently talking about how hard it is to stay on top of the rapidly changing world of compute over the past year or so. They mentioned they traditionally look to blogs for the emerging innovations, including Scott Lowe's blog, which is one of the top items on my reading list. These and other community-focused peeps have perennially taken the time to share and pry information out of the valley, getting it to those who don't have the time to stalk PRs and attend all the cons. I couldn't agree more, and kudos to the discussion. (I try to update the podcast links at the bottom of the page; email me if I missed yours.) For various reasons the current pace of disruption is accelerated, and as a result it's even more valuable to maintain relationships with friends and colleagues in the community to help one another stay on top of relevant information that can give you or your company the competitive edge you are likely, in some part, paid to deliver.

Now that you have the ability to provision anywhere you want, head over to Docker Hub and take a look at all of the software and projects you can spin up in the time it takes to download the container image. The time saved by being able to use other people's software builds fundamentally changed how I look at computing. What used to take days of figuring out how to install and integrate a piece of software is now done in minutes, because I can reuse others' open source work and build on top of it.

In the next few posts we will take a look at using various cloud and on-prem setups. The barriers to cloud consumption are too high today. There is also early Docker Swarm integration that will further blur the gaps between providers. The providers I am covering first are ones that offer trial accounts, so that anyone can follow along with me. Along the way, time permitting, I'll show some kewl ops and netops use cases enabled by having compute and Docker Hub images already built and waiting to be used.

I think we are starting to turn the corner on cloud computing by abstracting away primitive heterogeneous APIs that can often lead to complexity and vendor lock-in. My opinion is that harnesses like Docker Machine, which reduce complexity across all clouds rather than just one-offs, benefit both users (in easier cloud consumption) and providers (in increased volume and a new influx of customers).

For issues, PRs, and the roadmap, check out the project repo:

Disclaimer: all of these are my own personal reactions to and opinions of Docker Machine. I do my best to gear all of my posts so that a total novice can step in and follow along without too much hair pulling, yet without boring folks with detail overload. While I do work for Docker, I would be blogging about Docker anyway: you have either tried Docker or you haven't, and if you have, I don't need to convince you; even the most prickly of folk still appreciate the fun of using Docker. If I never have to care about how an OS virtualizes hardware again, it won't be soon enough.

VENOM, CVE-2015-3456, is a security vulnerability in the virtual floppy drive code used by many computer virtualization platforms. This vulnerability may allow an attacker to escape from the confines of an affected virtual machine (VM) guest and potentially obtain code-execution access to the host. Absent mitigation, this VM escape could open access to the host system and all other VMs running on that host, potentially giving adversaries significant elevated access to the host’s local network and adjacent systems. – Floppy Driver Exploits of HW Virtualization in 2015

Thanks for stopping by!

Next Up: Using Docker Machine to Provision on Microsoft Azure →

About the Author

Brent Salisbury: I have over 15 years of experience wearing various hats: network engineer, architect, devops, and software engineer. I currently have the pleasure of working at the company that develops my favorite software I have ever used, Docker. My comments here are my personal thoughts and opinions. More at Brent's Bio.

  1. RGN 05-07-2015

    This is interesting. I do agree the simplicity of docker-machine is what has been missing from other attempts at a clean abstraction that is simple enough to be useful. OpenStack comes to mind as losing its way imo… Its hard to filter the noise to projects that I should spend time getting to know. Blogs like this make it much easier. Thanks Brent.


    • Brent Salisbury 05-11-2015

      Thanks for the feedback, Raj. Always curious to hear your use cases as you explore. You aren't alone on keeping up; we are in an accelerated cycle right now without a doubt, coupled with a sense of urgency that has stuck with us since 2008 to do more with less, with no entropy in sight.