Docker Machine Provisioning on AWS

Next up in the Docker Machine series of integration posts is spinning up some cloud resources on Amazon EC2. The model is the same as in previous Docker Machine posts: boot a VM running as a Docker host, add your cloud credentials for the provider, and use your regular docker client commands against the remote Docker host in Amazon.

The Docker Machine docs at docs.docker.com are excellent. These posts are just my notes as I get to know the Docker Machine provisioning harness.

Docker Machine


  • To provision on AWS you need three things: an access key ID, a secret key, and your VPC ID.
  • First, if you haven't before, sign up for the free tier at AWS, valid for a year.
  • Next, retrieve your VPC ID from Services -> 'VPC Dashboard' -> 'Your VPCs'.
  • Next, grab your access key ID and secret key for AWS. If you have never set up a user account before, simply do so from IAM Console -> Users. It will display the one-time secret and ID.
  • Lastly, attach a policy to the new user account to enable API calls for resources. For simplicity's sake, you can add "AdministratorAccess".

The docker-machine create examples were verified. Thanks to @botchagalupe for verifying them with me.

Export your account ENV variables:
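Something along these lines, substituting your own values (the amazonec2 driver picks these variables up from the environment):

$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
$ export AWS_VPC_ID=<your-vpc-id>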

This isn't required, but it can be easier/cleaner to keep your credentials in the shell's environment variables.

Verify that the AWS credentials are exported into your shell (these will need to be re-added if you switch to a new tab or terminal):
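A quick check; the variables you see will be whatever you exported:

$ env | grep AWS
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_VPC_ID=<your-vpc-id>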

Let's Get Cloud Bursty

The default region ID is us-east-1.
Inside of the region is the zone, which will be just a letter. Look closely at the options in the Docker Machine docs, as there are a lot of moving parts. A default subnet is also used. Later in the post is an example of provisioning with a subnet argument passed. The available zones can be viewed in Amazon's documentation.

!!Note!!: if you delete the default subnet for the VPC, you will be required to provide a subnet, per AWS SOP. The only way to restore the default subnet is to contact AWS support and have them create a new VPC for you.

Next create the machine like so:
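A sketch of the create with the amazonec2 driver; the machine name 'test-instance' is the one referenced later in the post, and if you exported the variables above, the key and VPC flags can be omitted:

$ docker-machine create --driver amazonec2 \
    --amazonec2-access-key $AWS_ACCESS_KEY_ID \
    --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
    --amazonec2-vpc-id $AWS_VPC_ID \
    test-instance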

It will take about 30-60 seconds for the VM to boot up and then return a successful create message.

Now Run Some Containers!

Now you have an excellent interface to all of your clouds.

For testing, let's spin up a bash shell so you can poke around in the container:
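Something like the following, assuming your docker client is pointed at the new machine (the ubuntu image is just an example):

$ docker run -it ubuntu /bin/bash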

Verify we are in fact on our cloud instance for funzies:
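From inside that shell, a quick sanity check: containers share the host's kernel, so uname will show the Ubuntu kernel of the EC2 host. From your laptop you can also grab the instance's public IP:

root@<container-id>:/# uname -a
$ docker-machine ip test-instance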

To remove the container, you use the normal docker commands:
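For example, list the containers and remove the test container by ID:

$ docker ps -a
$ docker rm -f <container-id>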

To switch between the docker machines you are managing, notice the asterisk '*' by 'test-instance' when you list the populated machines. It means that any docker commands, like spinning up a new container with 'docker run', will be performed on the active Docker host/machine.
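Illustrative output; the exact columns vary a bit by docker-machine version:

$ docker-machine ls
NAME            ACTIVE   DRIVER      STATE     URL
test-instance   *        amazonec2   Running   tcp://<public-ip>:2376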

To switch the active host, simply use the active command:
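In the docker-machine release this post was written against, that is (newer releases drop this in favor of eval $(docker-machine env <name>)):

$ docker-machine active test-instance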

To remove the Docker host/VM running the docker daemon with your containers simply run:
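Which is simply (this terminates the EC2 instance along with the local machine profile):

$ docker-machine rm test-instance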

Booting a Docker Machine with a Subnet Specified

The only difference when specifying a subnet is the added --amazonec2-subnet-id argument. The subnet must be created by you via the API or the VPC web UI.

1) In order to specify a subnet ID you need to first create a subnet in the VPC.

AWS VPC Subnet Creation

2) Record the subnet ID and pass it into the docker-machine create. Since docker-machine defaults to us-east-1 and zone a, if you create the subnet there you can leave those parameters out. If the subnet were in, say, zone b, you would need to specify it in the create.

3) In this case I created a subnet in the region us-east-1 and zone a; concatenated, those are us-east-1a (which is the default in the docker-machine parameters).

If you were to pass that region and zone explicitly to docker-machine, it would be:

--amazonec2-region us-east-1
--amazonec2-zone a

I am emphasizing the zones because, if anything, that is what is going to trip folks up.
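Putting it together, a sketch of the create with a subnet specified; <your-subnet-id> is the ID recorded in step 2, the machine name 'test-instance2' is arbitrary, and the region/zone flags are shown for illustration even though they match the defaults:

$ docker-machine create --driver amazonec2 \
    --amazonec2-access-key $AWS_ACCESS_KEY_ID \
    --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
    --amazonec2-vpc-id $AWS_VPC_ID \
    --amazonec2-region us-east-1 \
    --amazonec2-zone a \
    --amazonec2-subnet-id <your-subnet-id> \
    test-instance2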

Then view the new machine.

Here is output from another instance, but with the -D debug flag showing more details of the create:

Troubleshooting

If you want to test your AWS permissions prior to getting started with machine, you can download the AWS CLI client. (This is totally optional; it is just another way to troubleshoot AWS.)
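For example, via pip; 'aws configure' will prompt for the same access key ID, secret key, and a default region:

$ pip install awscli
$ aws configure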

You can also modify those keys/values in '~/.aws/credentials' and '~/.aws/config'.

Next, verify the AWS permissions for the client account by querying AWS for the user list:
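If the credentials and attached policy are good, this returns the IAM user list as JSON instead of an access-denied error:

$ aws iam list-users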

It is probably safe to say most of the issues you will encounter are going to be related to your AWS ENVs. I listed a couple of examples and the resulting error messages for reference.

If you specify the wrong VPC (which I did the first go around, doh), you will see something like this:

Another potential issue is specifying a subnet ID in the wrong availability zone.

The resulting AWS API reply

The aws API client can list all of the resources, which you may prefer over the web UI.
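For example, listing the subnets in your account; the JSON reply includes each subnet's ID, VPC, and availability zone:

$ aws ec2 describe-subnets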

That's it! Easy peasy. It's pretty cool seeing the efficiencies in the cloud that you can get from one single EC2 instance. That OpEx efficiency of application virtualization is profound, and few outside of the Googles of the world and some crafty web-scales that were ahead of their time are even starting to scratch the surface. Over the next couple of nights I will dig into Google GCE, Rackspace (and anywhere else offering trials) with docker machine and then dig into network use cases. To be clear, I'm using CSPs that have a free trial so anyone can hack along with me.

Next up is Docker Machine on Rackspace Pub Cloud →

About the Author

Brent Salisbury: I have over 20 years of experience wearing various hats, from network engineer to architect, ops, and software engineer. More at Brent's LinkedIn.

  1. Manuel (06-12-2015)


    Fantastic post. But what if you want to manage this remote host from a different machine? How could I manage it by means of docker-machine, or at least over an SSH connection?
    Thanks in advance

    • Brent Salisbury (06-13-2015)


      Hey Manuel! Let me make sure I got your question right.

      Scenario: You have a few machines populated, and the associated profiles for each machine, on, let's say, the laptop you are running docker-machine commands from. You want those credentials to be populated on another host?

      You can copy the profile from one machine to another and carry your tokens/certs with you. By default they are stored in ~/.docker/machine

      Here is a listing of the directory with the machine profiles for each machine in the docker-machine list:


      $ ls ~/.docker/machine/machines/
      .DS_Store .active aws-machine/ digitalocean-machine/ google-machine/ virtualbox-machine/ vmwarefusion-machine/
      [11:32:23] (='o'=) [~]$ ls ~/.docker/machine/machines/digitalocean-machine/
      ca.pem cert.pem config.json id_rsa id_rsa.pub key.pem server-key.pem server.pem

      Just copy them to another machine to the default location or specify the path. Since you can specify the storage path I reckon you could share it but eh.

      -s, --storage-path "/Users/brent/.docker/machine" Configures storage path [$MACHINE_STORAGE_PATH]

      All that said, Swarm integration might help what you are looking for or I could see a token on DTR or something being possible.

      • Manuel (06-14-2015)


        Thank you very much Brent

        If I get you correctly, I could copy the aws-machine profile, for example, to a different computer in the default place (~/.docker/machine/machines/) and manage that remote AWS docker instance as on the original computer.

        I’ll try it, thanks again