Docker Machine Provisioning on AWS
Next up in the Docker Machine series of integration posts is spinning up some cloud resources on Amazon EC2. The model is the same as in the previous Docker Machine posts: boot a VM running as a Docker host, add your cloud credentials for the provider, and use your regular docker client commands against the remote Docker host in Amazon.
The docker machine docs are excellent at docs.docker.com. These posts are just my notes as I am getting to know the docker machine provisioning harness.
You will need three things: an access key ID, a secret access key, and a VPC ID.
- First, if you haven't before, sign up for the free tier at AWS, valid for a year.
- Next, retrieve your VPC ID from Services -> 'VPC Dashboard' -> 'Your VPCs'.
- Next, grab your access key ID and secret access key for AWS. If you have never set up a user account before, simply do so from the IAM Console -> Users. It will display the one-time secret and ID.
- Lastly, attach a policy to the new user account to enable API calls for resources. For simplicity's sake, you can add "AdministratorAccess".
The docker-machine create examples were verified. Thanks to @botchagalupe for verifying them with me.
Export your account ENV variables:
This isn't required, but it can be easier/cleaner to keep your credentials in the shell's environment variables.
```shell
export AWS_ACCESS_KEY_ID=<Secret>
export AWS_SECRET_ACCESS_KEY=<Super_Top_Secret>
export AWS_VPC_ID=vpc-8752d5e2
```
Verify that the AWS credentials are exported into your shell (these will need to be re-added if you switch to a new tab or terminal):
```shell
env | grep AWS
```
Let's Get Cloud Bursty
The default region ID is us-east-1. Inside of the region is the zone; the zone will be only a letter. Look closely at the options in the Docker Machine docs, as there are a lot of moving parts. A default subnet is also used. Later in the post is an example of provisioning with a subnet argument passed. Availability zones can be viewed from Amazon here.
!!Note!! – if you delete the default subnet for the VPC, you will be required to provide a subnet, per AWS SOP. The only way to restore a default subnet is to contact AWS support and have them create a new VPC for you.
Next create the machine like so:
```shell
docker-machine -D create \
  --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-vpc-id vpc-a8e5cfcd \
  --amazonec2-zone b \
  test-instance1
```
It will take about 30-60 seconds for the VM to boot up and then return a successful create message.
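Once the create returns, you can point your local Docker client at the new host. This sketch assumes the machine name test-instance1 from the create above:

```shell
# Print the environment variables the Docker client needs for the EC2 host
docker-machine env test-instance1

# Evaluate them so subsequent `docker` commands target the remote host
eval "$(docker-machine env test-instance1)"

# Sanity check: the public IP of the new machine
docker-machine ip test-instance1
```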
Now Run Some Containers!
Now you have an excellent interface to all of your clouds.
For testing, let's spin up a bash shell so you can poke around in the container:
```shell
docker run -it ubuntu /bin/bash
# Unable to find image 'ubuntu:latest' locally
# latest: Pulling from ubuntu
# ...
# ...
root@dc7264161d79:/#
```
Verify we are in fact on our cloud instance, for funzies:
```shell
apt-get install -y curl
curl ifconfig.me
# 52.5.11.81
```
To remove the container, you use the normal docker commands:
```shell
# Grab the container ID
docker ps -lq

# Stop and delete the container
docker stop <container-id>
docker rm <container-id>

# Or a one-liner to stop and delete the most recent container
docker ps -l -q | xargs docker stop | xargs docker rm
```
To switch between the Docker machines you are managing, notice the asterisk '*' by 'test-instance' when you list the populated machines. It means that any docker commands, like spinning up a new container with 'docker run', will be performed on the active Docker host/machine.
```shell
docker-machine ls
NAME            ACTIVE   DRIVER       STATE     URL                          SWARM
dev                      virtualbox   Running   tcp://192.168.99.102:2376
test-instance   *        amazonec2    Running   tcp://52.5.11.81:2376
```
To switch the active host, simply use the active command:
```shell
docker-machine active dev
docker-machine ls
# NAME             ACTIVE   DRIVER       STATE     URL                          SWARM
# dev              *        virtualbox   Running   tcp://192.168.99.102:2376
# test-instance6            amazonec2    Running   tcp://52.5.11.81:2376
```
To remove the Docker host/VM running the docker daemon with your containers simply run:
```shell
docker-machine rm test-instance
```
Booting a Docker Machine with a Subnet Specified
The only difference when adding a specified subnet is the additional --amazonec2-subnet-id argument. The subnet must be created by you via the API or the VPC web UI.
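If you would rather use the CLI than the VPC console, something like the following should create the subnet (the CIDR block shown is an assumption; it must be a free block inside your VPC's address range):

```shell
# Create a subnet in us-east-1a inside the VPC; the CIDR is a placeholder
# and must fall within your VPC's range
aws ec2 create-subnet \
  --vpc-id $AWS_VPC_ID \
  --cidr-block 172.31.64.0/20 \
  --availability-zone us-east-1a

# The reply includes a SubnetId; that value is what gets passed to
# --amazonec2-subnet-id on create
```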
1) In order to specify a subnet ID you need to first create a subnet in the VPC.
2) Record the subnet-id and pass it into the docker-machine create. Since docker-machine defaults to region us-east-1 and zone a, if you create the subnet there you can leave those parameters out. If the subnet were in, say, zone b, you would need to specify it in the create.
3) In this case I created a subnet in the region us-east-1 and the zone a; concatenated, those are us-east-1a (which is the default in the docker-machine parameters).
If you were to pass that region and zone explicitly on the docker-machine command line, it would be:
--amazonec2-region us-east-1
--amazonec2-zone a
I am emphasizing the zones because, if anything, that is what is going to trip folks up.
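The naming rule is simply the region string with the zone letter appended; a trivial shell sanity check:

```shell
# Region ID plus zone letter yields the availability zone name
REGION=us-east-1
ZONE=a
echo "${REGION}${ZONE}"
# us-east-1a
```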
```shell
docker-machine create \
  --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-subnet-id subnet-209c260b \
  --amazonec2-zone a \
  test-aws-instance-2

# Launching instance...
# To see how to connect Docker to this machine, run: docker-machine env aws-instance-1
```
Then view the new machine.
```shell
docker-machine ls
NAME             ACTIVE   DRIVER      STATE     URL                         SWARM
aws-instance-1            amazonec2   Running   tcp://52.6.214.218:2376
```
Here is output from another instance, this time with the -D debug flag showing more details of the create:
```shell
# INFO[0000] Launching instance...
# DEBU[0001] creating key pair: test-instance
# DEBU[0001] configuring security group in vpc-42dffc27
# DEBU[0001] found existing security group (docker-machine) in vpc-42dffc27
# DEBU[0001] configuring security group authorization for 0.0.0.0/0
# DEBU[0001] launching instance in subnet subnet-209c260b
# DEBU[0002] waiting for ip address to become available
# DEBU[0021] Got the IP Address, it's "52.5.11.81"
# DEBU[0021] created instance ID i-efc00f39, IP address 52.5.11.81, Private IP address 172.31.50.167
# DEBU[0021] Settings tags for instance
# DEBU[0021] Getting to WaitForSSH function...
# DEBU[0177] generating server cert: /Users/brent/.docker/machine/machines/test-instance/server.pem ca-key=/Users/brent/.docker/machine/certs/ca.pem private-key=/Users/brent/.docker/machine/certs/ca-key.pem org=test-instance
# INFO[0184] "test-instance" has been created and is now the active machine.
# INFO[0184] To point your Docker client at it, run this in your shell: eval "$(docker-machine env test-instance)"
```
Troubleshooting
If you want to test your AWS permissions before getting started with Machine, you can install the AWS CLI client (this is entirely optional; it is just another way to troubleshoot AWS):
```shell
# Install easy_install, then the AWS CLI
curl https://bootstrap.pypa.io/ez_setup.py -o - | python
sudo easy_install awscli
complete -C aws_completer aws

aws configure
# Answer the questions:
# AWS Access Key ID [None]: ...
# etc.
```
You can also modify those key/values in '~/.aws/credentials' and '~/.aws/config':
```shell
# The two files:
#   ~/.aws/config
#   ~/.aws/credentials

$ cat ~/.aws/config
# [default]
# output = json
# region = us-east-1
```
Next verify the AWS permissions for the client account by querying AWS for the user list.
```shell
$ aws iam list-users
{
    "Users": [
        {
            "UserName": "brent",
            "Path": "/",
            "CreateDate": "2015-04-27T06:51:28Z",
            "UserId": "-----------",
            "Arn": "arn:aws:iam::---------"
        }
    ]
}
```
It is probably safe to say most of the issues you will encounter will be related to your AWS ENVs. Here are a couple of examples and the resulting error messages for reference.
If you specify the wrong VPC (which I did the first go around, doh), you will see something like this:
```shell
docker-machine -v
# docker-machine version 0.2.0 (HEAD)

docker-machine -D create \
  --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-subnet-id $AWS_SUBNET_EAST1 \
  test-instance

# INFO[0000] Launching instance...
# DEBU[0001] creating key pair: test-instance
# DEBU[0001] configuring security group in vpc-8752d5e2
# DEBU[0001] creating security group (docker-machine) in vpc-8752d5e2
# ERRO[0001] Error creating machine: Error decoding error response: Error decoding error response: http: read on closed response body
# WARN[0001] You will want to check the provider to make sure the machine and associated resources were properly removed.
# FATA[0001] Error creating machine
```
Another potential issue is specifying a subnet ID in the wrong availability zone.
```shell
docker-machine create \
  --driver amazonec2 \
  --amazonec2-access-key $AWS_ACCESS_KEY_ID \
  --amazonec2-secret-key $AWS_SECRET_ACCESS_KEY \
  --amazonec2-vpc-id $AWS_VPC_ID \
  --amazonec2-subnet-id subnet-209c260b \
  --amazonec2-zone us-east-1 \
  test-instance
```
The resulting AWS API reply:
```shell
# ERRO[0000] Error creating machine: unable to find a subnet in the zone: us-east-1a
# WARN[0000] You will want to check the provider to make sure the machine and associated resources were properly removed.
# FATA[0000] Error creating machine
```
The aws CLI client can also list all of the resources, which you may prefer over the web UI. For example:
```shell
$ aws ec2 describe-subnets
```
Returns:
```shell
{
    "Subnets": [
        {
            "VpcId": "vpc-42dffc27",
            "CidrBlock": "172.31.0.0/20",
            "MapPublicIpOnLaunch": true,
            "DefaultForAz": true,
            "State": "available",
            "AvailabilityZone": "us-east-1b",
            "SubnetId": "subnet-60542817",
            "AvailableIpAddressCount": 4091
        },
        {
            "VpcId": "vpc-42dffc27",
            "CidrBlock": "172.31.16.0/20",
            "MapPublicIpOnLaunch": true,
            "DefaultForAz": true,
            "State": "available",
            "AvailabilityZone": "us-east-1c",
            "SubnetId": "subnet-58b72401",
            "AvailableIpAddressCount": 4091
        },
        {
            "VpcId": "vpc-42dffc27",
            "CidrBlock": "172.31.48.0/20",
            "MapPublicIpOnLaunch": true,
            "DefaultForAz": true,
            "State": "available",
            "AvailabilityZone": "us-east-1a",
            "SubnetId": "subnet-209c260b",
            "AvailableIpAddressCount": 4091
        },
        {
            "VpcId": "vpc-42dffc27",
            "CidrBlock": "172.31.32.0/20",
            "MapPublicIpOnLaunch": true,
            "DefaultForAz": true,
            "State": "available",
            "AvailabilityZone": "us-east-1e",
            "SubnetId": "subnet-a7567a9d",
            "AvailableIpAddressCount": 4091
        }
    ]
}
```
That's it! Easy peasy. It's pretty cool seeing the efficiencies in the cloud that you can get from one single EC2 instance. That OpEx efficiency of application virtualization is profound, and few are starting to scratch the surface outside of the Googles of the world and some crafty web-scales that were ahead of their time. Over the next couple of nights I will dig into Google GCE, Rackspace (and anywhere else offering trials) with Docker Machine, and then dig into network use cases. To be clear, I'm using CSPs that have a free trial so anyone can hack along with me.
Fantastic Post. But what if you want to manage this remote host from a different machine? How could I manage it by means of docker-machine, or at least by a ssh connection?
Thanks in advance
Hey Manuel! Let me make sure I got your question right.
Scenario: You have a few machines populated, and the associated profiles for each machine on, let's say, the laptop you are running docker-machine commands from. You want those credentials to be populated on another host?
You can copy the profile from one machine to another and carry your tokens/certs with you. By default they are stored in ~/.docker/machine. Here is a listing of the directory with the profiles for each machine in docker-machine ls:
```shell
$ ls ~/.docker/machine/machines/
.DS_Store  .active  aws-machine/  digitalocean-machine/  google-machine/  virtualbox-machine/  vmwarefusion-machine/

$ ls ~/.docker/machine/machines/digitalocean-machine/
ca.pem  cert.pem  config.json  id_rsa  id_rsa.pub  key.pem  server-key.pem  server.pem
```
Just copy them to another machine, either into the default location or a path you specify. Since you can specify the storage path, I reckon you could even share it.
```shell
-s, --storage-path "/Users/brent/.docker/machine"   Configures storage path [$MACHINE_STORAGE_PATH]
```
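As a sketch, copying a profile to a second workstation could look like this (hostname and user are placeholders; note that config.json may embed absolute paths, so keeping the same storage path on both machines is the safest bet):

```shell
# Copy one machine profile (certs, keys, config.json) to another workstation,
# mirroring the default storage path
scp -r ~/.docker/machine/machines/aws-machine \
    user@other-laptop:~/.docker/machine/machines/

# The CA certs live one level up; copy those as well
scp -r ~/.docker/machine/certs user@other-laptop:~/.docker/machine/
```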
All that said, Swarm integration might help what you are looking for or I could see a token on DTR or something being possible.
Thank you very much Brent
If I get you correctly, I could copy the aws-machine profile, for example, to a different computer in the default place (~/.docker/machine/machines/) and manage that remote AWS docker instance as from the original computer.
I’ll try it, thanks again