A Tutorial on Docker

2017-06-29 07:25:00 +0000
tutorial, Ubuntu 16.04


This post summarizes the official Get started with Docker tutorial and its commands.


Install the community edition of docker:

sudo apt remove docker docker-engine
sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt update
sudo apt install docker-ce

Check the installation:

docker --version
sudo docker run hello-world


Uninstall docker and remove its data:

sudo apt purge docker-ce
sudo rm -rf /var/lib/docker


An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

A container is a runtime instance of an image—what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.

Containers run apps natively on the host machine’s kernel, so they have better performance characteristics than virtual machines, which only get virtual access to host resources through a hypervisor. Each container runs in a discrete process, taking no more memory than any other executable.

A Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you have to map ports to the outside world and be specific about what files you want to “copy in” to that environment. After doing that, you can expect that the build of your app defined in this Dockerfile will behave exactly the same wherever it runs.

See an example of a Dockerfile.
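As an illustrative sketch only (the base image, file name, and port below are assumptions, not taken from this post), a minimal Dockerfile for a small Python app could look like:

```dockerfile
# start from an official Python runtime as the parent image
FROM python:2.7-slim

# set the working directory inside the image
WORKDIR /app

# copy the current directory into the image at /app
COPY . /app

# document that the app listens on port 80
EXPOSE 80

# run app.py when a container is launched from this image
CMD ["python", "app.py"]
```

Building this with `docker build` produces an image whose containers serve on port 80, which is why the run examples below map a host port to 80.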

# create a docker image from the Dockerfile in the current directory (`.`),
# naming and tagging it with `-t` <imagename>[:<tag>]
sudo docker build -t <imagename>:<tag> .

# see a list of built images
sudo docker images
sudo docker images -a # default hides intermediate images

# run an image with name[:tag]
sudo docker run <imagename>:<tag>

# run with mapping port 4000 to 80 via `-p`
sudo docker run -p 4000:80 <imagename>:<tag>

# run in detached mode using `-d`
sudo docker run -d -p 4000:80 <imagename>:<tag>

# see a list of containers running
sudo docker ps
sudo docker ps -a # includes ones not running

# stop a container with container id <hash>, which is shown in `ps`
sudo docker stop <hash>
# or force a container to shut down
sudo docker kill <hash>

# remove a container
sudo docker rm <hash>
sudo docker rm $(sudo docker ps -aq) # all containers

# remove an image
sudo docker rmi <imagename>
sudo docker rmi $(sudo docker images -q) # all images

Working with docker hub:

sudo docker login
# tag a local image for docker hub
sudo docker tag <imagename>:<tag> <username>/<repository>:<tag>
# push it to docker hub
sudo docker push <username>/<repository>:<tag>
# run from docker hub
sudo docker run <username>/<repository>:<tag>


Services are really just containers in production, with their configuration (image, ports, replicas, and so on) defined in a docker-compose.yml file.

Install docker-compose:

sudo -i
# substitute a docker-compose release version for <version>, e.g. 1.14.0
curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
exit

# check the installation
docker-compose --version

Add your services to docker-compose.yml; see an example.
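As a sketch (the image name is a placeholder in the same `<...>` style as the commands above), a docker-compose.yml defining a single `web` service that maps host port 4000 to container port 80 might look like:

```yaml
version: "3"
services:
  web:
    # placeholder: use the image you built or pushed earlier
    image: <username>/<repository>:<tag>
    ports:
      - "4000:80"
```

Running `docker-compose up` in the directory containing this file starts the service with the same port mapping as `docker run -p 4000:80`.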


A swarm is a group of machines that are running Docker and joined into a cluster.

A swarm is made up of multiple nodes, which can be either physical or virtual machines.

The official tutorial uses virtual machines created with the VirtualBox driver.

Install docker-machine:

# substitute a docker-machine release version for <version>, e.g. v0.12.0
curl -L https://github.com/docker/machine/releases/download/<version>/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine && \
  chmod +x /tmp/docker-machine && \
  sudo cp /tmp/docker-machine /usr/local/bin/docker-machine

# check the installation
docker-machine version

Make a swarm environment with virtual machines:

# Create virtual machines via `docker-machine`
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2

# see a list of virtual machines
docker-machine ls
# print the environment variables for connecting to a virtual machine's docker daemon
docker-machine env myvm1
# stop and start again
docker-machine stop myvm2
docker-machine start myvm2
docker-machine stop $(docker-machine ls -q) # stop all

# send a command to a virtual machine:
#   docker-machine ssh <machine>            opens an interactive terminal session
#   docker-machine ssh <machine> "command"  runs the command on the machine

# run "docker swarm init" on a virtual machine to make it the swarm manager;
# get its IP address with `docker-machine ip myvm1`
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1-ip>"

# print the worker join token (also shown by `docker swarm init`)
docker-machine ssh myvm1 "docker swarm join-token -q worker"

# add myvm2 to the swarm as a worker (2377 is the default swarm management port)
docker-machine ssh myvm2 "docker swarm join --token <token> <myvm1-ip>:2377"

# see a list of nodes of the swarm
docker-machine ssh myvm1 "docker node ls"
# get detailed information of a node
docker-machine ssh myvm1 "docker node inspect <nodeId>"

Remove a swarm and virtual machines:

# detach a node from the swarm
docker-machine ssh myvm2 "docker swarm leave"
# dissolve the swarm from the manager node
docker-machine ssh myvm1 "docker swarm leave -f"

# remove all virtual machines and their disk images
docker-machine rm $(docker-machine ls -q)


A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application.

A stack is deployed via docker stack deploy with a docker-compose.yml file. See an example.
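For instance (again with a placeholder image name), a docker-compose.yml suitable for `docker stack deploy` could add a `deploy` section that runs five replicas of the service across the swarm:

```yaml
version: "3"
services:
  web:
    # placeholder: the image you pushed to docker hub
    image: <username>/<repository>:<tag>
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
```

The `deploy` key is only honored by `docker stack deploy` on a swarm; plain `docker-compose up` ignores it.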

# copy a file to the swarm manager (to the path ~/)
docker-machine scp docker-compose.yml myvm1:~

# run this compose file by the swarm manager with naming
docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml <appname>"

# see a list of services of <appname>
docker-machine ssh myvm1 "docker stack services <appname>"
# see a list of running containers
docker-machine ssh myvm1 "docker stack ps <appname>"

# stop and remove <appname>
docker-machine ssh myvm1 "docker stack rm <appname>"