Get into Docker
Have you ever heard about Docker before? Most likely. If not, don’t worry, I’ll try to summarize it for you. Docker is probably one of the hottest technologies at the moment. It has the potential to revolutionize the way we build, deploy and distribute applications, and it’s already having a huge impact on the development process.
In some cases, development environments can be so complicated that it’s hard to keep consistency between the different team members. I’m pretty sure most of us have already suffered from the “Works on my Machine” syndrome, right? One way to deal with the problem is to build Virtual Machines (VMs) with everything set up, so you can distribute them to your team. But VMs are slow, large, and you cannot access them if they are not running.
What is Docker?
Short answer: it’s like a lightweight VM. In practice that’s not quite the case, since Docker is different from a regular VM. Docker creates a container for your application, packaged with all of the required dependencies and ready to run. These containers run on a shared Linux kernel, but they are isolated from each other. This means you don’t need the usual VM guest operating system, which gives a considerable performance boost and shrinks the application size.
Let’s dig into a little more detail:
Docker Image
A Docker Image is a read-only template used to create Docker containers. Each image is built from a series of layers that compose your final image. If you need to distribute something using Ubuntu and Apache, you start with a base Ubuntu image and add Apache on top. If you later want to upgrade to a Tomcat instance, you just add another layer to your image. Instead of distributing the entire image as you would with a VM, you just release the update.
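Layers are usually described in a Dockerfile, the recipe Docker uses to build an image (building and distributing your own images is the subject of the follow-up post mentioned at the end). Here is a minimal sketch of the Ubuntu plus Apache example, where the tag and packages are just illustrative:

# Each instruction adds a new layer on top of the previous one.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
CMD ["apachectl", "-D", "FOREGROUND"]

Swapping Apache for Tomcat later only means changing the layers on top; the base Ubuntu layer stays untouched.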
Docker Registry
A Docker registry is a repository of Docker Images; the best known is the public Docker Hub. It’s the same concept as Maven repositories for Java libraries: download or upload images and you are good to go. Docker Hub already contains a huge number of images ready to use, from simple Unix distributions to full-blown application servers.
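For example, you can search Docker Hub and pull an image to your machine before ever running it (the image names here are just examples):

docker search tomcat      # list Tomcat-related images available on Docker Hub
docker pull ubuntu        # download the official Ubuntu image to your local cache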
Docker Container
A Docker Container is the runtime component of a Docker Image. You can spin up multiple containers from the same Docker Image, each in an isolated context. Docker containers can be run, started, stopped, moved, and deleted.
How do I start?
You need to install Docker, of course. Please refer to the Docker installation guides. They are pretty good and I had no problem installing the software. Make sure you follow the proper guide for your system.
Our first Docker Container
With Docker installed, you can immediately type the following in your command line:
docker run -it -p 8080:8080 tomcat
You should see the following message:
Unable to find image 'tomcat:latest' locally
And a lot of downloads starting. Like Maven when you build an application, Docker reaches out to Docker Hub and downloads everything required to run Tomcat. It takes a while. (Great, one more tool that downloads the Internet. Luckily we can use ZipRebel to download it quickly.)
After everything is downloaded, you should see the Tomcat instance booting up, and you can access it by going to http://localhost:8080 on Linux boxes. For Windows and Mac users it’s slightly more complicated. Since Docker only works in a Linux environment, to use it on Windows or Mac you need boot2docker (which you should have from the installation guide). This is in fact a VM that runs Docker on Linux completely from memory. To access the Docker containers you need to refer to this VM’s IP, which you can get with the command boot2docker ip.
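For instance (192.168.59.103 is the usual boot2docker default, but yours may differ):

boot2docker ip            # prints the VM IP, typically 192.168.59.103 by default

You would then browse to http://192.168.59.103:8080 instead of http://localhost:8080.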
Explaining the command:
docker run | The command to create and start a new Docker container. |
-it | Runs the container in interactive mode, so you can see the output after running the container. |
-p 8080:8080 | Maps the internal container port to a port on the outside host, usually your machine. Port mappings can only be set at container creation; if you don’t specify one, you need to check which port Docker assigned (an example follows below). |
tomcat | Name of the image to run. This is linked to the Docker tomcat repository, which holds the instructions so Docker knows how to run the server. |
Remember that if you stop the container and run the same command again, you are creating and running a brand new container.
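If you prefer to let Docker pick the host port, you can look the mapping up afterwards; the container name some-tomcat is just for illustration:

docker run -d -P --name some-tomcat tomcat   # -P publishes the exposed ports to random host ports
docker port some-tomcat 8080                 # shows which host port maps to the container's 8080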
Multiple Containers
You can run multiple Tomcat instances by issuing the following commands:
docker run -d -p 8080:8080 --name tomcat tomcat
docker run -d -p 9090:8080 --name web tomcat
These commands create two Tomcat containers named tomcat and web. Just remember to change the port mapping and the name. Adding a name is useful for controlling the container; if you don’t add one, Docker will randomly generate one for you.
The -d flag instructs Docker to run the container in the background. You can now control your containers with the following commands:
docker ps | Lists all the running Docker containers. Add -a to see all containers, including stopped ones. |
docker stop web | Stops the container named web. |
docker start web | Starts the container named web. |
docker rm web | Removes the container named web. |
docker logs web | Shows the logs of the container named web. |
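A quick sequence to try with the two containers created above:

docker stop web       # stop the second Tomcat instance
docker ps -a          # web is now listed with an Exited status
docker start web      # bring it back up
docker logs web       # check the Tomcat startup output again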
Connecting to the Container
If you execute the command docker exec -it web bash, you will be able to connect to the container shell and explore the environment. You can, for instance, verify the running processes with ps -ax.
radcortez:~ radcortez$ docker exec -it web bash
root@75cd742dc39e:/usr/local/tomcat# ps -ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ssl+   0:05 /usr/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=
   47 ?        S      0:00 bash
   51 ?        R+     0:00 ps -ax
root@75cd742dc39e:/usr/local/tomcat#
Interacting with the Container
Let’s add a file to the container:
echo "radcortez" > radcortez
Exit the container, but keep it running, and execute docker diff web. You are going to see a bunch of Tomcat temporary files, plus the file we just added. This command evaluates the file system differences between the running container and the original image.
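The output marks each entry as added (A), changed (C) or deleted (D). The exact paths vary, but you should spot something like this, since the file was created in the container’s working directory /usr/local/tomcat:

A /usr/local/tomcat/radcortez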
Conclusion
We only scratched the surface of Docker’s capabilities. It’s still too soon to tell whether Docker will become a mandatory tool. It’s currently seeing major adoption from big players like Google, Microsoft and Amazon. Docker may end up failing in the end, but it has certainly reopened an old discussion which doesn’t have a clear answer yet.
Related Articles
Learn how to create, build and distribute your own Docker Images in this follow-up post: