Docker - an overview!
Last update: 2024-05-06
Today I will talk about Docker, the friendly whale that has been showing up everywhere in development environments.
Yeah, when I first read about it I got very enthusiastic. Anyone who regularly studies new IT topics knows how annoying it is to install and configure various tools just to run some tests, polluting your computer with technologies you will discard afterwards. Docker solves this: you can centralize these settings and, when you finish your tests, throw everything away or even save a portion of the environment to reuse later.
Of course, Docker is muuuuuch more than that. It helps you through the entire development process, from development itself to the production environment.
Good, but my goal here is just to give you a general idea of what Docker is and what benefits it can bring. From there, each one can dig deeper according to their needs.
Let’s go!!!
What is it – An Overview
Docker is an open source project that packages applications in containers. It can be used to develop, deploy, and run applications, and it allows you to separate the application from the infrastructure.
Anyone who works developing applications knows how annoying it is to worry about the environment itself instead of focusing on what really matters: developing features.
Docker is based on a client-server architecture in which the client sends requests to the server (daemon), which can be local or remote. This communication happens via a REST API, and clients include the CLI (terminal) and Kitematic (GUI).
Some confusion is common when starting to study Docker. One common point is that Docker is not a virtual machine (VM).
A good comparison is made in the eBook Docker-for-Virtualization-Admin, which compares a VM to a house and Docker to an apartment. In a house, the whole structure is individual and independent, while in an apartment part of the structure is shared. You can create a Docker image containing only what your application will use, while VMs start with a complete operating system that the developer may or may not customize, depending on the application. That does not mean VMs and Docker containers cannot coexist: it is perfectly possible to run a Docker container inside a VM.
In a simplistic way, we can think of Docker as an easy way to manage the small portion of infrastructure needed to run an application. It is possible to have more than one of these structures communicating with each other, but I leave that detail for those who need to go deeper according to their needs.
Terminology
Docker Engine: is an open source containerization technology for building and containerizing your applications.
Docker Swarm: native clustering solution for Docker
Daemon: the server-side process that manages containers
CLI: the command-line client that communicates with the Docker daemon
Docker Machine: creates a Docker host on your computer, in the cloud, or in your data center
Docker Desktop (daemon and CLI): a simple application to build, share, and run containerized applications and microservices. However, the terms of use were changed and a license is now required in some cases. As an alternative, you can use, for instance, Colima.
Image: read-only template with instructions for creating a Docker container.
Dockerfiles: scripts that automate the process of building images; they define what will exist inside the containers. A Dockerfile is a simple text file with a list of commands that tells the Docker client what should go into the image.
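To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Node.js app (the base image, file names, and port are my assumptions, not from any specific project):

```shell
# Write a minimal example Dockerfile for a hypothetical Node.js app
cat > Dockerfile <<'EOF'
# Base image: every Dockerfile starts FROM an existing image
FROM node:18-alpine
# Working directory inside the image
WORKDIR /app
# Copy the dependency manifest first so this layer is cached across builds
COPY package.json .
RUN npm install
# Copy the rest of the application code
COPY . .
# Document the port the app listens on
EXPOSE 3000
# Default command executed when a container starts from this image
CMD ["node", "server.js"]
EOF
# Build the image from it (requires a running Docker daemon):
# docker build -t my-app .
```

Note how the instructions are ordered from least to most frequently changing, so that cached layers are reused as much as possible.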
Image Layers: the instructions in a Dockerfile translated into the image. The order of the instructions is important, and each layer can be reused in a new build.
Containers: a runnable instance of an image; a standardized unit of software; a package of code plus the dependencies to run that code. A container always has the same application and execution behavior.
Attached and detached containers: an attached container runs in the foreground, so you can see the logs but the terminal is blocked. A detached container runs in the background and the terminal stays free; you can use the 'logs' command to see its output.
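A quick sketch of the difference (the container names and the nginx image are just placeholders, and a running Docker daemon is assumed):

```shell
# Attached (foreground): logs stream to the terminal, which stays blocked
docker run --name web-attached nginx

# Detached (background) with -d: the terminal is freed immediately
docker run -d --name web-detached nginx

# Inspect the output of a detached container
docker logs web-detached
# Follow the logs continuously, similar to attached mode
docker logs -f web-detached

# Re-attach to a running detached container
docker attach web-detached
```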
Docker Hub: a registry of images. You can see how to use it here.
Volumes: a mechanism for persisting data, managed by Docker. The contents live outside the container's scope: volumes are folders on the host machine mounted into containers. They can be anonymous or named. An anonymous volume is removed automatically when the container is removed if you used '--rm' to start the container. If not, the volume is not removed, but a new anonymous volume is attached to the container when it starts again.
Bind Mounts: a concept similar to volumes, but managed by the developer, who defines the folder or path on the host machine. Bind mounts are better suited for persistent, editable data.
Data:
- Application (code + environment): stored in the image; read-only
- Temporary app data (e.g. user input): stored in the container or in anonymous volumes; read/write but temporary
- Permanent app data (e.g. user accounts): stored in named volumes or bind mounts; read/write and permanent
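The three storage mechanisms can be sketched with the `-v` flag (image name, volume name, and paths are made-up examples; a Docker daemon is assumed):

```shell
# Anonymous volume: only a container path is given, Docker picks the name;
# removed together with the container because of --rm
docker run --rm -v /app/temp my-app

# Named volume: survives container removal and is managed by Docker
docker volume create app-data
docker run -v app-data:/app/data my-app

# Bind mount: an absolute host path chosen by the developer
docker run -v "$(pwd)":/app my-app

# List the volumes Docker is managing
docker volume ls
```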
Network containers: makes the app available on a network, meaning communication with the outside. [1][2][3]
- Communication
- container to WWW: a container can send requests to the outside world
- container to localhost: e.g., using a database running on localhost. It is necessary to use the special hostname Docker provides to find it (mongodb://host.docker.internal:27017/mydatabase)
- container to container: either manually, via the IP address retrieved with the inspect command (mongodb://172.17.0.2:27017/mydatabase), or by using the network resource and adding both containers to the same network
- Drivers
- bridge: Containers can find each other by name if they are in the same Network (default)
- host: For standalone containers, isolation between container and host system is removed
- overlay: Multiple Docker daemons (i.e. Docker running on different machines) are able to connect with each other.
- macvlan: Set a custom MAC address to a container
- none: All networking is disabled.
- Third-party plugins
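The container-to-container case above can be sketched with a user-defined bridge network, where containers find each other by name (network and container names, and the `my-app` image, are placeholders; a Docker daemon is assumed):

```shell
# Create a user-defined bridge network
docker network create my-net

# Start a database container attached to that network
docker run -d --name mongodb --network my-net mongo

# A second container on the same network reaches it by container name,
# so no hard-coded IP address from 'docker inspect' is needed
docker run -d --name app --network my-net \
  -e MONGO_URL=mongodb://mongodb:27017/mydatabase my-app
```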
Docker Compose: a tool for defining and running multi-container applications. Services are containers, and Compose supports publishing ports and managing environment variables, volumes, and networks. [Reference]
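As a sketch, a compose file for the app-plus-database scenario could look like this (service names, image tags, ports, and the volume are assumptions for illustration):

```shell
# Write a minimal docker-compose.yml for a hypothetical app + database
cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .
    ports:
      - "3000:3000"          # host:container port mapping
    environment:
      - MONGO_URL=mongodb://mongodb:27017/mydatabase
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    volumes:
      - app-data:/data/db    # named volume for persistent database files
volumes:
  app-data:
EOF
# Start all services together (requires Docker):
# docker compose up -d
# Tear everything down:
# docker compose down
```

Compose automatically puts all services on a shared network, so `app` can reach the database simply as `mongodb`.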
Commands
- CMD: instruction in Dockerfile that set the command to be executed when running a container from an image
- ENTRYPOINT: instruction that allow to configure a container that will run as an executable
- CMD vs ENTRYPOINT: both are instructions to run when the container start, but ENTRYPOINT sets the process to run, while CMD supplies default arguments to that process
- EXPOSE: indicates the ports on which a container listens for connections. Ports exposed from the container.
- Port Mapping: ports defined internally in the container and published on the host machine.
- docker ps
- docker run
- docker stop
- docker exec
- docker kill
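The CMD vs ENTRYPOINT distinction above can be sketched in a small Dockerfile fragment (the ping example is my own illustration, not from the original text):

```shell
# Write a tiny Dockerfile demonstrating ENTRYPOINT vs CMD
cat > Dockerfile.ping <<'EOF'
FROM alpine
# ENTRYPOINT fixes the process that always runs
ENTRYPOINT ["ping", "-c", "3"]
# CMD supplies a default argument that the user can override
CMD ["localhost"]
EOF
# With Docker available, you could then run:
# docker build -f Dockerfile.ping -t pinger .
# docker run pinger               -> runs: ping -c 3 localhost
# docker run pinger example.com   -> runs: ping -c 3 example.com (CMD overridden)
```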
If you don't want to install Docker but would like to test it, you can use this playground.
Best Practices
- Have small images
- Use swarm whenever possible
- Use continuous integration for application testing and deployment
- Create ephemeral containers, that is, containers that can be simply replaced.
- Do not install unnecessary things
- Each container must have a goal
Development to Production
After all the local work, the idea is to push our image to production. For that, it is necessary to have a hosting provider that will receive our image and run the instances.
One way to do it is manually: going into AWS, for instance, creating an EC2 instance and all the necessary structure around it (VPC, subnet, security group), connecting to the instance via SSH, then installing and running Docker. It is considered manual because the developer has to manage all the steps.
Automated management (deploying the app/container) can be done using ECS (Elastic Container Service).
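In either case, getting the image to the server usually goes through a registry. A hedged sketch (the repository and tag names are placeholders, and Docker Hub credentials are assumed):

```shell
# Authenticate against the registry (Docker Hub by default)
docker login

# Tag the local image with the repository name on the registry
docker tag my-app my-dockerhub-user/my-app:1.0

# Upload it; the hosting provider (EC2, ECS, ...) can then pull and run it
docker push my-dockerhub-user/my-app:1.0
```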