Last update: 2024-05-06

Today I will talk about Docker, the friendly whale that has been appearing everywhere in development environments.

Yeah, when I first read about it I was very enthusiastic. Anyone who regularly studies new IT topics knows how annoying it is to install and configure various tools just to run some tests, polluting your computer with technologies you will discard afterwards. Docker solves this: you can centralize these settings and, when you finish your tests, throw everything away, or even save a portion of the environment to reuse later.

Of course, Docker is muuuuuch more than that. It helps you through the entire development process, from development itself to the production environment.

Good, but my goal here is just to give you a general idea of what Docker is and what benefits it can bring. From there, each one can go deeper according to their needs.

Let’s go!!!


What is it – An Overview

Docker is an open source project that packages applications in containers. It can be used to develop, deploy, and run applications, and it allows you to separate the application from the infrastructure.

Anyone who develops applications knows how annoying it is to worry about the environment itself instead of focusing on what matters: developing features.

Docker is based on a client-server architecture where the client initiates a request to the server (daemon), which can be local or remote. This communication is performed via a REST API, the CLI (terminal), or a GUI such as Kitematic.

Some confusion is common when starting to study Docker. One example is the fact that Docker is not a virtual machine (VM).

A good comparison was made in the eBook Docker-for-Virtualization-Admin, which compares a VM to a house and a Docker container to an apartment. In a house, the whole structure is individual and independent, while in an apartment part of the structure is shared. You can create a Docker image containing only what your application will use, while a VM starts with a complete operating system that the developer may or may not customize for the application. That does not mean VMs and Docker containers cannot coexist: it is possible to run a Docker container inside a VM.

In a simplistic way, we can consider Docker a tool to manage, in an easy way, the small portion of the infrastructure needed to run an application. It is possible to have more than one such structure communicating with each other, but I leave that detail for those who need to go deeper according to their needs.


Terminology

Docker Engine: an open source containerization technology for building and containerizing your applications.

Docker Swarm: the native clustering solution for Docker.

Daemon: the server process that manages the containers.

CLI: the command-line client that sends commands to and communicates with the Docker daemon.

Docker Machine: creates a Docker host on your computer, in the cloud, or in your data center.

Docker Desktop (daemon and CLI): a simple application to build, share, and run containerized applications and microservices. However, the terms of use were changed and a paid subscription is now required for larger companies. As an alternative you can use, for instance, Colima.

// x86/amd64 based images on Apple M1/M2 Macs
$ colima start --cpu 2 --memory 4 --arch x86_64

// Apple M1/M2 Macs
$ colima start --cpu 2 --memory 4 --arch aarch64

Image: read-only template with instructions for creating a Docker container.

$ docker build -t NAME:TAG .     // build the image with a group/name and a tag 

$ docker inspect MY_IMG          // see the details of the image

$ docker rmi IMAGE_ID            // remove the image (its containers must be removed first)
$ docker rmi $(docker images -q) // remove all images
$ docker image prune             // remove dangling (unused) images
$ docker image prune -a          // remove all images not used by a container

Dockerfiles: scripts that automate the process of building images; they define what will exist inside the containers. A Dockerfile is a simple text file with a list of instructions telling the Docker client what should go into the image.

FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]

Image Layers: the instructions in a Dockerfile translated to the image. The order of the instructions is important, because each layer can be reused (cached) in a new build.

Containers: a runnable instance of an image; a standardized unit of software; a package of code plus the dependencies needed to run that code. A container built from the same image always has the same application and execution behavior.

// execute in foreground (block the terminal - attached)
$ docker run MY_IMG_ID 

// Interactive (-it) and detached (-d)
$ docker run -itd --name=MY_CONTAINER ubuntu

// Docker in a amd64 platform and Image created in arm64 platform
$ docker run --platform linux/arm64/v8 -d --rm -p 80:80 MY_IMAGE

$ docker ps                   // list containers running
$ docker ps -q                // list container IDs
$ docker kill $(docker ps -q) // kill all running containers
$ docker rm MY_CONTAINER_NAME
$ docker rm $(docker ps -a -q)

$ docker exec -it MY_SQL mysql -u root -p // interactive

Attached and detached containers: a container runs in the foreground when attached, and you can see the logs directly; the terminal is blocked. When detached, it runs in the background and the terminal is free; you can use the 'logs' command to see the output.

// execute in detached mode (background)
$ docker run -d MY_IMG_ID 

// back to attached mode
$ docker attach MY_CONTAINER_ID

// local port 3000, docker port 80, detached, 
// remove container when it is stopped, give a specific name
$ docker run -p 3000:80 -d --rm --name MYNAME IMAGE_ID 

// start an existing container in the background (detached mode)
$ docker start MY_CONTAINER_NAME 

$ docker logs -f CONTAINER_ID

// see the logs, useful in detached mode where
// you cannot see the output in the terminal
$ docker logs MY_CONTAINER_NAME 

Docker Hub: a registry of images. See the Docker Hub documentation to learn how to use it.

// connect to Docker Hub
$ docker login 
$ docker push MY_HUB_PATH

// change the local name (tag) to the name used on Docker Hub before pushing
// you must be logged in
$ docker tag OLD_NAME:OLD_TAG DOCKER_HUB_REPO/REMOTE_BUILD_NAME:TAG 

Volumes: a mechanism for persisting data, managed by Docker. The contents live outside of the container's scope. Volumes are folders on the host machine mounted into containers. They are managed by Docker and can be anonymous or named. An anonymous volume is removed automatically with the container when you use '--rm' to start the container. If not, the volume is not removed, but a new anonymous volume is attached to the container when it starts again.
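As a hypothetical sketch (paths and the base image are placeholders), an anonymous volume can also be declared directly in the Dockerfile, so Docker manages a host location for that folder:

```dockerfile
FROM node
WORKDIR /app
COPY . .
# anonymous volume: Docker picks and manages the host location
VOLUME [ "/app/feedback" ]
CMD [ "npm", "start" ]
```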

Bind Mounts: a concept similar to volumes, but managed by the developer, who defines the folder or path on the host machine. Bind mounts are better for persistent, editable data.

Data:

  • Application (code + env) [anonymous]: the data in the image is read-only
  • Temporary App Data (e.g. user input) [named]: the data in the container is read/write and temporary
  • Permanent App Data (e.g. user accounts) [Bind Mounts]: the data in volumes is read/write and permanent

$ docker volume create MY_VOL
$ docker volume ls
$ docker volume inspect MY_VOLUME

$ docker container stop MY_CONTAINER
$ docker container rm MY_CONTAINER
$ docker volume rm MY_VOLUME
// remove unused anonymous volumes
$ docker volume prune 

// use a named volume
$ docker run -d -p 3000:80 --rm --name feedback-app \
-v myfeedback:/app/feedback feedback-node:volumes 

// use a path on your host machine (bind mount).
// The second (anonymous) volume avoids overwriting /app/node_modules.
// Useful to change code without rebuilding the image
$ docker run -d -p 3000:80 --rm --name feedback-app \
-v "MY_LOCAL_PATH:/app" \
-v /app/node_modules feedback-node:volumes 

Container networks: networking makes the app available on a network, i.e. enables communication with the outside. [1][2][3]

    Communication
  • container to WWW: a container can send requests to the outside world
  • container to localhost: e.g. use a database running on localhost. You must use the special hostname that Docker resolves to the host (mongodb://host.docker.internal:27017/mydatabase)
  • container to container: manually, via the IP address retrieved with the inspect command (mongodb://172.17.0.2:27017/mydatabase); or using the network resource and adding both containers to the same network
    Drivers
  • bridge: Containers can find each other by name if they are in the same Network (default)
  • host: For standalone containers, isolation between container and host system is removed
  • overlay: Multiple Docker daemons (i.e. Docker running on different machines) are able to connect with each other.
  • macvlan: Set a custom MAC address to a container
  • none: All networking is disabled.
  • Third-party plugins
$ docker network ls
$ docker network inspect bridge
$ docker network disconnect bridge MY_CONTAINER
$ docker network create -d bridge MY_BRIDGE
$ docker network connect MY_BRIDGE MY_CONTAINER

$ docker network create favorites-net
$ docker run -d --name mongodb --network favorites-net mongo
$ docker run --name favorites-app --network favorites-net -d --rm -p 3000:3000 network:dbnet

// run mongo in a container so it can be reached by an app running on localhost
$ docker run --name mongodb -d --rm -p 27017:27017 mongo

Docker Compose: a tool for defining and running multi-container applications. Services are containers, and Compose supports publishing ports and managing environment variables, volumes, and networks. [Reference]

// starts the containers in the background and leaves them running
$ docker compose up -d
$ docker compose down    // stop and delete the container and all default networks created
$ docker compose down -v // also remove the volume created
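As a minimal sketch of such a file (the service names, volume name, and ports here are hypothetical, loosely following the mongo example above), a compose file tying an app to a database could look like:

```yaml
# docker-compose.yml (hypothetical names and ports)
services:
  mongodb:
    image: mongo
    volumes:
      - data:/data/db        # named volume so the database persists
  app:
    build: .                 # build the app from the local Dockerfile
    ports:
      - "3000:3000"          # host:container port mapping
    depends_on:
      - mongodb              # start the database first
volumes:
  data:
```

With this file in place, `docker compose up -d` starts both services on a shared default network, so the app can reach the database by the service name `mongodb`.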

Commands

  • CMD: instruction in Dockerfile that set the command to be executed when running a container from an image
  • ENTRYPOINT: instruction that allow to configure a container that will run as an executable
  • CMD vs ENTRYPOINT: both are instructions to run when the container start, but ENTRYPOINT sets the process to run, while CMD supplies default arguments to that process
  • EXPOSE: indicates the ports on which a container listens for connections. Ports exposed from the container.
  • Port Mapping: maps a port defined inside the container to a port published on the host machine.
  • docker container ls
  • docker run
  • docker stop
  • docker exec
  • docker kill
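The CMD vs ENTRYPOINT distinction above can be sketched in a Dockerfile (a hypothetical example; the image name `my-sleep` is just for illustration):

```dockerfile
FROM ubuntu
# ENTRYPOINT sets the process that always runs...
ENTRYPOINT ["sleep"]
# ...while CMD supplies a default argument that can be
# overridden at run time: docker run my-sleep 10
CMD ["5"]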

If you don't want to install Docker but would like to test it, you can use this playground.


Best Practices

  • Keep images small
  • Use Swarm whenever possible
  • Use continuous integration for application testing and deployment
  • Create ephemeral containers, that is, containers that can simply be replaced
  • Do not install unnecessary things
  • Each container should have a single purpose
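One common way to keep images small is a multi-stage build: the build toolchain stays in the first stage and only the artifacts are copied into the final image. A sketch with hypothetical names (`build` stage, `/app/dist` output folder), reusing the node example from earlier:

```dockerfile
# build stage: full toolchain
FROM node AS build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# final stage: only the built artifacts, served by a small image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```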

Development to Production

After all the local work, the idea is to push our image to production. For that, it is necessary to have a hosting provider that will receive our image and run the instances.

One way to do it is manually: going into AWS, for instance, creating an EC2 instance and all the necessary structure around it (VPC, subnet, security group), connecting to the instance via SSH, and installing and running Docker. It is considered manual because the developer has to manage all the steps.

Automated management (deploying the app/container) can be done using ECS (Elastic Container Service).

Courses

  • Docker: Ferramenta essencial para Desenvolvedores
  • Docker & Kubernetes: The Practical Guide [2024 Edition]
  • Docker for the Absolute Beginner - Hands On - DevOps