Docker

A walkthrough of Docker concepts

Docker is a platform that enables developers to automate the deployment of applications inside lightweight, portable containers. Containers are isolated environments that contain everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Docker provides a way to package and distribute applications with their dependencies, ensuring consistency across different environments.

Here’s a basic overview of key concepts and commands in Docker:

Key Concepts:

  1. Container: A lightweight, standalone, and executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools.

  2. Image: A read-only template used to create containers. It packages the application code, runtime, libraries, and configuration needed to run the software; a container is a running instance of an image.

  3. Dockerfile: A text file that contains instructions for building a Docker image. It specifies the base image, adds dependencies, copies files, and defines the command to run when the container starts (a minimal example follows this list).

  4. Docker Hub: A cloud-based registry service that allows you to share and access Docker images.
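To make the Dockerfile idea concrete, here is a minimal sketch for a small Node.js app (the base image, file names, and port are assumptions for illustration, not taken from any particular project):

# Minimal Dockerfile sketch (illustrative)
FROM node:18-alpine            # base image
WORKDIR /app                   # working directory inside the image
COPY package*.json ./          # copy dependency manifests first for better layer caching
RUN npm install                # install dependencies into the image
COPY . .                       # copy the rest of the source code
EXPOSE 3000                    # document the port the app listens on
CMD ["node", "index.js"]       # command run when the container starts

Building and running it would look something like docker build -t my_image . followed by docker run -p 3000:3000 my_image.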

I have covered some of the common Docker commands in my previous blog, Running Opportunistic Networking Environment Simulator using Docker (duttaani.hashnode.dev), which shows how the ONE simulator can be used to simulate Delay Tolerant Networks.

docker exec: Executes a command inside a running container.

docker exec -it <container_id> <command>
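A common use, for example, is opening an interactive shell inside a running container (the container name below is just a placeholder):

docker exec -it my_container /bin/sh    # use /bin/bash if the image ships with it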

Docker volumes: Volumes in Docker are a way to persist and share data between containers and between the host machine and containers. Volumes provide a mechanism for storing data outside of the container filesystem, ensuring that the data persists even if the container is stopped or removed.

  • Docker supports different volume drivers that enable the use of various storage systems, such as NFS, Amazon EBS, and more. (Below is an example of a named volume; an anonymous volume is deleted when its container is removed.)
docker volume create my_volume
docker run -v my_volume:/path/in/container my_image

❯ docker volume ls
DRIVER    VOLUME NAME
local     fffda084c4637b38c5215f6bbffb7790bf9e460ddf73e2296598849c46718e90
local     minikube
local     my_volume
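As a quick illustrative check that the data really outlives its containers, you could write a file into the named volume from one throwaway container and read it back from a second one (using the alpine image here is just an assumption for a small test image):

# Write a file into the named volume from a temporary container
docker run --rm -v my_volume:/data alpine sh -c "echo hello > /data/greeting.txt"
# Read it back from a completely different container; the file is still there
docker run --rm -v my_volume:/data alpine cat /data/greeting.txt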

How could we prevent some specific files inside our container from being overwritten?

We could use anonymous volumes in our Docker image.
For example: VOLUME ["/app/node_modules"]
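As a rough sketch (building on the earlier Dockerfile example, with assumed file paths), the relevant part of a Node.js Dockerfile would look like this; the anonymous volume keeps the dependencies installed at build time from being lost when the host source folder is bind-mounted over /app at run time:

# ...base image, WORKDIR /app, COPY package*.json, RUN npm install as before...
COPY . .
# Anonymous volume: protects /app/node_modules from being overwritten
# by a bind mount such as -v "$(pwd)":/app
VOLUME ["/app/node_modules"]
CMD ["node", "index.js"]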

How could we do container-to-container communication in docker?

We could use Docker networks for this. Containers that are part of the same network can communicate with each other.

If both containers are part of the same network, Docker will resolve a container's name to its IP address. Hence one container can call another container's API by using that container's name in the request URL.

Docker provides built-in DNS resolution for containers on the same network, allowing you to use container names as hostnames.

For example: if you have two Node.js applications running in separate Docker containers that are part of the same network, you can make HTTP requests from one container to the other using the container name (a minimal sketch of both servers follows the steps below).

Container 1: API Server

Container 2: Client Server

  1. Container 1 runs an API server on port 3000. {make sure express is present in package.json}

  2. Container 2 runs a client server on port 4000. {make sure axios and express are present in package.json}

  3. The client server makes a GET request to the API server using the URL http://api_server:3000/api/data. The use of api_server as the hostname is possible because both containers are part of the same Docker network.
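A minimal sketch of the two servers could look like this (the file names are illustrative assumptions):

// Container 1: api-server/index.js (the API server)
const express = require('express');
const app = express();

app.get('/api/data', (req, res) => {
  res.json({ message: 'Hello from API server!' });
});

app.listen(3000, () => console.log('API server listening on port 3000'));

// Container 2: client-server/index.js (the client server)
const express = require('express');
const axios = require('axios');
const app = express();

app.get('/', async (req, res) => {
  // "api_server" resolves through Docker's built-in DNS because both
  // containers are attached to the same user-defined network
  const response = await axios.get('http://api_server:3000/api/data');
  res.send(response.data.message);
});

app.listen(4000, () => console.log('Client server listening on port 4000'));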

Create a custom bridge network

docker network create my_network

Run the API server container

docker run -d --name api_server --network my_network api_server_image

Run the client server container

docker run -d --name client_server --network my_network client_server_image
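To confirm the request went through, one option is to follow the client container's logs:

docker logs -f client_server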

Output:

In the above output, we can see that “Hello from API server!” was printed; hence, both Docker containers are communicating successfully over the Docker network.

How do we bring up multiple docker containers using a single file in our host machine?

We could use Docker Compose, which is a tool for defining and running multi-container Docker applications. It allows you to define a multi-container environment in a single file, making it easier to manage and deploy complex applications.

Docker Compose uses a YAML file (docker-compose.yml) to define the services, networks, and volumes of your application. The basic structure of a docker-compose.yml file looks like this:

version: '3'
services:
  web:
    image: nginx:latest
    networks:
      - front-tier
    ports:
      - "8080:80"
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example_password
    networks:
      - back-tier
    volumes:
      - pg_data:/var/lib/postgresql/data
networks:
  front-tier:
  back-tier:
volumes:
  pg_data:

In the above docker-compose file, the networks and volumes are declared at the top level, and each service references the ones it needs.
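With that file in place, the whole stack can be started and stopped with single commands; the -d flag runs the containers in the background:

# Start every service defined in docker-compose.yml
docker-compose up -d
# Stop and remove the containers and networks (add -v to also remove the named volumes)
docker-compose down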