DevOps From Beginning: Docker

 Docker: 

· Docker is an open-source containerization platform by which you can pack your application and all its dependencies into a standardized unit called a container.

· Docker is the tool that creates containers. We can run Docker on any OS, i.e. we can install Docker on Windows, but the processes inside the container still run against a Linux kernel.

· The Docker engine (which creates the containers) runs natively on Linux.

· Docker is written in the Go language.

· Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.

· Docker Hub is a cloud-based repository service where people push their Docker container images and pull them anytime, anywhere via the internet. It provides features such as publishing your images as private or public, and it is mainly used by DevOps teams. It is free to use and available for all operating systems. It is like storage where we store images and pull them when they are required.

· Containers are packages of software that contain all of the necessary elements to run in any environment. In this way, containers virtualize the operating system and run anywhere, from a private data center to the public cloud or even on a developer’s personal laptop. From Gmail to YouTube to Search, everything at Google runs in containers.

Note: if I'm writing application code in Python, the container would have the Python runtime with the version that is needed, all the Python libraries that I am using, and the application code itself, all bundled into a single container image. Now we can use this image directly to execute our application on any operating system or machine, and we can also run multiple copies of it.

Note: Your computer doesn't understand Python natively, it only understands machine code. To get your machine to run Python code you need some way to convert it into machine code. The programs, libraries, and configuration that allow you to do this are collectively known as the "Python runtime environment".


Architecture Of Docker:


Docker ecosystem: everything Docker needs to build and run a container (the daemon, client, images, registries, and other dependencies) is collectively known as the Docker ecosystem.

Docker Daemon/Server/Docker engine: The Docker daemon runs on the host OS. It is responsible for building and running containers and managing the Docker services. A Docker daemon can communicate with other daemons.

Docker Client: Docker users interact with the Docker daemon through the client (CLI). The client uses commands and a REST API to communicate with the daemon. When a user runs a Docker command in the client terminal, the client sends it to the daemon. A single Docker client can communicate with more than one daemon.

Docker Host: The Docker host provides the environment to execute and run applications. It contains the Docker daemon, images, containers, networks, and storage.

Docker Hub/Registry: A Docker registry manages and stores Docker images. There are two types of registries in Docker.

1. Public registry: the default public registry is Docker Hub.

2. Private registry: used to share images within an enterprise.

Docker Images: Docker images are read-only binary templates used to create Docker containers; in other words, a single file with all the dependencies and configuration required to run a program.

Ways to create an Image:

1. Pull an image from Docker Hub.

2. Build an image from a Dockerfile.

3. Create an image from an existing Docker container.

Docker container: A container holds the entire package that is needed to run the application. In other words, the image is a template and the container is a running copy of that template. A container is like a lightweight virtual machine. Images become containers when they run on the Docker engine.

Note: We can modify a container but not an image.

Basic commands in Docker:

1. To install docker: yum install docker -y

2. To see all images present in local machine: docker images

3. To search for images on Docker Hub: docker search image_name

4. To download an image from Docker Hub to the local machine: docker pull image_name (like rhel, centos)

5. To create and start a container with a name: docker run -it --name container_name image_name /bin/bash (-i ---> interactive mode: keeps STDIN open so you can interact with the container's shell; -t ---> allocates a terminal interface for interacting with the container)

/bin/bash ----> It means you want to start an interactive bash session within the container; bash is a command-line interpreter that provides a CLI for the user to interact with the container.

/bin ----> the binary directory where executable binaries are located; bash ----> the executable file

6. To check whether the Docker service is running: service docker status

7. To start the container: docker start container_name

8. To go inside the container: docker attach container_name

9. To see all containers (running and stopped): docker ps -a (ps = process status)

10. To see only the running containers: docker ps

11. To stop a container: docker stop container_name

12. To delete a container: docker rm container_name

13. To see the details of the OS (run inside the container): cat /etc/os-release


 

How to create an image from an existing container:

1. Create a container first: docker run -it --name container_name ubuntu /bin/bash

2. Now we want to bring some changes into the container. For that, we go to the tmp directory (or any other directory): cd /tmp

3. Now create one file inside the tmp directory: touch newfile

4. To see the differences between the base image and the changed container: docker diff container_name

Output:

C /root

A /root/bash-history

C /tmp

A /tmp/newfile

5. Now, create image of the container: docker commit container_name new_image_name

6. Now check whether the image was created: docker images

7. Now you can create a new container from the image: docker run -it --name new_container_name new_image_name /bin/bash

8. Now you can check your container.

 

How to create an image from Docker file:

Dockerfile: A text file that contains a set of instructions. It is a way to automate Docker image creation.

Dockerfile components:

FROM----> specifies the base image; this instruction must be at the top of the Dockerfile.

RUN----> executes commands at build time; each RUN creates a layer in the image.

MAINTAINER-----> author/owner/description of the image (now deprecated in favour of LABEL).

COPY-----> copies files from the local system (the Docker host/VM) into the image. We need to provide a source and a destination. (This instruction cannot download files from the internet or any remote repo.)

ADD----> similar to COPY, but it can also download files from the internet and automatically extracts local tar archives on the image side.

EXPOSE----> exposes the ports the container listens on, like port 8080.

WORKDIR----> sets the working directory for the container.

CMD-----> mentions the command to execute, but at container start time (not during the image build).

ENTRYPOINT----> similar to CMD, but has higher priority; when both are present, the CMD values are passed as arguments to the ENTRYPOINT command.

ENV----> sets environment variables (key-value pairs).

ARG----> defines build-time variables (available only while the image is being built, not in the running container).
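Putting these instructions together, a small illustrative Dockerfile (all the names and values below are made up for the example):

```dockerfile
# Base image: must come first
FROM ubuntu
# Build-time variable, visible only during the build
ARG APP_VERSION=1.0
# Metadata about the image author (deprecated, but still accepted)
MAINTAINER nitin
# Environment variable available inside running containers
ENV myname nitin
# Working directory for the following instructions and the container shell
WORKDIR /tmp
# Executed at build time; creates a layer in the image
RUN echo "hello world" > /tmp/testfile
# Document the port the application listens on
EXPOSE 8080
# Executed at container start time
CMD ["/bin/bash"]
```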

 

Steps to create a Dockerfile:

1. Create a file with the name “Dockerfile”.

vi Dockerfile

2. Add instructions in Dockerfile.

FROM ubuntu

RUN echo "nitin" > /tmp/testfile

3. Build an image from the Dockerfile.

docker build -t myimage . ("." represents the current directory, i.e. the build context)

4. Run the image to create a container.

docker run -it --name mycontainer myimage /bin/bash

5. Now you can see the testfile under the tmp directory.
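The five steps above can be scripted; a minimal sketch (the docker commands are left as comments because they need a running Docker daemon):

```shell
# Steps 1-2: create the Dockerfile with the two instructions
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN echo "nitin" > /tmp/testfile
EOF

# Steps 3-5: build and run (with the Docker daemon running):
#   docker build -t myimage .
#   docker run -it --name mycontainer myimage /bin/bash
#   cat /tmp/testfile    # run inside the container
```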

Example: Dockerfile creation

FROM ubuntu

WORKDIR /tmp (when the container is created, you will land in the /tmp directory by default)

RUN echo "hello world" > /tmp/testfile (a testfile containing hello world will be created in the /tmp dir)

ENV myname nitin (nitin is stored in the myname variable; when you echo $myname you will find nitin)

COPY testfile1 /tmp (testfile1 from the local VM will be saved in the container's /tmp dir)

ADD test.tar.gz /tmp (test.tar.gz will be extracted, i.e. unzipped, into /tmp)
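The COPY and ADD lines above assume testfile1 and test.tar.gz already exist next to the Dockerfile; a sketch to create them before building (the file contents are made up for the example):

```shell
# Create the build-context files referenced by COPY and ADD above
echo "sample content" > testfile1
mkdir -p testdir && echo "archived" > testdir/afile
tar -czf test.tar.gz testdir    # ADD will auto-extract this inside the image
rm -r testdir
# Then build: docker build -t myimage .
```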


 

Volume in Docker

· Volume is simply a directory inside our container.

· Firstly, we have to declare the directory as a volume and then share that volume.

· Even if we stop the container, we can still access the volume.

· A volume is first created in one container.

· You can declare a directory as a volume only while creating the container.

· You can't create a volume from an existing container.

· You can share one volume across any number of containers.

· A volume will not be included when you update an image.

You can map a volume in two ways:

1. Container <------> Container

2. Container <-------> Host

 

Benefits of Volume

· Decoupling the container from storage (it means if the container is deleted, the volume will not be deleted).

· Share a volume among different containers.

· Attach a volume to containers.

· On deleting a container, its volume is not deleted.

 

Creating Volume from dockerfile and sharing with another container:

1. Create a Dockerfile and write:

FROM ubuntu

VOLUME ["/myvolume1"]

2. Then create an image from this Dockerfile.

docker build -t myimage . ("." indicates the current working directory)

3. Now create a container from this image and run it.

docker run -it --name container1 myimage /bin/bash

4. Now run ls and you will find myvolume1.

5. Now, share the volume with another container.

Container1<-------->Container2

docker run -it --name container2 --privileged=true --volumes-from container1 ubuntu /bin/bash

6. Now after creating container2, myvolume1 is visible there. Whatever you do in the volume, all the data will be reflected in the volume of the other container.

touch /myvolume1/samplefile (run this in container2)

docker start container1

docker attach container1

Now you can see samplefile in the myvolume1 dir.

Creating a volume by using a command and sharing it with another container:

1. Create a container with a volume through the command.

docker run -it --name container1 -v /volume2 ubuntu /bin/bash

2. Run ls and you will find volume2.

3. Go to the volume2 and create a file

4. Create one more container and share volume2.

docker run -it --name container2 --privileged=true --volumes-from container1 ubuntu /bin/bash

5. Now you are inside container2; run ls and you can see volume2 there.

6. To verify, you can now create a file inside this volume2 and check container1's volume2 dir.

 

Sharing a volume between the host OS and a container:

Note: In this case we will not share the volume directly; we will map the volume directory to a directory present on the local VM.

1. First verify which files are inside the ec2 dir:

/home/ec2

2. Now we will create the mapping between ec2 and the volume so that both act as a single volume (it means if you do anything in the volume it will be reflected in the ec2 directory and vice versa).

docker run -it --name hostcont -v /home/ec2:/volume --privileged=true ubuntu /bin/bash

3. cd /volume

4. Run ls; you will see all the files of the host machine.

5. To cross-check, you can create a file in the volume and check the local VM; you should find the same file there.

Some other Commands of docker:

· docker volume ls

· docker volume create <volumename>

· docker volume rm <volumename>

· docker volume prune # Removes all unused Docker volumes.

· docker volume inspect <volumename> # To check the details of volume

· docker container inspect <containername> # To check the details of container

 

How can we establish a connection between a user (on the internet) and a container:






1. docker run -td --name container_name -p 80:80 ubuntu # a container will be created and port 80 of the host is mapped to port 80 of the container, which establishes the connection.

2. docker ps

3. docker port container_name

output: 80/tcp (container) -----> 0.0.0.0:80 (host)

4. docker exec -it container_name /bin/bash # To go inside the container

5. Now update ubuntu

apt-get update

6. Now install the Apache server

apt-get install apache2 -y

7. Go to the default path

cd /var/www/html

echo "hello welcome to my server" > index.html # we made a default webpage: index.html

8. Start apache2

service apache2 start

9. Copy the IP of the host machine (a.a.a.a:80; by default all requests go to port 80) and open it in a browser.

10. For Jenkins we need to map port 8080 of the host to port 8080 of the container. (If the port is not allowed, we need to go to the security group of the VM and allow it.)
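Steps 4-8 above can also be baked into a Dockerfile so the image comes with Apache preinstalled; a minimal sketch (the image name and page text are illustrative, and apachectl -D FOREGROUND keeps Apache running as the container's main process):

```dockerfile
FROM ubuntu
# Install Apache at build time instead of inside a running container
RUN apt-get update && apt-get install -y apache2
# Default webpage, as in step 7
RUN echo "hello welcome to my server" > /var/www/html/index.html
# Document the HTTP port
EXPOSE 80
# Run Apache in the foreground as the container's main process
CMD ["apachectl", "-D", "FOREGROUND"]
```

Build and run with docker build -t myapache . and docker run -d --name webcont -p 80:80 myapache, then browse to the host's IP as in step 9.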

 

Difference between docker attach and docker exec?

Both commands are used to get inside a container, but the difference is that docker exec creates a new process in the container's environment, while docker attach just connects the standard input/output/error of the main process inside the container to the corresponding streams of the current terminal.

docker exec is specifically for running new things in an already started container.

Difference between expose and publish (-p) in docker:

Basically you have three options:

1. Specify neither EXPOSE nor -p

2. Only specify EXPOSE

3. Specify EXPOSE and -p

1. If you specify neither EXPOSE nor -p, the service in the container will only be accessible from other containers.

2. If you EXPOSE a port, the service in the container is not accessible from outside Docker, but other Docker containers can access it, so this is good for inter-container communication.

3. If you EXPOSE and -p a port, the service in the container is accessible from anywhere, even outside Docker.

4. If you use -p but do not EXPOSE, Docker does an implicit EXPOSE. This is because if a port is open to the public, it is automatically also open to other Docker containers; hence -p includes EXPOSE.
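As a sketch of how the options map onto files and commands: EXPOSE lives in the Dockerfile, while publishing happens at run time with -p (the image name and server command below are placeholders, not a real service):

```dockerfile
FROM ubuntu
# Option 2: EXPOSE only documents the service port for other containers
EXPOSE 8080
# Placeholder command; illustrative only
CMD ["some-server"]
```

At run time, docker run image_name leaves you at option 2 (inter-container access only), while docker run -p 8080:8080 image_name is option 3 and publishes the port to the host; -p implies an EXPOSE even if the Dockerfile has none (point 4).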

