Docker Crash Course for Absolute Beginners - Sai Charan Paloju

By Sai Charan Paloju

Jun 24, 2023

Docker Crash Course for Absolute Beginners - in this course we will learn: 1. What is Docker? Why was it created? What problems does it solve? Software development before and after Docker; software deployment before and after Docker. Why is Docker a big deal? Why has it become so popular and widely used in IT projects?

2. Docker vs Virtual Machines

3. Differences between Docker and virtual machines

4. Advantages of Docker

5. Install Docker locally

6. Docker Images vs Containers

7. Docker Registry (Public, Private)

8. Run Containers

9. Create your own Image (Dockerfile)

10. Docker Commands (pull, run, start, stop, logs, build)

11. Versioning Images

12. Docker Workflow

1. What is Docker? Why was it created? What problems does it solve?

  • Docker is virtualization software.
  • It makes developing and deploying applications easy.
  • Docker does that by packaging an application into something called a container, which has everything the application needs to run:
  • not just the application code, its libraries and dependencies, but also the runtime and environment configuration.
  • The application and its running environment are both packaged into a single Docker package, which you can easily share and distribute.

Why is Docker a big deal? How did things work before Docker? What problems does it solve?

  • Before Docker, if a team of developers was working on an application, they would have to install all the services that the application depends on, like database services etc., directly on their operating system.
  • Example: if you are developing a JavaScript application and you need a PostgreSQL database, maybe you also need Redis for caching and Mosquitto for messaging, because you have a microservices application.
  • You need all these services locally in your development environment so you can actually develop and test the application.
  • Every developer in the team would then have to go and install all those services, configure them, and run them in their local development environment.
  • And depending on which operating system they are using, the installation process will be different,
  • because installing PostgreSQL on macOS is different from installing it on a Windows machine.
  • Another thing with installing services directly on the operating system, following some installation guide, is that you usually have multiple steps of installation and configuration for the service.
  • So with multiple commands that you have to execute to install, configure, and set up the service, the chance of something going wrong and an error happening is actually pretty high.
  • And this process of setting up a development environment can be pretty tedious depending on how complex your application is.
  • For example, if your application uses 10 services, you'll have to do that installation 10 times, once for each service, and again it will differ within the team based on what operating system each developer is using.

How do containers (Docker) solve some of these problems?

  • With containers you actually do not have to install any of the services directly on your operating system,
  • because with Docker you have that service packaged in one isolated environment; so you have PostgreSQL with a specific version,
  • packaged with its whole configuration inside a container. So as a developer you don't have to go looking for binaries to download and install on your machine; instead you just start that service as a Docker container, using a single Docker command that fetches the container package from the internet and starts it on your computer (see the sketch after this list).
  • And the Docker command will be the same regardless of which operating system you are on, and it will also be the same regardless of which service you are installing.
  • So if your JavaScript application depends on 10 services, you would just run 10 Docker commands, one for each container, and that would be it.
  • So as you can see, Docker standardizes the process of running any service in your development environment and makes the whole process much easier.
  • So you can focus and work more on development instead of trying to install and configure services on your machine.
  • This obviously makes setting up your local development environment much faster and easier than the option without containers. Plus, with Docker you can even have different versions of the same application running in your local environment without any conflict,
  • which is very difficult to do if you are installing the same application with different versions directly on your operating system.
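As a sketch, here is what that single command could look like for PostgreSQL (the version tag and password are just illustrative; POSTGRES_PASSWORD is required by the official postgres image):

# one command, the same on every OS: fetches the image if needed and starts the service
# -d runs it in the background, -p exposes the database port on localhost
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:15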

How can containers improve the application deployment process?

  • Before containers, the traditional deployment process was that the development team would produce an application artifact or package, together with a set of instructions for how to actually install and configure that application package on the server.
  • So you would have something like a JAR file for a Java application, or something similar depending on the programming language used.
  • In addition, of course, you would have some kind of database service or other services your application needs,
  • also with a set of instructions for how to configure and set them up on the server, so that the application can connect to them and use them.
  • So the development team would hand that application artifact or package over to the operations team,
  • and the operations team would handle installing and configuring the application and all its dependent services, like the database for example.
  • Now the problem with this kind of approach is that, first of all, you need to configure and install everything again directly on the operating system, which, as I mentioned in the development context, is very error prone, and you can hit various problems during the setup process.
  • You can also have conflicts between dependency versions, where two services depend on the same library, for example, but with different versions.
  • When that happens, it's going to make the setup process way more difficult and complex.
  • So there are a lot of things that can go wrong when the operations team is installing and setting up the application and its services on a server.
  • Another problem that can arise from this kind of process is miscommunication between the development team and the operations team,
  • because everything is in a textual guide, like an instruction list for how to configure and run the application, or maybe some kind of checklist. There could be cases where developers forget to mention some important configuration step, and when that part fails, the operations team has to go back to the developers and ask for more details and input, and this can lead to back-and-forth communication until the application is successfully deployed on the server.
  • So basically you have this additional communication overhead, where developers have to communicate, in some textual or graphical format, how the application should run,
  • and as I mentioned, this can lead to issues and miscommunication.
  • With containers, this process is actually simplified, because now developers create an application package that doesn't only include the code itself, but also all the dependencies and the configuration of the application.
  • So instead of having to write all of that down in some text document, they just package it all inside the application artifact.
  • And since it is already encapsulated in one environment, the operations people don't have to configure any of this directly on the server.
  • So it makes the whole process way easier, and there is less room for the issues I mentioned previously.
  • The only thing the operations team now needs to do is run a Docker command that gets the container package the developers created and runs it on the server (a sketch follows this list).
  • In the same way, the operations team will run any services that the application needs as Docker containers too, which makes the deployment process way easier on the operations side.
  • The operations team will have to install and set up the Docker runtime on the server before they can run containers, but that's a one-time effort for one technology.
  • And once you have the Docker runtime installed, you can simply run Docker containers on that server.
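As a sketch of the ops side (the application image name below is hypothetical, and the service tag/password are illustrative):

# one-time effort: install the Docker runtime on the server
# after that, deploying the app and its dependent services is just docker commands:
docker run -d -e POSTGRES_PASSWORD=secret postgres:15    # a dependent service
docker run -d -p 80:3000 my-company/my-app:1.0           # the app image the developers built (hypothetical name)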

2. Docker vs Virtual Machines

  • Docker is a virtualization tool, just like a virtual machine,
  • and virtual machines have been around for a long time.

Why is Docker so widely used?

What advantages does it have over virtual machines, and what is the difference between the two?

  • With Docker you don't need to install services directly on the operating system.
  • But in that case, how does Docker run its containers on an operating system?

How is an operating system made up?

An OS is made up of 2 layers:

  1. OS Kernel
  2. OS Applications Layer
  • The kernel is the part that communicates with hardware components like the CPU, RAM, and hard drive,
  • to allocate resources to the applications (in the OS application layer) that are running on the operating system.
  • Those applications (Chrome, MS Office) are part of the OS application layer; they run on top of the OS kernel layer.
  • The kernel is the middleman between the applications and the hardware of the computer.

Now, Docker and virtual machines are both virtualization tools. The question is: which part of the operating system do they actually virtualize?

And that's where the main difference between Docker and virtual machines actually lies:

  • Docker virtualizes the OS application layer. This means that when you run a Docker container, it actually contains the application layer of the operating system, plus some other applications installed on top of that layer (this could be a Java runtime or Python or whatever), and it uses the kernel of the host,
  • because it doesn't have its own kernel.
  • The virtual machine, on the other hand, has the application layer and its own kernel.
  • So it virtualizes the complete operating system, which means that when you download a virtual machine image onto your host, it doesn't use the host kernel; it actually boots up its own.

What does this difference between Docker and virtual machines actually mean?

  • First of all, the size: Docker packages, or images, are much smaller, because they only have to implement one layer of the operating system.
  • Docker images are usually a couple of megabytes; virtual machine images, on the other hand, can be a couple of gigabytes.
  • This means that when working with Docker you actually save a lot of disk space.
  • Second, you can run and start Docker containers much faster than virtual machines,
  • because a virtual machine has to boot up a kernel every time it starts,
  • while a Docker container just reuses the host kernel, and you just start the application layer on top of it.
  • So while a virtual machine needs a couple of minutes to start up,
  • Docker containers usually start up in a few milliseconds.
  • The third difference is compatibility.
  • You can run a virtual machine image of any operating system on any other operating system host.
  • So on a Windows machine you can run a Linux virtual machine, for example,
  • but you can't do that with Docker, at least not directly.
  • What is the problem here?
  • Let's say you have a Windows operating system with a Windows kernel and its application layer,
  • and you want to run a Linux-based Docker image directly on that Windows host.
  • The problem is that a Linux-based Docker image cannot use the Windows kernel;
  • it would need a Linux kernel to run,
  • because you can't run a Linux application layer on a Windows kernel.
  • So that's kind of an issue with Docker. However, if you are developing on Windows or macOS,
  • you will want to run various services as containers,
  • because most containers for the popular services are actually Linux based.
  • It is also interesting to note that Docker was originally written and built for Linux,
  • but later Docker made an update and developed what's called Docker Desktop for Windows and Mac,
  • which made it possible to run Linux-based containers on Windows and Mac computers as well.
  • The way it works is that Docker Desktop uses a hypervisor layer with a lightweight Linux distribution on top of it
  • to provide the needed Linux kernel, and this way makes running Linux-based containers
  • possible on Windows and Mac operating systems.

So this means that for local development you would install Docker Desktop on your Windows or macOS computer to run Linux-based images, since, as I mentioned, most of the popular services, databases, etc. are Linux based.

Install Docker Desktop

  • So we need that, and that brings us to the installation of Docker.
  • To install Docker, go to the official Docker website
  • and follow the installation steps.
  • Because Docker gets updates all the time, the installation changes, so
  • it's always best to refer to the latest installation guide in the official documentation.
  • Docker Desktop has a command line interface (CLI) client and a graphical user interface (GUI) client.
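Once it's installed, a quick sanity check from the terminal looks something like this (hello-world is a tiny official test image):

docker --version        # prints the installed Docker version
docker run hello-world  # pulls the test image if needed, runs it, and prints a confirmation message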

Docker Images vs Containers

  • Docker allows us to package the application with its environment configuration into a package that we can share and distribute easily,
  • just like an application artifact file, such as a ZIP file, tar file, or JAR file,
  • which you can upload to an artifact storage and download on the server, or locally, whenever you need it.
  • The package or artifact that we produce with Docker is called a Docker image.
  • So it's basically an application artifact,
  • but different from a JAR file or other application artifacts: it not only has the compiled application code inside, but additionally has information about the environment configuration; it has the operating system application layer, as I mentioned,
  • and tools like Node, npm, or a Java runtime installed on it, depending on which programming language your application was written in.
  • For example, if you have a JavaScript application, you would need Node.js and npm to run it,
  • so in the Docker image you would have Node and npm already installed.
  • You can also add environment variables that your application needs;
  • for example, you can create directories, create files, or do any other environment configuration, whatever you need around your application.
  • So all of that information is packaged in the Docker image together with the application code,
  • and that's the great advantage of Docker that we talked about.
  • And as I said, that package is called an image.
  • So if that's the image, what is a container then?
  • Well, we need to start that application package somewhere, right?
  • So when we take that package or image
  • and download it to a server,
  • or to our local computer or laptop,
  • we want to run it on that computer;
  • the application has to actually run.
  • And when we run that image on an operating system,
  • and the application inside starts in the pre-configured environment,
  • that gives us a container.
  • So a running instance of an image is a container;
  • a container is basically a running instance of an image.
  • And from the same image, from one image, you can run multiple containers,
  • which is a legitimate use case if you need to run multiple instances
  • of the same application, for increased performance for example.
  • And that's exactly what I was saying:
  • we have images, which are basically application packages,
  • and from those images we can start containers,
  • which are running instances of those images.

As I said, with Docker Desktop, in addition to the graphical user interface (GUI) client we get a command line interface (CLI) client: a Docker client that can talk to the Docker engine. And since we installed Docker Desktop, we should have that Docker CLI available locally.

This means that if you open your terminal, you should be able to execute docker commands, and with docker commands we can do anything.

For example, we can check what images we have available locally with the command below.

List all Docker images: docker images

To check Docker containers:

List running containers: docker ps
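A minimal sketch of these two inspection commands, plus one flag that will come up later:

docker images   # lists the images stored locally (repository, tag, image ID, size)
docker ps       # lists the currently running containers
docker ps -a    # also includes stopped containers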

Docker Registry

  • It's clear that we get containers by running images, but how do we get the images to run containers from?
  • Let's say we want to run a database container, or Redis, or some log-collector service container:
  • how do we get their Docker images?
  • That's where Docker registries come in.
  • There are ready-made Docker images available online in an image storage, or registry;
  • so basically this is storage specifically for Docker-image-type artifacts.
  • Usually the companies developing those services (Redis, MongoDB, etc.),
  • as well as the Docker community itself, create what are called official images,
  • so you know this MongoDB image was actually created by MongoDB itself,
  • or by the Docker community,
  • so you know it's an official, verified image.
  • And Docker itself offers the biggest Docker registry, called Docker Hub,
  • where you can find any of these official images,
  • and many other images that different companies or individual developers
  • have created and uploaded there.
  • You can search for different service images by typing in the search bar; you don't need to sign up for that.
  • Example: redis.
  • For the Docker official images on the Docker Hub registry, a dedicated team is responsible for reviewing and publishing all content in the Docker official images repositories.
  • This team works in collaboration with the technology creators or maintainers, as well as security experts, to create and manage those official Docker images.
  • This way it is ensured that not only are the technology creators involved in the official image creation, but also that Docker security best practices and production best practices are considered in the image creation.
  • You can find images for any service that you want to use on Docker Hub.
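You can browse hub.docker.com in the browser, or search straight from the CLI; a quick sketch:

docker search redis   # lists matching Docker Hub images, flagging which ones are official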

Versioning Images

  • Of course technology changes, and there are updates to the applications of those technologies.
  • So when there is a new version of Redis or MongoDB,
  • a new Docker image will be created.
  • So images are versioned as well,
  • and these versions are called image tags.
  • On the page of each image you have the list of versions,
  • or tags, of that image listed.
  • The latest tag refers to the most recently built image.

Pull an Image

  • First we locate the image that we want to run as a container locally.
  • For example, go ahead and search for the nginx image on Docker Hub; nginx is basically a simple web server with a UI,
  • so we will be able to access our container from the browser to validate that it has started successfully; that's why I'm choosing nginx.
  • The second step, after locating the image, is to pick a specific image tag,
  • and note that selecting a specific version of an image is the best practice in most cases.
  • Let's say we choose version 1.23;
  • so we are choosing that tag, and to download the image
  • we go back to our terminal
  • and execute docker pull nginx:1.23.
  • 1.23 is the latest version of the image that we have on Docker Hub.
  • Execute it.
  • The Docker client will contact Docker Hub and say: I want to grab the nginx image with this specific tag, and download it locally.
  • You can see in the CLI that it pulls the image from the image registry (Docker Hub).
  • Type the docker images command in the CLI;
  • we should see the nginx image locally with image tag 1.23,
  • and some other information, like the size of the image.
  • If we pull an image without a tag, for example with the command docker pull nginx,
  • it will pull the latest nginx image.
  • Type docker images again to see the tag of the image that was downloaded:
  • two nginx images with two different tags.
  • These are two separate images with different versions.
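A sketch of the two pulls side by side:

docker pull nginx:1.23   # explicit tag: the recommended way in most cases
docker pull nginx        # no tag given: Docker resolves this to nginx:latest
docker images            # now lists both nginx:1.23 and nginx:latest locally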

Run An Image

  • Now we have images locally, but obviously they're only useful when we run them in a container environment.
  • Type docker run nginx:1.23
  • to run that image.
  • This command actually starts a container based on the image,
  • and we know the container started,
  • because we see the logs of the nginx service starting up inside the container.
  • We can see the container logs in the console (CLI).
  • Now if we open a new terminal session
  • and type docker ps,
  • we should actually see one container
  • in the running-container list,
  • and we have information about the container:
  • the ID, the image that the container is based on (including the tag), and the name of the container.
  • Press Ctrl+C:
  • the container exits and the process actually dies.
  • Type docker ps:
  • you will see there is no container running.
  • Type docker run -d nginx:1.23
  • to run a container of this image in the background without blocking the terminal.
  • Instead of streaming the logs, it prints the ID of the container; this means the container is running in the background.
  • Type docker ps:
  • we should see the container running again.
  • To see the logs of the container, type docker logs <container-id>.
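Putting those together, a minimal sketch of foreground vs. detached mode:

docker run nginx:1.23        # foreground: blocks the terminal and streams the logs (Ctrl+C stops it)
docker run -d nginx:1.23     # detached: prints the new container's ID and returns
docker logs <container-id>   # fetch the logs of a detached container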

In order to create the nginx:1.23 container:

  1. We first pull the image from the Docker Hub registry with the docker pull nginx:1.23 command.
  2. Then we create a container from the image with docker run nginx:1.23.

But we can actually skip the pull command and execute the run command directly, even if the image is not available locally.

  • Type docker images in the CLI:
  • it shows the images.

Without downloading an image from Docker Hub using the docker pull command, we can directly run an image that exists on Docker Hub. For example:

  • Type docker run nginx:1.22-alpine
  • and press Enter.
  • When you execute this command, Docker first tries to find the image locally; if it cannot find it, it downloads it from Docker Hub and runs it:
  • downloading and running in one command.
  • Check with the docker ps command:
  • now we have multiple nginx containers with different versions.
  • Quit this container (with Ctrl+C, as before).
  • Type docker ps
  • to check the running containers.

Port Binding

  • Each application (image) container has a standard port on which it runs: the nginx application always runs on port 80,
  • and a Redis container runs on port 6379.
  • When you type docker ps,
  • you'll see the running containers of the images (applications), and the port numbers on which they are running.
  • What if I try to access the nginx:1.23 container, which is running on port 80 inside the container, from the browser?
  • Let's try to do that:
  • type localhost:80
  • and hit Enter.
  • You see that nothing is available on this port on localhost.
  • Now we can tell Docker: bind that nginx:1.23 container port (80) to our localhost, on some specific port that I tell you, like port 8080 or port 9000,
  • so that I can access whatever is running inside the container as if it were running on my localhost port 8080.
  • We do that with an additional flag when creating a Docker container.
  • So what we are going to do is stop this container (nginx:1.23)
  • and create a new one.
  • Type docker stop <container-id>,
  • which stops the running container,
  • and then we create a new container:
  • type docker run -d -p 8080:80 nginx:1.23.
  • This is the same application (nginx) with the same version (1.23), run in the background in detached mode (-d), with port binding (publishing the container's application port to the localhost).
  • 8080 is the port I'm choosing.
  • So this flag will actually expose the container to our local network, or localhost,
  • so the nginx process running in the container will be accessible to us on port 8080.
  • So I execute docker run -d -p 8080:80 nginx:1.23
  • and check whether the container is running with docker ps.
  • In the ports section we now see a different value:
  • instead of having just 80, we have this port-binding information.
  • So if you forget which port you chose, or if you have 10 different containers, with docker ps you can actually see on which port each container is accessible on your localhost.
  • So this will be the port: if you go to the browser and type localhost:8080 in the address bar
  • and hit Enter,
  • there you have the Welcome to nginx page,
  • which means we are actually accessing our application.
  • We can see that in the logs as well: docker logs <container-id>.
  • When you type this command, it shows the log that the nginx application produced:
  • it got a request from a Windows machine with a Chrome browser, so we see that our request actually reached the nginx application running inside the container.
  • That's how easy it is to run a service inside a container and then access it locally (a short sketch follows).
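As a sketch, the same check can be done from the terminal, assuming curl is installed:

docker run -d -p 8080:80 nginx:1.23
curl http://localhost:8080   # should return the Welcome to nginx HTML page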

Choosing host port

  • Now, as I said, you can choose whatever port you want, but it's pretty much standard to use the same port on your host as the container is using.
  • So if I were running a MySQL container, which starts on port 3306,
  • I would bind it to localhost port 3306.
  • That's kind of a standard.
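A sketch of that convention (tag and password are illustrative; MYSQL_ROOT_PASSWORD is required by the official mysql image):

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8.0   # host port mirrors the container port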

Start and Stop Containers

  • Now there is one thing I want to point out here, which is that the docker run command actually creates a new container every time;
  • it doesn't reuse a container that we created previously.
  • This means that, since we executed the docker run command a couple of times already,
  • we should actually have multiple containers on our laptop.
  • However, if I enter docker ps,
  • I only see the running container;
  • I don't see the ones that I created but stopped.
  • But those containers actually still exist.
  • So I do docker ps -a:
  • this command gives the list of all containers (running and stopped).
  • docker stop <container-id> - this command stops a running container.
  • docker start <container-id> - this command starts a container that is stopped; it starts a container that was already created.
  • Check whether the container is started or stopped by entering docker ps.
  • docker run --name web-app -d -p 9000:80 nginx:1.23
  • This command creates a container with the name web-app, and runs it.
  • --name web-app: --name is the flag, and web-app is the name we gave the container.
  • Enter docker ps again to check the status and details of this container.
  • docker logs web-app - this command checks the logs of this web-app container.
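The whole lifecycle as one sketch:

docker run --name web-app -d -p 9000:80 nginx:1.23   # create and start a named container
docker ps -a           # list all containers, running and stopped
docker stop web-app    # the name works wherever a container ID does
docker start web-app   # restarts the same container with the same configuration
docker logs web-app    # inspect its logs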

Docker Registries (Public & Private)

  • Docker Hub is called a public image registry:
  • the images that we used are visible and available to the public.
  • But if a company creates its own images of its own applications,
  • of course it doesn't want them to be publicly available.
  • For that, there are private Docker registries,
  • and there are many of them; almost all cloud providers have a service for private Docker registries.
  • For example, AWS ECR (Elastic Container Registry); Google and Azure also have their own Docker registries.
  • Nexus, the popular artifact storage, has a Docker registry.
  • Even Docker Hub has a private Docker registry.
  • On the landing page of Docker Hub
  • you'll see a Get Started form.
  • Basically, if you want to store your private Docker images on Docker Hub,
  • you can create a private registry on Docker Hub,
  • or even create a public registry,
  • and upload your images there.
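A sketch of how pushing to a private registry typically looks (the registry address and image names are hypothetical):

docker login registry.example.com                             # authenticate against the private registry
docker tag my-app:1.0 registry.example.com/my-team/my-app:1.0 # the registry address becomes part of the image name
docker push registry.example.com/my-team/my-app:1.0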

Registry vs Repository

  • AWS ECR is a registry; basically, it's a service
  • that provides storage for images,
  • and inside that registry you can have multiple repositories for all your different applications' images.
  • So each application gets its own repository, and in that repository you can store different image versions, or tags, of that same application.
  • In the same way, Docker Hub is a registry: it's a service for storing images, and on Docker Hub you can have public repositories for storing images that will be publicly accessible,
  • or you can have private repositories for different applications.
  • And again, you can have a repository dedicated to each application.
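In an image reference, those pieces show up as <registry>/<repository>:<tag>; a sketch (the ECR address is a hypothetical example of the AWS naming scheme, and pulling from it would require docker login first):

docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0   # registry / repository : tag
docker pull nginx:1.23   # Docker Hub is the default registry, so it is omitted from the name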

Dockerfile (Create Your Own Images)

  • Companies create custom images for their applications.
  • So how can I create my own Docker image for my application?
  • The use case for that is when I'm done with development, so the application is ready, it has some features,
  • and we want to release it to the end users;
  • so we want to run it on a deployment server.
  • And to make the deployment process easier,
  • we want to deploy our application as a Docker container,
  • along with the database and other services, which are also going to run as Docker containers.
  • So how can we take our finished application code
  • and package it into a Docker image?
  • For that we need to create a definition of how to build an image from our application,
  • and that definition is written in a file called Dockerfile;
  • that's exactly what the file has to be called.
  • Creating a simple Dockerfile is very easy.
  • In this part we're going to take a super simple Node.js application,
  • and we will write a Dockerfile for that Node.js application to create a Docker image;
  • and as I said, it's very easy to do.
  • Looking at the application, I can see a server.js file,
  • which basically just starts the application
  • on port 3000, and it just says "welcome" when you access it from the browser,
  • and we have one package.json file,
  • which contains the dependencies.
  • In the root of the application we're going to create a new file called Dockerfile,
  • and in this Dockerfile we're going to write the definition of how the image should be built from this application.
  • What does our application need?
  • It needs Node installed, because Node runs our application.
  • To start the application: node src/server.js.
  • That is the command,
  • and we need that node command inside the image,
  • and that's where the concept of a base image comes in.
  • A Docker image is based on a base image,
  • which is mostly a lightweight Linux operating system image
  • that has Node, npm, or whatever tool you need for your application installed on top of it.
  • So for a JavaScript application you'll have a Node base image;
  • if you have a Java application, you would use an image that has a Java runtime installed:
  • a Linux operating system with Java installed on top of it;
  • that's the base image.
  • And we define the base image using a directive in the Dockerfile called FROM:
  • we're saying, build this image from this base image.
  • If I go to hub.docker.com
  • and search for node,
  • you'll see the node image, with Node & npm installed inside.
  • Base images are just like other images;
  • you can basically stack and build on top of images in Docker,
  • so they're just like any other image that we saw,
  • and they also have tags, or image versions.
  • So we are going to choose the node image;
  • in the Dockerfile it's FROM node:19-alpine.
  • So that's our base image;
  • that is the first directive in our Dockerfile.
  • This will make sure that when our Node.js application starts in a container,
  • it will have the node and npm commands available inside
  • to run our application.
  • Next we need to install the dependencies of the application; we have just one dependency for the Node.js application that we took:
  • the express library,
  • which means we would have to execute the npm install command,
  • which will check the package.json file,
  • read all the dependencies defined inside, and install them locally in the node_modules folder.
  • So basically we're mirroring the same thing
  • that we would do to run the application locally;
  • we're doing that inside the container, so
  • we would have to run the npm install command inside the container as well.
  • As I mentioned before, most Docker images are Linux based;
  • Alpine is a lightweight Linux operating system distribution,
  • so in the Dockerfile you can write any Linux commands that you want to execute inside the container.
  • And whenever we want to run any command inside the container,
  • whether it's a Linux command, a node command, or an npm command,
  • we execute it using the RUN directive.
  • So that's another directive,
  • and you see that directives are written in all caps.
  • Then comes the command npm install: RUN npm install,
  • which will download the dependencies inside the container
  • and create a node_modules folder
  • inside the container before the application gets started.
  • Think of a container as its own isolated environment:
  • it has a simple Linux operating system with Node and npm installed,
  • and we are executing npm install to install the dependencies.
  • However, we need the application code inside the container as well, right?
  • We need server.js inside, and we need package.json;
  • that's what the npm command will need, to actually read
  • the dependencies. And that's another directive,
  • where we take files from our local computer
  • and copy them into the container;
  • that directive is called COPY, and you copy individual files like package.json
  • into the container, and we can say where in the container,
  • at which location in the file system,
  • they should be copied to.
  • Let's say they should be copied into a folder called /app/:
  • COPY package.json /app/
  • Inside the container:
  • package.json is on our machine;
  • /app/ is inside the container,
  • which is a completely isolated system from our local environment.
  • So we can copy individual files,
  • and we can also copy complete directories.
  • We also need our application code inside, obviously,
  • to run the application:
  • COPY src /app/
  • This directive copies the src folder into /app/:
  • src is on the local system; we are copying it into the container, into the folder called /app/.
  • WORKDIR /app
  • This directive sets the working directory inside the container, so everything that comes after it runs relative to /app.
  • CMD ["node", "server.js"]
  • This is the last directive in the Dockerfile; it defines the command that starts the application when the container runs.

The Dockerfile we created for the Node.js application:

FROM node:19-alpine

COPY package.json /app/

COPY src /app/

WORKDIR /app

RUN npm install

CMD ["node", "server.js"]

This is the complete Dockerfile, which will create the Docker image for our Node.js application,

which we can then start as a container.
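The application code itself isn't shown above; as a minimal sketch consistent with the description (Express as the one dependency, port 3000, a welcome message), src/server.js could look like this. This is a hypothetical illustration, not the author's exact code:

// src/server.js - hypothetical minimal app matching the description above
const express = require('express');   // the single dependency declared in package.json

const app = express();

// respond with a welcome message when accessed from the browser
app.get('/', (req, res) => {
  res.send('Welcome');
});

// the port this app runs on, as described in the text
app.listen(3000, () => {
  console.log('app listening on port 3000');
});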

Build the Image

  • Now we have the definition in the Dockerfile;
  • next we will build the image from this definition.
  • We can use a Docker command to build a Docker image:
  • docker build -t node-app:1.0 .
  • This is the command to build an image from the Dockerfile; the dot at the end tells Docker to use the current directory (where the Dockerfile is) as the build context.
  • node-app is the name of our node application; 1.0 is the version tag we are giving it.
  • Docker builds the image from our Dockerfile.
  • If I check with the docker images command,
  • we should see the node-app image created,
  • with its tag, image ID, creation time, and size.
  • docker run -d -p 3000:3000 node-app:1.0
  • I'm running the image with this docker run command to create a container.
  • If I check with docker ps,
  • we should see our node app running on port 3000.
  • Go to the browser,
  • type localhost:3000
  • into the address bar,
  • and we should see our app.
  • docker logs <container-id>
  • We can check the logs with this command.

Dockerize your own application

  1. Write a Dockerfile
  2. Build the Docker image
  3. Run it as a Docker container
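As one end-to-end sketch, run from the project root where the Dockerfile lives (curl assumed installed):

docker build -t node-app:1.0 .            # build the image from the Dockerfile
docker run -d -p 3000:3000 node-app:1.0   # run it as a container, binding port 3000
curl http://localhost:3000                # should return the welcome message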

By Sai Charan Paloju

Trained AWS Certified Solutions Architect Associate Course SAA-C02, Content Writer/Creator, Master's Degree in Software Engineering, Bachelor's Degree in Computer Science & Engineering, YouTuber (Host/Interviewer/Content Creator/Video Editor), Podcaster (Host/Interviewer/Content Creator/Editor), Technical Writer, Social Media Manager/Influencer, Ex-Professional Cricketer. mailme@smartcherrysthoughts.com | https://smartcherrysthoughts.com/
