Before I begin, I must disclose that I’m in no way an expert in the use of Docker. Much of what I write about here is self-taught and possibly not the most efficient way of running Docker, but it works for me – your experience may differ!
What is Docker?
Docker is a platform that allows developers to build, test, and deploy applications using containers. It packages an application and its dependencies into a container, which can then be run on any system with Docker installed.
Containers?
A container is a lightweight, standalone, executable package that contains everything an application needs to run, including the code, runtime, system tools, libraries, and dependencies.
Unlike a virtual machine, which requires a full operating system to run applications, containers “borrow” resources from the underlying host operating system. This means containers are much more lightweight than virtual machines and consume only the resources they need, unlike a VM, which often contains resources that are never used.
In order to run virtual machines, you need an application called a hypervisor – this application manages the virtual machine(s) you deploy and how they interact with the underlying host or hardware (depending on the type of hypervisor you use).
Docker, on the other hand, achieves this with Docker Engine. Docker Engine is the core component that builds, runs, and manages containers. It consists of a server (dockerd), a set of APIs, and a command-line interface (CLI) used to manage containers.
You can build your own containers if you are skilled enough, but for most people, using pre-built containers is the option to choose.
Pre-built containers are typically found on Docker Hub (https://hub.docker.com).

Installing docker
Installing Docker is a simple process, but it will differ depending on the operating system you use. As mentioned, I have deployed Ubuntu Server on my PC, and Docker was one of the snap options offered as I deployed the server operating system, so I didn’t need to install Docker separately.
If you want to install Docker manually, there are full instructions on the Docker website for whichever Linux distribution you run (https://docs.docker.com/engine/install/).
For Windows users, you will need to install Docker Desktop (which actually runs as a virtual machine) – instructions here: https://docs.docker.com/desktop/
Deploying containers
So, now that you have Docker installed, the next thing is to deploy a container.
Containers can be deployed in various ways, although Docker Compose is the recommended option.
Docker Compose is a tool for defining and running multi-container applications.
Docker Compose simplifies the control of your entire application stack, making it easy to manage services, networks, and volumes in a single configuration file. You can then create and start all the services from your configuration file.
Configuration files (a.k.a. Compose files) are written in YAML (“YAML Ain’t Markup Language”), which is an easy language to use and allows for a great deal of flexibility.
Most containers you download will come with either a docker-compose.yml or compose.yml file for you to edit to define your container as you wish. These are the files the docker compose utility uses to build and configure your container.

The default name for a Compose file is compose.yaml (preferred) or compose.yml, placed in the working directory of your container. Compose also supports docker-compose.yaml and docker-compose.yml for backwards compatibility with earlier versions. If both files exist, Compose prefers the canonical compose.yaml.
You can place the configuration data for multiple containers in one compose.yaml file if you wish, or keep them separate – it’s your choice. As long as you follow the correct syntax for the file, you won’t have issues.
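As a rough sketch, a single compose.yaml defining two containers might look like the following – the service names, images, and ports here are purely illustrative, so substitute your own:

```yaml
services:
  homepage:                                  # first container
    image: ghcr.io/gethomepage/homepage:latest
    ports:
      - "3000:3000"
  whoami:                                    # second container, same file
    image: traefik/whoami:latest
    ports:
      - "8080:80"
```

Running docker compose up from the directory containing this file would start both containers together.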
To configure your container, there are a couple of things you need to know.
- The environment section of the compose.yml file allows you to set specific values for your container, such as the user context the container will run under. Typically on a Linux system, the first user account created will have user and group IDs of 1000, so setting these values in the environment section will cause the container to run under that user’s context, as opposed to the root user, for example.
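Many pre-built images (for example, those from linuxserver.io) read PUID and PGID environment variables for exactly this purpose. A sketch, assuming your chosen image supports these variables (the image name is hypothetical):

```yaml
services:
  myapp:
    image: example/myapp:latest    # hypothetical image
    environment:
      - PUID=1000    # user ID of the first-created Linux account
      - PGID=1000    # group ID of that account
```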
- When specifying the port you want your container to be accessible on, the value to the left of the colon is the external (host) port. So in the example below, the container communicates internally via port 3000 but is mapped to port 80 externally, so to connect to it, you would use port 80. This is very useful if you have multiple containers that would otherwise want to use the same port.
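A sketch of what that port mapping might look like (the service and image names are illustrative):

```yaml
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    ports:
      - "80:3000"    # host (external) port 80 -> container (internal) port 3000
```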

- Similarly, if you want your container to be able to access any storage outside of the container, you place the path to that volume on the host to the left of the colon.
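A sketch of such a volume mapping (the service and image names are illustrative):

```yaml
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    volumes:
      - /home/mark/docker/homepage:/app/config    # host path : container path
```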

In the example above, the external path /home/mark/docker/homepage maps to the /app/config folder in the container. Any files in the external path will automatically be available at the path inside the container.
- If your host device loses power, your containers will need to be restarted once power is restored. To avoid having to do this manually, you should use a restart option. In the example below, the option is to restart the container until a docker compose stop command is issued.
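That behaviour corresponds to the unless-stopped restart policy; a sketch (image name illustrative):

```yaml
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    restart: unless-stopped    # restart automatically until explicitly stopped
```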

Docker Command Line Interface (CLI)
The Docker CLI lets you interact with your Docker Compose applications through the docker compose command and its subcommands. Using the CLI, you can manage the lifecycle of your multi-container applications defined in the compose.yaml file.
The CLI commands enable you to start, stop, and configure your applications. Some key Docker CLI commands include:
- docker compose up – to start a container
- docker compose stop – to stop a container
- docker compose logs – to examine a container’s log file
- docker compose ps – to see all running containers
There are many more commands you can use with Docker, but for now the ones above should be enough to get your first containers up and running.
In part 3, I’ll discuss the containers I am running, and how I have them configured.