Docker: How to create a Home Virtual Lab for Ethical Hacking
If you’re into ethical hacking and looking for a way to set up your own home virtual lab, you’ve come to the right place. In this article, I’ll share my experience with Docker, an amazing containerization platform that has taken the tech world by storm. I’ll guide you through the process of installing it, using it, and managing instances and images, all while keeping things as straightforward as possible.
By the end of this guide, you’ll be well-equipped to use Docker for building a home virtual lab to practice and improve your ethical hacking skills. So, let’s dive in and explore the power of Docker together.
What is Docker?
Docker is an open-source containerization platform that allows developers and IT professionals to package and distribute applications and their dependencies in a consistent, isolated environment called a container.
A container is a standalone, lightweight, and executable unit that packages an application along with its dependencies and required resources. It provides an isolated and consistent environment for an application to run, regardless of the host system. A container is intended to start quickly and to be torn down when an issue is encountered, with another container spun up in its place. Because of this, a container should not be used to store user data; instead, you share files with the host or with other systems.
Containers are created from images, which are the blueprints containing the necessary components for running an application. Containers ensure that an application runs consistently across different computing environments by isolating it from the underlying host system.
Docker offers several advantages over traditional virtual machines (VMs), making it a more efficient and lightweight solution for running applications:
- Resource usage: Containers share the host system’s kernel and use fewer resources, whereas VMs require a full OS instance, consuming more memory and CPU.
- Startup time: Containers have faster startup times compared to VMs, as they don’t need to boot up an entire OS.
- Portability: Containers package applications and dependencies together, ensuring consistent behavior across different environments, while VMs may face compatibility issues.
- Scalability: Docker makes it easy to scale applications by adding or removing containers, while scaling with VMs can be more resource-intensive and time-consuming.
- Size: Container images are typically smaller than VM disk images, making them quicker to transfer, store, and deploy.
Installing Docker
In this article I will assume that you are running Ubuntu or an Ubuntu-based OS (this is what I use). However, if you’re using a different operating system, such as Windows or macOS, you can find detailed installation instructions for Docker in the official Docker documentation:
- Windows: https://docs.docker.com/docker-for-windows/install/
- macOS: https://docs.docker.com/docker-for-mac/install/
Before proceeding, make sure your system is up to date. Docker requires some packages to be installed on your system. Install these dependencies by running:
$ sudo apt-get install ca-certificates curl gnupg apt-transport-https
Next, add Docker’s official GPG key:
$ sudo install -m 0755 -d /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ sudo chmod a+r /etc/apt/keyrings/docker.gpg
Next, use the following command to set up the repository:
$ echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Finally, update the apt package index and install Docker:
$ sudo apt update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
At this point the installation is complete. If you need more details on the procedure, take a look at the official manual: https://docs.docker.com/engine/install/ubuntu/
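Before moving on, it’s worth confirming that the installation actually worked. A quick sanity check is to print the installed version and run Docker’s own hello-world test image, which just prints a greeting and exits:
$ sudo docker --version        # prints the installed Docker version
$ sudo docker run hello-world  # pulls and runs a tiny test image that prints a confirmation message
If you would rather not prefix every command with sudo, you can add your user to the docker group (sudo usermod -aG docker $USER) and log out and back in; the remaining commands in this article are shown without sudo, so they assume this step or an equivalent setup.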
Managing Instances and Images with Docker
Let’s take a look at the basics of managing Docker instances (containers) and images, including how to pull, run, and manage containers, as well as how to manage the images they are based on.
Pulling Docker images
Before running a container, you need to pull the corresponding image from a registry (or, in other words, a repository, like Docker Hub) to your local system. By pulling an image, you make it available for creating and running containers on your machine. The syntax for the docker pull command is as follows:
docker pull <image_name>[:<tag>]
- <image_name>: The name of the image you want to pull. Image names usually consist of a user or organization name, followed by a slash and the image’s actual name, such as ubuntu, python, or nginx. For official images, only the image name is used without the user or organization part (e.g., ubuntu).
- :<tag> (optional): The specific version or variant of the image. If you don’t specify a tag, Docker will use the latest tag by default, which typically corresponds to the most recent stable version of the image.
For example, to pull the latest version of the OWASP Juice Shop image (an intentionally insecure JavaScript web application and an excellent candidate for practicing web hacking), you would run:
$ docker pull bkimminich/juice-shop
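If the pull succeeded, the image is now available locally, and listing it by repository name will show it (the image ID and size will differ on your machine). You can also pull a specific version by appending a tag, for instance the 22.04 tag of the official ubuntu image:
$ docker images bkimminich/juice-shop   # confirm the image was pulled
$ docker pull ubuntu:22.04              # example of pulling a specific tag of an official image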
Running Docker containers
The docker run command is used to create and start a new container from a Docker image. It provides various options to configure the container’s behavior, such as port mapping, running in the background, and setting environment variables. The syntax for the docker run command along with some common options is as follows:
docker run [options] <image_name>[:<tag>] [<command>] [<args>]
- <image_name>[:<tag>]: The name of the image you want to create a container from, with an optional tag to specify the version or variant.
- [<command>] [<args>]: Optionally, you can specify a command to run inside the container, followed by any arguments needed for the command.
Some common options for docker run are:
- -d: Run the container in detached mode, which means the container will run in the background and not occupy the terminal.
- -it: Allocate a pseudo-TTY and keep STDIN open, allowing you to interact with the container using the terminal. Typically used when you want to run an interactive shell inside the container.
- -p <host_port>:<container_port>: Map a network port on the host to a network port inside the container. This is useful when you want to expose a service running inside the container to the host or external networks.
- --name <container_name>: Assign a custom name to the container, making it easier to manage and identify (if not specified, a random name is chosen instead).
- -v <host_path>:<container_path>: Mount a volume from the host system to the container, allowing persistent storage or sharing data between the host and the container.
- -e <key>=<value>: Set environment variables for the container, which can be used to configure the application or service running inside the container.
So, for example, we could run an instance of OWASP Juice Shop called juice-shop and map its internal port 3000 to port 8080 on our host with:
docker run -d -p 8080:3000 --name juice-shop bkimminich/juice-shop
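If everything went well, the container is now running in the background and the application answers on port 8080 of the host (give it a few seconds to finish starting). A quick way to check, using curl (installed earlier as a dependency):
$ docker ps --filter name=juice-shop   # should list the container as "Up"
$ curl -I http://localhost:8080        # should return an HTTP 200 once the app is ready
At that point you can open http://localhost:8080 in your browser and start practicing.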
Managing containers and images in Docker
In Docker, each object (such as containers, images, networks, and volumes) has a unique identifier called an ID. Docker IDs are typically represented as alphanumeric strings and are used when performing operations on specific objects.
While Docker IDs are long and not easily memorable, you can use the first few characters of an ID to reference a specific object, as long as it’s unique among the existing objects of the same type. For example, if you have a container with an ID of 5a5c2db6e5c3a93294af3d8c8a4cb4d4dd9b2e4b8c846b1eab5f5c5ec374ba92, you can reference it using just the first few characters, like 5a5c2db6, as long as there isn’t another container with an ID starting with the same characters.
You can also assign custom names to containers and images using the --name flag when creating a container or the docker tag command when dealing with images. These custom names can be used in place of IDs for most Docker operations.
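As a small illustration of both ideas, the commands below stop a container by the (hypothetical) ID prefix from the example above and then give the Juice Shop image an extra, easier-to-type local name; juice-shop:lab is just an arbitrary tag chosen for this sketch:
$ docker stop 5a5c2db6                              # reference a container by a unique ID prefix
$ docker tag bkimminich/juice-shop juice-shop:lab   # "juice-shop:lab" is an arbitrary local name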
Here are some essential commands for managing Docker containers:
- List running containers: docker ps
- List all containers (running and stopped): docker ps -a
- Stop a container: docker stop <container_id>
- Remove a container: docker rm <container_id>
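For example, to tear down the Juice Shop instance started earlier (assuming you kept the --name juice-shop from the run example), you could do the following:
$ docker stop juice-shop   # stop the running container
$ docker rm juice-shop     # remove it entirely
$ docker ps -a             # the container should no longer be listed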
And here are some essential commands for managing Docker images:
- List images: docker images
- Remove an image: docker rmi <image_id>
- Tag an image: docker tag <source_image> <target_image>
- Push an image to a registry: docker push <image_name>
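Once no container uses it anymore, an image can also be removed to free up disk space. For instance, after removing the juice-shop container as shown above:
$ docker images                      # confirm the image name or ID
$ docker rmi bkimminich/juice-shop   # remove the image
Note that docker rmi will refuse to remove an image that is still referenced by an existing container, even a stopped one, unless you force it.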
Conclusion
Docker is a powerful tool that simplifies application development and deployment by using containers, making it an ideal choice for building a home virtual lab for practicing ethical hacking. We’ve covered the basics of Docker, including what it is, how to install it on Ubuntu Linux, and managing instances and images. Docker’s efficiency, portability, and scalability make it a popular choice among developers and IT professionals alike.
In a follow-up article, we’ll take a closer look at creating custom Docker images, allowing you to tailor your virtual lab environments to specific ethical hacking scenarios. Stay tuned!