Deploying FaustCTF Vulnbox on Hetzner

Hetzner sponsored the infrastructure of FaustCTF 2025 and also provided a 10€ coupon for every team that wanted to host their vulnbox on their infrastructure for free.

Message from the FaustCTF organisers on each team’s profile page.

In this article, we will go through the steps to deploy the vulnbox provided by the FaustCTF organizers on a Hetzner server. You can create your Hetzner account using our referral link.

Introduction

The FaustCTF is an Attack/Defense competition organized yearly. In order to participate in the competition, the registered teams have to self-host the vulnbox (a VM with the competition’s services) provided by the organizers.

The organizers offer the vulnbox in two formats:

  • an OVA container – a format commonly used with virtualization software like VirtualBox
  • a QCOW2 image – a QEMU format compatible with multiple hypervisors and some cloud providers

Unfortunately, Hetzner doesn’t directly support either of these formats, but their rescue mode can be utilized to prepare the server. Since this approach was not clearly documented on the internet, we will go through it step by step.

Create Account and Redeem Code

We will create a new Hetzner account to be used for our team and redeem the code provided by the FaustCTF organizers. If you wish, you can create your account following our referral link.

After creating your account, to redeem the code, head to the Console, click “Usage” on the navigation bar, and click “Redeem Code”. A pop-up will appear where you can enter the code.

You can redeem Hetzner codes by visiting the “Usage” section on Hetzner Console.

In our case, using the code provided by the FaustCTF organizers, we got 10€ of credits, enough to host our vulnbox for the 8 hours of the competition.

Starting a server

To start preparing our server, we will set up a simple Ubuntu VPS to use as a base and then convert it into the vulnbox. To do so, while on the Hetzner Console, go to “Projects”, select the “Default” project, and under “Servers” click “Add Server”.

For the new server, select the “Location” of your preference (e.g. Falkenstein at eu-central), select Ubuntu as the “Image”, and for “Type” pick the “Shared vCPU” option with the “x86 (Intel/AMD)” variant in combination with “CPX41” (8 vCPUs, 16GB RAM, 240GB SSD, but you can scale this later). Apart from the “Name”, which you can change, leave the rest at their defaults. (Note that the RAM selected in this step should be large enough to hold the vulnbox download on the RAM disk later on, through the rescue mode.)

After creating the server, click on it to visit its management panel.

Operating in the Rescue Mode

Let’s continue by enabling the rescue mode on the server we prepared. You can do that by visiting the “Rescue” tab and clicking “Enable rescue & power cycle” (if asked for a public key, ignore it and continue). Once enabled, you will see new credentials on the screen; copy them, as we will use them in a bit (you will lose them if you refresh the page). Then open the “Console” through the “Actions” dropdown at the top right of the page. A console pop-up will appear, and you will be able to log in with the credentials you copied.

The first problem one may face when using the rescue mode is that the keyboard has a German layout (e.g. typing “z” may be interpreted as “y”), causing problems when copy-pasting commands. So let’s fix that by changing the keyboard layout.

We will start by running dpkg-reconfigure keyboard-configuration, which you can paste as dpkg/reconfigure kezboard/configuration so that the German keymap translates it into the correct command. In the configuration menu, select the Generic 104-key PC keyboard, and then for the language select other > English (US) > English (US); you can leave the rest as is. As soon as the configuration is done, execute setupcon, and the keyboard will now be in English (US).
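The translation between the intended command and the string you actually paste is mechanical: y and z are swapped, and the key that yields / on a US layout yields - on the German one. For any other command you need before fixing the layout, tr can prepare the paste-able string (a small sketch covering only these three characters; other punctuation also moves on QWERTZ):

```shell
# Map each character of the intended command to the key that produces it under
# the German layout: 'y' <-> 'z' are swapped, and typing '/' produces '-',
# so to obtain '-' in the console we must paste '/'.
echo 'dpkg-reconfigure keyboard-configuration' | tr 'yz-' 'zy/'
# prints: dpkg/reconfigure kezboard/configuration
```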

Let’s now download the vulnbox OVA container provided by the CTF organisers. You can use wget to do that; just be careful when pasting the URL, as the German layout may enter it as https;// instead of https://. When the download completes, extract the OVA using tar -xvf vulnbox.ova, and then delete the OVA file using rm vulnbox.ova to save space.
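As a side note, an OVA file is just a tar archive bundling the VM descriptor (.ovf) and its disks (.vmdk), which is why plain tar can unpack it. The roundtrip below demonstrates this on a dummy archive (the file names are placeholders; with the real vulnbox, only the last two commands are needed):

```shell
# Build a dummy OVA: an OVA is simply a tar archive of an .ovf descriptor plus .vmdk disks
touch vulnbox.ovf vulnbox-disk1.vmdk
tar -cf vulnbox.ova vulnbox.ovf vulnbox-disk1.vmdk
rm vulnbox.ovf vulnbox-disk1.vmdk

# The steps from the article: extract the contents, then delete the archive to save space
tar -xvf vulnbox.ova
rm vulnbox.ova
```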

Now we have the vulnbox’s disk in the form of a VMDK file, and we can clone it onto our server’s disk using qemu-img, converting it and writing the result directly to /dev/sda, where the main disk of the server is (this will take a minute or two).

qemu-img convert -O raw vulnbox-disk1.vmdk /dev/sda
sync
reboot
Write the vulnbox disk onto the server’s disk, then sync and reboot.

The server should now be ready to boot as the vulnbox. You can now copy the random root password and connect over SSH to configure the vulnbox for use with your team.

Conclusion

To sum up, we were able to utilise the VMDK disk file of the provided OVA container to easily import the FaustCTF vulnbox on a Hetzner server. During the competition, we had no problems with our vulnbox, and our total credit consumption was around 3€ (running the CPX41 for about 8 hours).

Fantastic Docker Hiccups and How to Fix Them

If you have encountered unexpected issues while working with Docker… you are in the right place!

Whether you’re a seasoned Docker user or just getting started, you may have encountered some unexpected “hiccups” along the way. In this article, we’ll analyse various Docker hiccups that I have experienced while developing and administrating servers, and try to provide practical solutions to fix them.

Buckle up and get ready to tackle those Docker challenges head-on!

Hiccups Covered

Here is a list of small problems that we will try to fix:

  • Huge Container Logs

I plan to update this article from time to time with more small Docker problems that I find interesting.

Huge Container Logs

As a naive Docker user, I was under the impression that each container’s logs were automatically trimmed to minimise disk space usage. But this is not the case: Docker, by default, keeps all the logs. I had to find that out the hard way one day, when some services started throwing error 500; on further inspection, I found that a server hosting 2 GitLab containers had run out of disk space (each instance had generated 39GB of logs).

To address the issue, I found that I could configure each of my containers to limit its log files in size and number. For projects deployed with a docker-compose.yml, one can configure the logging behavior of each service to apply these limits. Here is an example:

services:
  example:
    image: hello-world:latest
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
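If you would rather not repeat these options in every compose file, the same limits can be set globally for all newly created containers via Docker’s daemon configuration, typically /etc/docker/daemon.json (the daemon must be restarted afterwards, and existing containers keep their old settings until recreated):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```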

Running a Docker Server on AWS securely

Let’s say you want to run a Docker server on the cloud. Amazon Web Services (AWS) and its Elastic Compute Cloud (EC2) is one option. It’s not the cheapest one, but it’s chosen by many.

Preparing an EC2 Instance

To start with, let’s prepare the EC2 instance that will be hosting our Docker server. Head to your AWS portal, log in and then find the EC2 Dashboard (you can do that by searching for “EC2” on the top search bar). Make sure you have selected the appropriate location (e.g. Frankfurt) from the dropdown list on the top right corner of the page, and then click the “Launch instance” button.

Let’s now configure the instance to be launched. To start with, we should give a name to the instance so that we can recognize it and select an OS image to install on the instance (we will be choosing the latest stable Ubuntu version).

Following that, you will have to select the instance type of the EC2 instance. The instance type defines your EC2 instance’s hardware specifications. For our purposes, we will choose the “M5a” instance type family, powered by AMD EPYC 7571 processors, and select the smallest instance size available (namely “m5a.large”, featuring 2 vCPUs and 8 GiB of memory) for the server (we can upgrade it easily later on). You can find more information about the various EC2 instance types here.

Moving on, you will have to select a key pair to install on the instance. This will allow you to access the server easily through SSH. If you don’t have one already, create one and store the private key on your computer.

The next crucial step is to configure the network settings of the EC2 instance. Make sure that your instance is connected to the appropriate VPC and that the selected subnet has internet access so that you can expose the instance’s services to the public internet.

The last step of setting up our new EC2 instance is to configure its storage. For our example, I will be setting it to 200 GiB of storage, but you should be configuring that according to your needs.

Finally… we are done! Time to launch the new EC2 instance…

Installing Docker

Installing Docker on an Ubuntu EC2 instance is relatively trivial, as you just have to follow the official guide. Here are the commands you will need to run (based on the 2023 version of the guide):

sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Now the Docker server should be up and running!

Securing the Docker Server

Unfortunately, by default, the containers are able to request and retrieve the AWS metadata information. This may result in the leak of credentials that a malicious container can use to gain access to the host server (the AWS EC2 instance running the Docker containers).

Fortunately, to leak this metadata, an attacker first has to achieve Remote Code Execution (RCE) on a container or find a Server-Side Request Forgery (SSRF), which may not be trivial depending on the software running in the container. Yet in some cases, like CTF competitions, it is usually expected that users will get such access to a container, making the whole server vulnerable by default.

To fix this issue, we can configure the AWS metadata service to accept requests only from the host machine and not from the containers running inside it. To do so, we can filter requests based on their network hops, limiting the accepted hop count to 1. You can set this option for your instance by running the following command (replace %your_instance_id% with your instance ID):

aws ec2 modify-instance-metadata-options --instance-id %your_instance_id% --http-put-response-hop-limit 1

Please note that this may not work if the containers are not running in the default bridge network mode!

Installing LibreNMS on Docker

As its website explains, LibreNMS is “a fully featured network monitoring system that provides a wealth of features and device support”. It can be used to monitor servers and services to make sure they are functioning as intended.

Setting up LibreNMS

To set LibreNMS up, we will follow the official guide, correcting some of the commands and installing some additional software.

First, we will download the latest LibreNMS Docker files from the librenms/docker GitHub repository, then keep the compose example and delete the rest of the files:

mkdir ./librenms-temp
cd ./librenms-temp
wget https://github.com/librenms/docker/archive/refs/heads/master.zip
unzip master.zip
cd ..
cp -r ./librenms-temp/docker-master/examples/compose ./librenms
rm -rf ./librenms-temp
cd librenms

We can now optionally edit the .env file (e.g. to change the default database password):

nano .env

Now let’s launch the LibreNMS containers:

sudo docker compose -f compose.yml up -d

As soon as all the containers are up, visit http://localhost:8000/ (or replace localhost with your server’s IP) and initialise LibreNMS by creating an admin account.

Installing Nagios plugins on LibreNMS Docker

After completing the LibreNMS setup, we can shut it down temporarily to install the Nagios plugins. To shut LibreNMS down, we can turn off its containers:

sudo docker compose -f compose.yml down

Let’s now install nagios-plugins, which will allow us to create some service checks inside LibreNMS. To do so, we will create a temporary Ubuntu container (named tmp), install nagios-plugins in it, copy the binaries we need into the LibreNMS files, and then delete the container:

sudo docker run --name tmp -v $(pwd)/librenms:/opt/librenms -i -t ubuntu bash
apt update
apt install nagios-plugins -y
cp -P /usr/lib/nagios/plugins/* /opt/librenms/monitoring-plugins/
exit
sudo docker rm tmp

Then, we can start LibreNMS again:

sudo docker compose -f compose.yml up -d

The plugins will now be available inside the “Add Service” menu, under “Type”.

Setting up Website checks

Now that Nagios plugins are installed, we can use the Nagios check_http plugin to monitor the status of our websites.

Certificate Check

Let’s create a service to check our website certificate.

  • Click + Add Service from the menu
  • Give a name to the service
  • Select a device from the list (e.g. the server hosting the website)
  • Select http as a Check Type
  • Input the domain of your website inside the Remote Host input
  • Input the parameters --sni -S -p 443 -C 30,14 -t 20 inside the Parameters input
  • Click Add Service

This will check the certificate of the website, generating a critical alert if it is about to expire in less than 14 days, or a warning if it is about to expire in less than 30 days.
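The 30 and 14 passed to -C are day thresholds compared against the certificate’s expiry date. The same day arithmetic can be sketched in plain shell (a rough illustration using GNU date and a hypothetical far-future expiry date; check_http reads the real date from the certificate itself):

```shell
# Hypothetical expiry date, for illustration only
expiry="2099-01-01"
now=$(date +%s)
end=$(date -d "$expiry" +%s)
days=$(( (end - now) / 86400 ))

# Mirror the -C 30,14 semantics: critical under 14 days, warning under 30
if [ "$days" -lt 14 ]; then
  echo "CRITICAL: certificate expires in $days days"
elif [ "$days" -lt 30 ]; then
  echo "WARNING: certificate expires in $days days"
else
  echo "OK: certificate expires in $days days"
fi
```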

Website Check (HTTP/HTTPS)

Let’s create a service to check our website status over HTTP or HTTPS.

  • Click + Add Service from the menu
  • Give a name to the service
  • Select a device from the list (e.g. the server hosting the website)
  • Select http as a Check Type
  • Input the domain of your website inside the Remote Host input
  • Input the parameters inside the Parameters input
    • to check an HTTP website, insert: -E -p 80 -t 20
    • to check an HTTPS website, insert: --sni -E -S -p 443 -t 20
  • Click Add Service

This will check the website and generate an alert in case it throws an error or if the website is unresponsive.

Sum up

LibreNMS is an easy-to-set-up tool that can assist you in monitoring your servers and services. Plus, it can be easily deployed using Docker. You can give it a try! 😊

Installing Portainer on Docker

I found Portainer to be just what I needed to manage the containers of my small Docker servers. You can install it easily as a container running on the same Docker server that you want it to manage.

Setting Portainer up

To install it, you first need to create a volume so that it can store your accounts and preferences:

sudo docker volume create portainer_data

Then you can deploy the container using the official Portainer image (please note that in the following command, I removed the -p 8000:8000 port mapping as I will only be using the HTTPS protocol to connect to it):

sudo docker run -d -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

Now, you can visit the deployed Portainer app through HTTPS (on port 9443) to create an admin account. For example, if your server is located at 10.10.1.30, you can access Portainer at:

https://10.10.1.30:9443/

Please note that your browser will not trust the certificate, thus you will have to “Accept the risks” and proceed.

Updating Portainer

Updating Portainer to the latest version is relatively simple. First, you will have to pull the latest version of the container using:

sudo docker pull portainer/portainer-ce:latest

Then you can just recreate the container without deleting the volume holding your configuration:

sudo docker stop portainer
sudo docker rm portainer
sudo docker run -d -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

As simple as that! 🙂

Installing Gogs Git Server on Raspberry Pi

Git is a must-have utility for programmers. It allows you to track code changes and easily share your code with others, enabling remote collaboration. Online git services like GitHub, GitLab, and Bitbucket are nowadays accessible to everyone, though sometimes the need to self-host a private service arises.

Gogs is an open-source git server, light enough to run on a Raspberry Pi. It is written in Go and provides pre-compiled binaries for ARM architectures.

In this tutorial, we will install Gogs on a Raspberry Pi 2. So, first, connect to your Raspberry, and let’s start!

Update your system

To start with, update your system and install any missing dependencies.

sudo apt update
sudo apt install wget unzip git -y

Prepare for installation

We will create a special user called git to operate the Gogs server. The following command will create the user and disable its password:

sudo adduser --system --shell /bin/bash --gecos "User for managing of gogs server" --group --disabled-password --home /home/git git

The next thing we have to do is download the pre-compiled Gogs package. Check the latest versions available at https://dl.gogs.io/. At the time of writing, the latest version is 0.12.3, so we downloaded the gogs_0.12.3_linux_armv7.zip package (the armv7 architecture is compatible with Raspberry Pi 2, 3, and 4).

sudo su -c 'su git -c "wget https://dl.gogs.io/0.12.3/gogs_0.12.3_linux_armv7.zip -O ~/gogs_download.zip"'
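If you are unsure which package matches your board, the architecture reported by the kernel is a quick hint (on a 32-bit OS on a Pi 2/3/4 this typically prints armv7l, matching the armv7 packages; a 64-bit OS reports aarch64 instead):

```shell
# Print the machine hardware name of the running kernel
uname -m
```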

After the download completes, unzip the package and then you may delete it.

sudo su -c 'su git -c "unzip ~/gogs_download.zip -d ~/"'
sudo su -c 'su git -c "rm ~/gogs_download.zip"'

Start the server

Now let’s set up the Gogs service to manage the server. Download the service script from the Gogs repo:

sudo wget https://raw.githubusercontent.com/gogs/gogs/main/scripts/systemd/gogs.service -O /lib/systemd/system/gogs.service

And then, enable the service and run it.

sudo systemctl enable gogs
sudo service gogs start

Follow the web installer

Now browse to the web installer located at http://<your-raspberry-ip>:3000/install and complete the installation steps.

Setup your Argon One as a server

Argon One is one of the best Raspberry Pi cases: it combines decent cooling, easy access to the GPIO pins, a power button, and an awesome design.

By default, the built-in fan is inactive, and the device does not automatically boot when plugged into power. Ideally, when running your Raspberry Pi as a server, you expect it to reboot after a blackout. For this, let’s go through some configuration on our Raspberry Pi to control the Argon One case for our server.

Installing Argon Config

The first thing to do is to download and run the argon1.sh script, which installs a utility that helps us configure the case’s fan:

curl https://download.argon40.com/argon1.sh | bash

I also uploaded the script on GitHub as a Gist here in case the argon40 website is offline. Now the argonone-config command should be available on your system.

Configuring Argon’s Fan

You can configure the fan behavior using the argonone-config command. Here is an example, setting the fan to always on:

thanos@dinodevs:~$ argonone-config
--------------------------------------
Argon One Fan Speed Configuration Tool
--------------------------------------
WARNING: This will remove existing configuration.
Press Y to continue:y
Thank you.

Select fan mode:
  1. Always on
  2. Adjust to temperatures (55C, 60C, and 65C)
  3. Customize behavior
  4. Cancel
NOTE: You can also edit /etc/argononed.conf directly
Enter Number (1-4):1

Fan always on.
thanos@dinodevs:~$

By following the printed instructions you will also be able to configure a custom fan behavior.

Configuring Power & Power Button Behavior

The next step is to change the behavior of the case when plugged in, so that our device boots automatically. To do that, we have to run:

i2cset -y 1 0x01a 0xfe

To restore the power behavior back to the default one, where you have to press the power button for the device to boot, you can run:

i2cset -y 1 0x01a 0xfd

If for any reason you want to uninstall the argonone utility, run:

argonone-uninstall

So now you know how to set up your Raspberry Pi with the Argon One case as a server!