Quickly bootstrapping a Minecraft server in AWS

In this post, I'm going to go over how I deployed a Minecraft server running inside a Docker container on an EC2 instance on AWS.

I used to play Minecraft earlier in life, and this week my younger brother told me he was playing online with his friends. That got me feeling nostalgic, so I decided to try to set up my own server just for the lolz.

What this post is (in all honesty)

First of all, let me just say that this is not the best tutorial to follow if you want to run a really high-performance, highly customizable Minecraft server. This is a simple, quick implementation done in a relatively short time, with a minimal configuration setup. Nonetheless, I would say it’s worthy of discussion (one could learn something new, right?). I wouldn’t say this post is particularly complex, but I’ll try to make it clear enough that someone with little cloud knowledge can follow along. This text is more a reflection on why I did things the way I did than a tutorial.

Starting to picture things together

The first step in solving the problem was understanding what sort of application-layer configuration was needed. Did I need to set up a JVM or simply run a binary? How does Minecraft run, after all?

After a quick search, I found out that there is a really well maintained and documented Docker image, itzg/minecraft-server. Even though I didn’t spend much time looking at the Dockerfile, it was obvious that the application runs on top of a JVM. I ran the container right away on my local machine with docker run -p 25565:25565 --name mc -e EULA=TRUE itzg/minecraft-server and then tried to access it from another host inside my LAN via its private IP. (You can find your private IP by running ip addr show or ifconfig.) It worked! Great! Let me say, I don’t own the most powerful laptop in the world, but the server was actually running pretty smoothly.
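By the way, the image gives you quite a few knobs to tweak. Here is a small sketch of a slightly more complete invocation, assuming you want the world data to survive container restarts and a bit more heap for the JVM; the volume name and the 2G value are just examples I picked, and the image’s README documents the full set of options.

# sketch: persist world data in a named volume and give the JVM more memory
docker run -d \
  -p 25565:25565 \
  -v mc-data:/data \
  -e EULA=TRUE \
  -e MEMORY=2G \
  --name mc \
  itzg/minecraft-server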

Running it globally instead of locally (or forcing your friends to come over and play at your house)

Running the server inside your LAN with Docker is incredibly effortless, but how does one go about sharing the server with their friends?

Well, for that to happen the instance running the container needs to be reachable from the Internet, meaning it needs a public IP. There are many ways to make that happen. One option could be, for example, port-forwarding port 25565 from your home router to your host, but that is not the safest thing in the world to do. One could also run the container on ECS, and so on. I eventually decided to run it on EC2 simply because it’s simpler and I’m already familiar with the setup; plus, if necessary, the instance specs (RAM, CPU, etc.) can be quickly upgraded by changing the instance type.

Infrastructure

Terraform seems to be the go-to tool to bootstrap cloud infra these days. I don’t particularly love it, to be honest, but it does have many benefits (read about it here). One of them is the ecosystem of open-source modules, which can save you a lot of searching, thinking, typing and, most importantly, banging your head against the wall. To handle the instance provisioning (installing and configuring software) I used cloud-init, which is an improvement over ad-hoc shell scripts. You can check the “code” on GitHub.

Terraform

I used two really handy modules, terraform-aws-modules/ec2-instance/aws and terraform-aws-modules/security-group/aws. Without going into much detail, this “code” bootstraps a t3.medium instance running Ubuntu 20.04 in the default Virtual Private Cloud. The VPC security group allows inbound traffic on ports 22 (SSH) and 25565 (Minecraft, TCP). Also, we are using SSH keys to manage authentication (with the aws_key_pair resource).
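To make that a little more concrete, here is a rough sketch of what the wiring can look like. Treat it as an illustration rather than the exact code from the repo: the data sources, resource names, the cloud-init.yaml filename and the module outputs are assumptions on my part and may differ depending on the module versions you pin.

# rough sketch, not the exact code from the repo

data "aws_vpc" "default" {
  default = true
}

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "this" {
  key_name   = "minecraft"
  public_key = tls_private_key.example.public_key_openssh
}

module "sg" {
  source = "terraform-aws-modules/security-group/aws"

  name   = "minecraft"
  vpc_id = data.aws_vpc.default.id

  # allow SSH and the Minecraft port from anywhere
  ingress_with_cidr_blocks = [
    { from_port = 22, to_port = 22, protocol = "tcp", cidr_blocks = "0.0.0.0/0", description = "ssh" },
    { from_port = 25565, to_port = 25565, protocol = "tcp", cidr_blocks = "0.0.0.0/0", description = "minecraft" }
  ]
  egress_rules = ["all-all"]
}

module "ec2" {
  source = "terraform-aws-modules/ec2-instance/aws"

  name                   = "minecraft"
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t3.medium"
  key_name               = aws_key_pair.this.key_name
  vpc_security_group_ids = [module.sg.security_group_id]
  user_data              = file("${path.module}/cloud-init.yaml")
}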

Cloud-init and systemd

Systemd is one of the many service managers available in the Linux world; thankfully it’s the default one in Ubuntu. It does much more than running one-off services: it’s responsible for spawning the system’s process tree in the right order when the kernel starts. For this configuration, the most important thing to notice is that our minecraft service depends on the Docker daemon being up and running. In as few as 30 lines of “code”, we make sure the instance packages are up to date (package_upgrade: true), we copy our service’s definition to /etc/systemd/system/minecraft.service, and we install Docker with the get-docker.sh script. After that, we enable and start our service. Also, in the last lines we make sure that the default user ubuntu is in the docker group, which allows it to use Docker without sudo.
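To make that a bit more concrete, here is a sketch of roughly what such a cloud-init file can look like. The unit file contents, the container name and the data path are my own assumptions; the real ~30-line file is in the repo.

#cloud-config
# sketch of a cloud-init config along the lines described above
package_upgrade: true

write_files:
  - path: /etc/systemd/system/minecraft.service
    content: |
      [Unit]
      Description=Minecraft server in a Docker container
      Requires=docker.service
      After=docker.service

      [Service]
      # remove any leftover container, then run the server in the foreground
      ExecStartPre=-/usr/bin/docker rm -f mc
      ExecStart=/usr/bin/docker run --rm --name mc -p 25565:25565 \
        -v /srv/minecraft:/data -e EULA=TRUE itzg/minecraft-server
      ExecStop=/usr/bin/docker stop mc
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

runcmd:
  # install Docker with the get-docker.sh convenience script
  - curl -fsSL https://get.docker.com -o get-docker.sh
  - sh get-docker.sh
  # let the ubuntu user talk to the Docker daemon without sudo
  - usermod -aG docker ubuntu
  - systemctl daemon-reload
  - systemctl enable --now minecraft.service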

Creating the infra

Run terraform apply and it will do the heavy lifting. The commands below will let you retrieve the instance’s public IP and the SSH key.

PUBLIC_IP=$(terraform show -json | jq -r '.values.root_module.child_modules[].resources[] | select(.address == "module.ec2.aws_instance.this[0]") | .values.public_ip')
PRIVATE_SSH_KEY=$(terraform show -json | jq -r '.values.root_module.resources[] | select(.address == "tls_private_key.example") | .values.private_key_pem')
echo "${PRIVATE_SSH_KEY}" > key.pem
chmod 400 key.pem

# by now you can test your tcp connectivity to the instance
nc -zvw3 ${PUBLIC_IP} 22
nc -zvw3 ${PUBLIC_IP} 25565

# to access your instance
ssh -i key.pem ubuntu@${PUBLIC_IP}

# after logging in you can check the cloud-init logs
cat /var/log/cloud-init.log

# you can also run `docker ps` and `docker logs` to make sure all is running smoothly.
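
# the container is managed by the minecraft systemd unit described above,
# so you can also check it through systemd (adjust the unit name if yours differs)
systemctl status minecraft
journalctl -u minecraft -f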

By now anyone should be able to access your server; all you need to do is give them the value of PUBLIC_IP.

· aws, terraform, docker