<p><em>dimmaski.com · Diogo Fernandes (diogo.fernandes.12@hotmail.com)</em></p>
<h1>Serve Vue from FastAPI in a breeze</h1>
<p><em>2023-01-29 · <a href="https://dimmaski.com/serve-vue-fastapi">dimmaski.com/serve-vue-fastapi</a></em></p>
<p>If you don’t care about text and just want to go straight to the code, I’ll make your life easy, here you go: <a href="https://github.com/dimmaski/fastapi-vue">github.com/dimmaski/fastapi-vue</a>. This repo contains a very simple setup of Vue + Vite being served from a FastAPI endpoint.</p>
<p>Recently, I’ve been playing around with Vue 3, and was looking for a simple way to serve my static assets from inside FastAPI, without losing development efficiency. I’d like to share my current setup with you.</p>
<h2 id="cdn">CDN?</h2>
<p>To CDN or not to CDN 💭? Nowadays most of the SPA assets that your browser pulls come from <a href="https://aws.amazon.com/what-is/cdn/">CDNs</a>. And that’s all fine if you need to scale globally, set up cache, control TTLs, etc. If you don’t have any of those requirements, why not simply serve your static assets from your API?</p>
<h2 id="setup-from-scratch">Setup from scratch</h2>
<h3 id="setup-vue--vite">Setup Vue + Vite</h3>
<p>First of all, build your vue app. I recommend using <a href="https://vitejs.dev/">Vite</a> for bootstrapping.</p>
<pre><code class="language-sh">npm create vite@latest ui -- --template vue
</code></pre>
<p>Once the Vue code is generated, head to your <code>package.json</code> file, and add a <code>watch</code> script running <code>vite build --watch</code> to the <code>scripts</code> section. Then make sure to run <code>npm install</code> followed by <code>npm run build</code>.</p>
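<p>A minimal sketch of doing that from the shell, assuming npm ≥ 7.24 (which ships <code>npm pkg set</code>):</p>
<pre><code class="language-sh"># Register the "watch" script without hand-editing package.json
cd ui
npm pkg set scripts.watch="vite build --watch"
npm install
npm run build
</code></pre>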
<h3 id="setup-fastapi">Setup Fastapi</h3>
<p>Make sure that you have the fastapi package installed.</p>
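<p>If you haven’t, a plain pip install does the trick (package names as published on PyPI). With that in place, the whole API is just a few lines:</p>
<pre><code class="language-sh">pip install fastapi uvicorn
</code></pre>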
<pre><code class="language-python"># main.py
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.mount('/', StaticFiles(directory='../ui/dist', html=True))
</code></pre>
<h3 id="folder-structure">Folder Structure</h3>
<p>Your folder structure should look something like the following.</p>
<pre><code class="language-sh">.
├── api
│ └── main.py
└── ui
├── dist
├── index.html
├── node_modules
├── package.json
├── package-lock.json
├── public
├── README.md
├── src
└── vite.config.js
</code></pre>
<h2 id="serve-locally-with-hot-reload">Serve locally with hot reload</h2>
<p>For local development, you can simply run the command below, which:</p>
<ul>
<li>runs vite in watch mode so that the changes you make to the Vue code are rebuilt into <code>dist</code> and picked up by the fastapi server.</li>
<li>also runs fastapi with hot reload so that changes to the API code are picked up and served by uvicorn.</li>
</ul>
<pre><code class="language-sh">npm run watch --prefix ui & uvicorn api/main:app --reload && fg
</code></pre>
<p>To stop everything, press <code>Ctrl+C</code> twice.</p>
<blockquote>
<p><strong><em>Info:</em></strong> If you want to learn how to deploy your FastAPI as a Lambda function take a look at <a href="https://dimmaski.com/fastapi-aws-sam/">this post</a>.</p>
</blockquote>
<p>🏌🏼♂️ Keep kicking!</p>
<h1>How to get your public IP on terraform</h1>
<p><em>2022-10-17 · <a href="https://dimmaski.com/get-publicip-terraform">dimmaski.com/get-publicip-terraform</a></em></p>
<h2 id="using-http">Using HTTP</h2>
<p>Getting your public IP on terraform using HTTP is simple, but it requires an external dependency. Online you’ll find tons of open APIs like <a href="https://www.ipify.org">https://www.ipify.org</a>, <a href="https://seeip.org/">https://seeip.org</a>, <a href="https://ipinfo.io/">https://ipinfo.io</a>, etc., that you can use. Once you have picked one, simply HTTP GET the JSON response with <code>hashicorp/http</code> and parse the output appropriately.</p>
<pre><code class="language-terraform">terraform {
required_providers {
http = {
source = "hashicorp/http"
version = "3.1.0"
}
}
}
data "http" "ipinfo" {
url = "https://ipinfo.io"
}
</code></pre>
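<p>For reference, this is roughly the JSON shape that <code>jsondecode</code> will be parsing below (assuming ipinfo.io keeps its current response format):</p>
<pre><code class="language-sh">curl -s https://ipinfo.io
# { "ip": "203.0.113.7", "city": "...", ... }
</code></pre>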
<p>This can be extremely useful to enable traffic between your workstation and remote servers, without exposing them to the rest of the internet.</p>
<pre><code class="language-terraform">module "openvpn" {
source = "terraform-aws-modules/security-group/aws"
# (...)
ingress_with_cidr_blocks = [
{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = format("%s/32", jsondecode(data.http.ipinfo.body).ip)
},
]
}
</code></pre>
<h2 id="feeling-frisky-dns">Feeling frisky? DNS?</h2>
<p>If you’re feeling hipster, you might consider using DNS. This still requires an external dependency (DNS server). I don’t recommend it, but who am I to advise on this matter anyway?</p>
<pre><code class="language-terraform">terraform {
required_providers {
dns = {
source = "hashicorp/dns"
version = "3.2.3"
}
}
}
data "dns_a_record_set" "whoami" {
host = "myip.opendns.com"
}
output "ip" {
value = data.dns_a_record_set.whoami.addrs
}
</code></pre>
<p>This is roughly the same as running <code>dig myip.opendns.com @resolver1.opendns.com -4 +short</code>.</p>
<blockquote>
<p><strong><em>NOTE:</em></strong> Take into account that for the above code to work you’ll need to add a new <code>nameserver</code> entry pointing at OpenDNS in your <code>/etc/resolv.conf</code>. You can find the IP of the DNS resolver with dig, e.g. <code>dig resolver1.opendns.com +short</code>.</p>
</blockquote>
<h1>Private Kubernetes Cluster with kops</h1>
<p><em>2022-10-08 · <a href="https://dimmaski.com/kops-private-cluster">dimmaski.com/kops-private-cluster</a></em></p>
<p>I’ve scoured the web looking for a simple tutorial on how to set up kops with a private API, private DNS zone, and private network topology.</p>
<p>Most of the blog posts I’ve come across go with the default configuration or just tweak it at a superficial level.</p>
<p>That’s the motivation for this post. We are going to set up a highly available Kubernetes cluster on AWS using kops and terraform. The cluster will have an unexposed API, private nodes (all running in private subnets), and a private DNS zone.</p>
<h2 id="terraform-setup">Terraform setup</h2>
<p>We’ll use infrastructure as code best practices to make sure our setup is repeatable using terraform.</p>
<p>Below is a bad drawing that tries to explain what the infrastructure will look like. I know of the existence of draw.io, but we are trying to be edgy here, ok!? 🔥</p>
<p align="center"><img src="/assets/images/kops-kubernetes-private-aws.jpeg" /></p>
<p align="center">
Fig.1 - VPC setup to help explain the terraform configuration
</p>
<p>First of all, we’ll set up a VPC, private subnets where our nodes will run, public subnets (these will be necessary to run ELBs for Ingresses and Services of type LoadBalancer), a private DNS zone, and an s3 bucket to store the kops state. We’ll also create a security group that we’ll attach to the internal API ELB. Here is the code:</p>
<pre><code class="language-terraform"># main.tf
provider "aws" {
region = "us-west-2"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "4.32.0"
}
}
}
data "aws_region" "current" {}
locals {
region = data.aws_region.current.name
azs = formatlist("%s%s", local.region, ["a", "b", "c"])
domain = "datacenter.example.com"
cluster_name = "test.datacenter.example.com"
}
module "zones" {
source = "terraform-aws-modules/route53/aws//modules/zones"
version = "2.9.0"
zones = {
(local.domain) = {
vpc = [
{
vpc_id = module.vpc.vpc_id
}
]
}
}
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.16.0"
name = format("%s-vpc", local.cluster_name)
cidr = "10.0.0.0/16"
azs = local.azs
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_dns_support = true
enable_dns_hostnames = true
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
tags = {
Application = "network"
format("kubernetes.io/cluster/%s", local.cluster_name) = "shared"
}
}
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.4.0"
bucket = "kops-state-datacenter"
acl = "private"
versioning = {
enabled = true
}
}
module "kops_internal_lb_secg" {
source = "terraform-aws-modules/security-group/aws"
version = "4.13.1"
name = "Security group for internal ELB created by kops"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = [module.vpc.vpc_cidr_block]
ingress_rules = ["https-443-tcp", "http-80-tcp"]
}
</code></pre>
<p>We’ll need to pick up some information about our infrastructure to template our kops configuration file.</p>
<pre><code class="language-terraform"># outputs.tf
output "region" {
value = local.region
}
output "vpc_id" {
value = module.vpc.vpc_id
}
output "vpc_cidr_block" {
value = module.vpc.vpc_cidr_block
}
output "public_subnet_ids" {
value = module.vpc.public_subnets
}
output "public_route_table_ids" {
value = module.vpc.public_route_table_ids
}
output "private_subnet_ids" {
value = module.vpc.private_subnets
}
output "private_route_table_ids" {
value = module.vpc.private_route_table_ids
}
output "default_security_group_id" {
value = module.vpc.default_security_group_id
}
output "nat_gateway_ids" {
value = module.vpc.natgw_ids
}
output "availability_zones" {
value = local.azs
}
output "kops_s3_bucket_name" {
value = module.s3_bucket.s3_bucket_id
}
output "k8s_api_http_security_group_id" {
value = module.kops_internal_lb_secg.security_group_id
}
output "cluster_name" {
value = local.cluster_name
}
output "domain" {
value = local.domain
}
</code></pre>
<p>By now, you can run <code>terraform apply</code> and bootstrap the base infrastructure.</p>
<h2 id="templating-our-kops-manifest">Templating our kops manifest</h2>
<pre><code class="language-yml"># kops.tmpl
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
name: {{.cluster_name.value}}
spec:
api:
loadBalancer:
type: Internal
additionalSecurityGroups: ["{{.k8s_api_http_security_group_id.value}}"]
dnsZone: {{.domain.value}}
authorization:
rbac: {}
channel: stable
cloudProvider: aws
configBase: s3://{{.kops_s3_bucket_name.value}}/{{.cluster_name.value}}
# Create one etcd member per AZ
etcdClusters:
- etcdMembers:
{{range $i, $az := .availability_zones.value}}
- instanceGroup: master-{{.}}
name: {{. | replace $.region.value "" }}
{{end}}
name: main
- etcdMembers:
{{range $i, $az := .availability_zones.value}}
- instanceGroup: master-{{.}}
name: {{. | replace $.region.value "" }}
{{end}}
name: events
iam:
allowContainerRegistry: true
kubernetesVersion: 1.25.2
masterPublicName: api.{{.cluster_name.value}}
networkCIDR: {{.vpc_cidr_block.value}}
kubeControllerManager:
clusterCIDR: {{.vpc_cidr_block.value}}
kubeProxy:
clusterCIDR: {{.vpc_cidr_block.value}}
networkID: {{.vpc_id.value}}
kubelet:
anonymousAuth: false
networking:
amazonvpc: {}
nonMasqueradeCIDR: {{.vpc_cidr_block.value}}
subnets:
# Public (utility) subnets, one per AZ
{{range $i, $id := .public_subnet_ids.value}}
- id: {{.}}
name: utility-{{index $.availability_zones.value $i}}
type: Utility
zone: {{index $.availability_zones.value $i}}
{{end}}
# Private subnets, one per AZ
{{range $i, $id := .private_subnet_ids.value}}
- id: {{.}}
name: {{index $.availability_zones.value $i}}
type: Private
zone: {{index $.availability_zones.value $i}}
egress: {{index $.nat_gateway_ids.value 0}}
{{end}}
topology:
dns:
type: Private
masters: private
nodes: private
---
# Create one master per AZ
{{range .availability_zones.value}}
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
labels:
kops.k8s.io/cluster: {{$.cluster_name.value}}
name: master-{{.}}
spec:
machineType: t3.medium
maxSize: 1
minSize: 1
role: Master
nodeLabels:
kops.k8s.io/instancegroup: master-{{.}}
subnets:
- {{.}}
---
{{end}}
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
labels:
kops.k8s.io/cluster: {{.cluster_name.value}}
name: nodes
spec:
machineType: t3.medium
maxSize: 2
minSize: 2
role: Node
nodeLabels:
kops.k8s.io/instancegroup: nodes
subnets:
{{range .availability_zones.value}}
- {{.}}
{{end}}
</code></pre>
<blockquote>
<p><strong><em>NOTE:</em></strong> Notice that the <code>api.loadBalancer</code> is of type <code>Internal</code> and that <code>additionalSecurityGroups</code> is defined. Also, notice the <code>dnsZone</code>. For networking we chose to use <code>amazonvpc</code> (find more networking models <a href="https://kops.sigs.k8s.io/networking/">here</a>). Finally, take a look at the <code>topology</code> stanza, where every key (<code>dns</code>, <code>masters</code>, and <code>nodes</code>) is defined as private.</p>
</blockquote>
<p>Now, to template our kops file let’s take a look at the following shell commands.</p>
<p>What do we want to do? We want to create a yaml file with the values that will be used to feed our <code>kops.tmpl</code> file. We also want to initialize some variables that will facilitate the next kops commands (like <code>KOPS_CLUSTER_NAME</code>).</p>
<pre><code class="language-sh">TF_OUTPUT=$(terraform output -json | yq -P)
KOPS_VALUES_FILE=$(mktemp /tmp/tfout-"$(date +"%Y-%m-%d_%T_XXXXXX")".yml)
KOPS_TEMPLATE="kops.tmpl"
KOPS_CLUSTER_NAME=$(echo $TF_OUTPUT | yq ".cluster_name.value")
KOPS_STATE_BUCKET=$(echo $TF_OUTPUT | yq ".kops_s3_bucket_name.value")
echo $TF_OUTPUT > $KOPS_VALUES_FILE
</code></pre>
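<p>Before templating, it doesn’t hurt to sanity-check that the values file actually contains what kops will need (yq v4 syntax assumed, as in the snippet above):</p>
<pre><code class="language-sh">yq '.cluster_name.value, .kops_s3_bucket_name.value' "$KOPS_VALUES_FILE"
</code></pre>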
<p>Once you have the above variables loaded you can issue the kops commands.</p>
<pre><code class="language-sh">kops toolbox template --name ${KOPS_CLUSTER_NAME} --values ${KOPS_VALUES_FILE} --template ${KOPS_TEMPLATE} --format-yaml > cluster.yaml
kops replace --state s3://${KOPS_STATE_BUCKET} --name ${KOPS_CLUSTER_NAME} -f cluster.yaml --force
</code></pre>
<p>Now, you have two options, you can either let kops create your cluster, like so:</p>
<h4 id="let-kops-create-the-cluster">Let kops create the cluster</h4>
<pre><code class="language-sh">kops update cluster --state s3://${KOPS_STATE_BUCKET} --name ${KOPS_CLUSTER_NAME} --yes
</code></pre>
<h4 id="let-kops-create-the-cluster-terraform-code">Let kops create the cluster terraform code</h4>
<p>Or you can let kops create the terraform definition for you, that you can then apply.</p>
<pre><code class="language-sh">kops update cluster --state s3://${KOPS_STATE_BUCKET} --name ${KOPS_CLUSTER_NAME} --out=. --target=terraform
</code></pre>
<p>The above command should create a <code>kubernetes.tf</code> file; you can now run <code>terraform apply</code> to bootstrap your kubernetes cluster.</p>
<h2 id="make-sure-that-the-cluster-is-healthy">Make sure that the cluster is healthy</h2>
<p>Give your cluster a few minutes to bootstrap after issuing the creation command.</p>
<blockquote>
<p><strong><em>NOTE:</em></strong> Now, here is something to take into account: your kubernetes master nodes are now running inside your private subnets, and the ELB created by kops is internal (as intended); plus, the ELB security group that we created only accepts traffic from inside the VPC. So, in order to be able to talk to the API and issue <code>kops</code> commands, you’ll either need a bastion host or a VPN service running in your VPC. Setting up a VPN server is outside the scope of this blog post. AWS has a <a href="https://aws.amazon.com/blogs/awsmarketplace/setting-up-openvpn-access-server-in-amazon-vpc/">nice blog post</a> about it.</p>
</blockquote>
<p>So, assuming that you have a VPN service running in your VPC, connect to it and make sure that your client can resolve DNS.</p>
<p>You can do that by running the following.</p>
<pre><code class="language-sh">dig +short api.test.datacenter.example.com
</code></pre>
<p>Then let kops set up your <code>kubectl</code> context appropriately.</p>
<pre><code class="language-sh">kops export kubeconfig --admin --state s3://${KOPS_STATE_BUCKET}
</code></pre>
<p>Then run the cluster validation command.</p>
<pre><code class="language-sh">kops validate cluster --wait 10m --state s3://${KOPS_STATE_BUCKET}
</code></pre>
<p>Your cluster should now be operational.</p>
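<p>A quick way to double-check, assuming your kubeconfig now points at the new cluster:</p>
<pre><code class="language-sh">kubectl get nodes -o wide
</code></pre>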
<h2 id="delete-the-resources">Delete the resources</h2>
<p>If you are just playing around, don’t forget to delete the infrastructure once you’re done!</p>
<pre><code class="language-sh">kops delete cluster --name ${KOPS_CLUSTER_NAME} --state s3://${KOPS_STATE_BUCKET} --yes
terraform destroy
</code></pre>
<h1>Exploring cloud-init</h1>
<p><em>2022-07-21 · <a href="https://dimmaski.com/exploring-cloud-init">dimmaski.com/exploring-cloud-init</a></em></p>
<p>Let’s explore cloud-init.</p>
<p>I’ve been doing that since yesterday, my main goal was to define an Ansible role
alongside my Terraform code
and pass it to the instance via user-data.
I know that you are cringing right now and saying: The right tool for the job is Packer, why don’t you use it? Also, do you know that
there is a 16KB limit on user-data (on <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html">AWS</a> at least)?</p>
<p>Those are valid points, but sometimes I just want to run the damn Ansible man, I don’t want to bother with the cloud-init syntax and its modules. Also, how am I passing my Ansible code to the instance if not via user-data, exactly? Am I setting up an S3 bucket (plus IAM policies) and fetching it from there? Or am I using a git token so that I can fetch the role directly from a private Git repo? Bloat. Either way, you’ll most likely be inserting some amount of cloud-init in there, so you might as well learn the trade-offs.
Packer? It would be ideal to use it, yes, but the AMI needs to be stored somewhere, and that obviously isn’t free.</p>
Packer? It would be ideal to use it, yes, but the AMI needs to be stored somewhere, and that obviously isn’t free.</p>
<p>So, maybe, just maybe, if all you want to do is pass a small Ansible playbook to your instances and you’re trying to be frugal about it, maybe you can get away by using just cloud-init, dodging the bloat.</p>
<h1 id="cloud-init-pros-and-cons">Cloud-init pros and cons</h1>
<p>Let’s assume the following: we want to use a conjunction of Terraform and Ansible to provision and configure our servers. There are many ways to achieve that.</p>
<p>Across the internet, you’ll find multiple posts that go about using the Terraform provisioners (remote-exec and local-exec) to trigger Ansible. I would advise you not to do so, and so does the <a href="https://www.terraform.io/language/resources/provisioners/syntax">terraform documentation</a>, they provide a very solid explanation for why you should only use remote-exec and local-exec as a last resort, their behavior doesn’t match well with the statefulness of Terraform.</p>
<p>So, we are left with user-data.</p>
<h1 id="but-really-is-it-possible-to-pass-a-ansible-role-via-user-data">But really, is it possible to pass a Ansible role via user-data?</h1>
<p>Well, sadly I don’t think there is a clean way to pass an Ansible role directly via user-data. I’ve tried it, but cloud-init is very specific about how it accepts files. If I passed a file content and a path, cloud-init would flatten the path structure. So <code>files/something.yml</code> would be turned into <code>files_something.yml</code>. Looking at the <a href="https://github.com/canonical/cloud-init/blob/a23c886ea2cd301b6021eb03636beb5b92c429dc/cloudinit/util.py#L326">Github repo</a> we can find where that behaviour is defined.</p>
<p>This behaviour makes it unviable to pass an Ansible role via user-data, because Ansible is very particular about the directory structure of roles.</p>
<p>So, what options do we have left? Well, if you really need to pass an Ansible role, you’re back at square one: you’ll need to pull the role from inside the machine somehow, and you’ll have to deal with authentication and all that jazz.</p>
<h1 id="how-to-use-an-ansible-playbook-with-cloud-init">How to use an Ansible playbook with cloud-init</h1>
<p>If, on the other hand, you define all your Ansible code in a single playbook file, it’s simple to pass it to the instance.</p>
<p>Here is a dummy example of the only cloud-init that you’ll write: it installs Ansible and invokes the playbook. You can now write all your bootstrapping logic in Ansible instead of cloud-init.</p>
<pre><code class="language-yaml">#cloud-config
packages:
- ansible
runcmd:
- ansible-playbook /var/lib/cloud/instance/scripts/playbook.yml
</code></pre>
<p>Our playbook template, notice the <strong>what</strong> variable.</p>
<pre><code class="language-yaml">---
- name: "Just a stupidly small example"
hosts: localhost
connection: local
vars:
what: ${what}
tasks:
- debug:
msg: I'm just a simple debug
</code></pre>
<p>You can then use the <code>cloudinit_config</code> resource to get your cloud-init definition ready on the Terraform side.</p>
<p>Notice that we are invoking <code>templatefile</code> on the playbook. Also, we are doing a little bit of cheating: we’re saying that the <code>content_type</code> is <code>text/x-shellscript</code> when in reality we are providing a YAML file and not a script. You can bother with the proper format if you want to. <a href="https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive">Check the docs on Multi-part archives</a>, which is the functionality that <code>cloudinit_config</code> uses under the hood; notice that in our terraform we are providing multiple <code>parts</code>.</p>
<pre><code class="language-terraform">data "cloudinit_config" "example" {
part {
filename = "playbook.yml"
content_type = "text/x-shellscript"
content = templatefile(
"files/playbook.yml",
{
what = "passed-by-terraform"
}
)
}
part {
content = file("files/cloud-init.yml")
content_type = "cloud-config"
}
}
</code></pre>
<p>Then all you need to do is provide your rendered cloud-init via user-data.</p>
<pre><code class="language-terraform">module "ec2" {
...
user_data = data.cloudinit_config.example.rendered
...
}
</code></pre>
<p>Just a couple of tips for debugging: you’ll most likely find your cloud-init configuration in the <code>/var/lib/cloud/</code> folder. Also, to check the logs look at <code>/var/log/cloud-init-output.log</code>.</p>
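<p>A few commands I reach for when poking around on the instance (all standard cloud-init tooling):</p>
<pre><code class="language-sh"># Overall result of the cloud-init run
sudo cloud-init status --long
# The playbook should have landed in the per-instance scripts folder
ls /var/lib/cloud/instance/scripts/
# Tail the combined output of all modules, including our ansible-playbook run
sudo tail -n 50 /var/log/cloud-init-output.log
</code></pre>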
<h1>How to sort a slice of slices in Go</h1>
<p><em>2022-07-19 · <a href="https://dimmaski.com/sort-slice-of-slices">dimmaski.com/sort-slice-of-slices</a></em></p>
<p>The sort package in the Golang standard library is very powerful; it enables you to sort the standard Go types (int, string, etc.) super easily.</p>
<p>If you need to sort a slice of slices, you can do so by providing an anonymous function to <code>sort.Slice</code>. In this example we intend to sort the slice by the second element of each member of the slice (<strong>h, a, x</strong>).</p>
<p>Let’s remind ourselves that the <strong>>=, <=, <, ></strong> operators can be used to compare strings lexicographically in Go. Like so:</p>
<pre><code class="language-golang">slice := [][]string{
{"2", "h"}, {"3", "a"}, {"1", "x"}
}
sort.Slice(slice, func(i, j int) bool {
return slice[i][1] < slice[j][1]
})
fmt.Printf("%v\n", slice)
</code></pre>
<pre><code class="language-sh">» go run main.go
[[3 a] [2 h] [1 x]]
</code></pre>
<p>Voilà. If you are working with your own types, you can also take advantage of the sort package by making sure that your type implements <a href="https://pkg.go.dev/sort#Interface">this interface</a>.</p>
<h1>3 ingredient gluten-free pancakes</h1>
<p><em>2022-07-17 · <a href="https://dimmaski.com/pancakes">dimmaski.com/pancakes</a></em></p>
<p>I’m going to start a series named <b>recipes for dudes in a rush</b>.</p>
<p>It’ll consist of quick and easy recipes that I’ve kind of assembled and mutated in my own way. Will you end up here via a google search? Not likely. But I see myself getting questioned about these recipes whenever I post my cookings on the boys’ group chat, so why not post about them on my blog?</p>
<p>This is part 1. We have gluten-free oat pancakes.</p>
<h1 id="ingredients">Ingredients:</h1>
<ul>
<li>2 eggs</li>
<li>2 bananas</li>
<li>1 cup of gluten-free oats</li>
</ul>
<h1 id="preparation">Preparation:</h1>
<p>Throw it all together in the blender, mix it, and get them on the pan.</p>
<table>
<tr>
<td> <img src="/assets/images/photo_5951828875954730314_y.jpeg" alt="4" width="280px" height="280px" /></td>
<td> <img src="/assets/images/photo_5951828875954730318_y.jpeg" alt="5" width="280px" height="280px" /></td>
<td> <img src="/assets/images/photo_5951828875954730316_y.jpeg" alt="6" width="280px" height="280px" /></td>
</tr>
</table>
<p>Yep, that’s all there is to it, I usually do about 8 or 9 pancakes at once, and store some of them in the fridge.</p>
<table>
<tr>
<td> <img src="/assets/images/photo_5951828875954730319_y.jpeg" alt="1" width="280px" height="280px" /></td>
<td> <img src="/assets/images/photo_5951828875954730326_y.jpeg" alt="2" width="280px" height="280px" /></td>
<td> <img src="/assets/images/photo_5951828875954730333_y.jpeg" alt="3" width="280px" height="280px" /></td>
</tr>
</table>
<h1>Today I become 26 years old</h1>
<p><em>2022-07-12 · <a href="https://dimmaski.com/today-I-become-26">dimmaski.com/today-I-become-26</a></em></p>
<p>This post is a big rambling about many things that have been rushing through my head for the past week. It’s probably a lot messier than my usual tech posts, but I feel like I’ve captured my state of mind accurately. It’s about growing up, becoming a man, and looking back on my past.</p>
<h2 id="from-boy-to-man">From boy to man</h2>
<p>Sometimes, in movies, the journey of the hero from boy to man is depicted in a flash, totally skipping the adolescent bit. This is the case in Disney’s Tarzan (one of my favorite movies as a little boy). I think that this is very intentional: it shows you that the child was able to reach adulthood without being corrupted, as if the set of good values and traits that the boy possesses fully transitioned into adulthood. This is a very romantic idea, but sadly it isn’t how things worked out for me while growing up, or anyone that I know of for that matter.</p>
<h2 id="my-personal-renaissance">My Personal Renaissance</h2>
<blockquote>
<p><em>I do not much believe in the Renaissance as generally described
by historians. The more I look into the evidence the less trace I
find of that vernal rapture which is supposed to have swept
Europe in the fifteenth century. I half suspect that the glow in the
historians’ pages has a different source, that each is
remembering, and projecting, his own personal Renaissance;
that wonderful reawakening which comes to most of us when
puberty is complete. It is properly called a re-birth not a birth, a
reawakening not a wakening, because in many of us, besides
being a new thing, it is also the recovery of things we had in
childhood and lost when we became boys. For boyhood is very
like the “dark ages” not as they were but as they are represented
in bad, short histories. The dreams of childhood and those of
adolescence may have much in common; between them, often,
boyhood stretches like an alien territory in which everything
(ourselves included) has been greedy, cruel, noisy, and prosaic,
in which the imagination has slept and the most un-ideal senses
and ambitions have been restlessly, even maniacally, awake.</em></p>
<p>– C.S. Lewis, Surprised by Joy</p>
</blockquote>
<p>I’ve concluded that the majority of my integral realizations about life all happened around the time I was 12 or before. Especially in the realm of the nonobjective.</p>
<p>Lewis describes a feeling in the book quoted above, a feeling like a ‘stab of Joy’, a moment of pure crystallized experience that vanishes before you can even grasp it’s happening; a divine experience. I see now that I’ve also experienced this as a Kid, in a different but similar manner, I’ve had experiences that I can only describe as religious, moments in which the purpose of my existence is in full match with the experience that I’m having in that precise moment. These experiences became rarer as I became an adult. I now come to the conclusion that its frequency is related to the sense of pure contemplation that we have earlier in life.</p>
<p>In many ways I feel like my teenage years rotted the groundedness that I used to have as a Kid, I got taller, more educated on sciences and alike subjects, and more experienced in all the things that a man is supposed to be experienced in, but at the same time, I cannot see it as anything other than a defeat. The ego and cynicism of the Teenager stopped him from grasping the full picture, a picture that the Kid was capturing much more brightly.</p>
<p>This boyhood Renaissance that Lewis talks about is very real, at least it was for me, I don’t know why, but I know for sure that in goodness the adult version of myself is much more similar to the younger me, and my worst traits match my teenage self perfectly.</p>
<p>Notice the following, when I talk about the Kid I’m not pointing specifically at the irresponsibility that he bears as a youngster, I’m talking about the things that make a child pure: the altruism, generosity, egolessness, and the infinite drive for exploration that we possess as infants. As for the Teenager, it’s not all bad, he is the wary one, he is clever. He has an idea of how the world works, and how to get things done, in a sense he kind of has everything figured out. He is the snake, but he is also the first who got bit. You get bit once or twice, and that’s enough to suck out the selflessness that you once had. Congratulations, you’re now a full-blown grown Egoist.</p>
<p>At 26, I see myself as these two personas. I try as hard as I can to fetch back that younger drive for exploration, experience, awe, and contemplation. I know that this is the part of me that’s interested in mystery, revelation, and exploration; he is the individual interested in deep semi-answered questions about everything, but specifically in the realm of philosophy, art, and theology.
As for the Teenager, I’ve learned to appreciate his ambition and resourcefulness; he is the ship captain roaming to where the next treasure chest is, but he wants to reclaim it all for himself.</p>
<p>For all intents and purposes, at 26 I know that the Teenager has an infinite amount to learn from the Kid. Maybe it’s time to introduce a third character, the Adult, the reborn Kid, capable of bearing the responsibilities and snake bites that the Teenager despises without perishing.</p>
<p align="center"><img width="500" height="500" src="/assets/images/photo_5940438931923384408_y.jpeg" /></p>
<p align="center">
<b>A kitchen pad that I offered my mother when I was 3.</b>
</p>
<h1>Decorate i3status with custom scripts</h1>
<p><em>2022-03-08 · <a href="https://dimmaski.com/i3status">dimmaski.com/i3status</a></em></p>
<p>Freedom, that’s what you get when running Linux. You get the potential for almost unlimited customizability.</p>
<p>In this post, we’ll see how to add custom information to the i3status bar using our custom scripts. In my case, I wanted to display the count of visitors on my blog on any given day. I’ve used the googleapiclient lib for that, as you can see below.</p>
<p><img src="/assets/images/i3status-bar.jpg" alt="logo" title="i3status bar" /></p>
<p>I’m honestly sorry for forcing you to bulge your eyes over this tiny image on the screen, I’m really bad at image editing, but That! The blue globe, that’s what we’ll be working on - 48 daily views, not bad for a newbie, huh!? The same process can be used for virtually anything that you want to display on i3status.</p>
<p>But how does one config i3status exactly?</p>
<h2 id="i3--i3status-configuration">i3 + i3status configuration</h2>
<p>First of all, make sure to add the i3bar output format to your i3status configuration.</p>
<pre><code class="language-sh"># ~/.i3status.conf
general {
    (...)
    output_format = "i3bar"
    (...)
}
</code></pre>
<p>Update your i3 status_command, pointing to the wrapper script.</p>
<pre><code class="language-sh"># ~/.config/i3/config
bar {
    status_command i3status | ~/scripts/i3status-wrapper.py
    (...)
}
</code></pre>
<p>Adapted from <a href="https://github.com/i3/i3status/blob/master/contrib/wrapper.py">here</a>. This piece of code listens to the JSON-like output of i3status that is piped into it in the invocation above, and adds our custom data.</p>
<pre><code class="language-python">#!/usr/bin/env python3
# i3status-wrapper.py
import sys
import json
from analytics import get_report, initialize_analyticsreporting
SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
KEY_FILE_LOCATION = ''
VIEW_ID = ''
def print_line(message):
""" Non-buffered printing to stdout. """
sys.stdout.write(message + '\n')
sys.stdout.flush()
def read_line():
""" Interrupted respecting reader for stdin. """
# try reading a line, removing any extra whitespace
try:
line = sys.stdin.readline().strip()
# i3status sends EOF, or an empty line
if not line:
sys.exit(3)
return line
# exit on ctrl-c
except KeyboardInterrupt:
sys.exit()
if __name__ == '__main__':
# Skip the first line which contains the version header.
print_line(read_line())
# The second line contains the start of the infinite array.
print_line(read_line())
analytics = initialize_analyticsreporting(KEY_FILE_LOCATION, SCOPES)
while True:
line, prefix = read_line(), ''
# ignore comma at start of lines
if line.startswith(','):
line, prefix = line[1:], ','
j = json.loads(line)
report = get_report(analytics, VIEW_ID)
# abominable data parsing, but this isn't a data-science course, so who cares
views = "🌐 " + \
report['reports'][0]['data']['rows'][0]['metrics'][0]['values'][0] + " views"
# this is where the magic happens
j.insert(0, {
'full_text': '%s' % views,
'name': 'views',
'separator_block_width': 25})
print_line(prefix+json.dumps(j))
</code></pre>
<p>Adapted from <a href="https://developers.google.com/analytics/devguides/reporting/core/v4/quickstart/service-py">here</a>.</p>
<pre><code class="language-sh"># analytics.py
def initialize_analyticsreporting(key_file_location, scopes):
"""Initializes an Analytics Reporting API V4 service object.
Returns:
An authorized Analytics Reporting API V4 service object.
"""
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
credentials = ServiceAccountCredentials.from_json_keyfile_name(
key_file_location, scopes)
# Build the service object.
analytics = build('analyticsreporting', 'v4', credentials=credentials)
return analytics
def get_report(analytics, view_id):
"""Queries the Analytics Reporting API V4.
Args:
analytics: An authorized Analytics Reporting API V4 service object.
Returns:
The Analytics Reporting API V4 response.
"""
return analytics.reports().batchGet(
body={
'reportRequests': [
{
'viewId': view_id,
'dateRanges': [{'startDate': 'today', 'endDate': 'today'}],
'metrics': [{'expression': 'ga:users'}],
}]
}
).execute()
</code></pre>
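<p>To smoke-test the wrapper outside of i3 (paths as configured above; this will hit the Analytics API, so the credentials at the top of the wrapper need to be filled in):</p>
<pre><code class="language-sh">chmod +x ~/scripts/i3status-wrapper.py
i3status -c ~/.i3status.conf | ~/scripts/i3status-wrapper.py
</code></pre>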
<p>That’s it, just copy and paste the wrapper script, tweak it, and you are now able to add custom info to your i3status bar.</p>
<h1>Create locally trusted TLS certs</h1>
<p><em>2022-03-05 · <a href="https://dimmaski.com/https-locally">dimmaski.com/https-locally</a></em></p>
<p>Here I’ll walk you through what is, in my opinion at least, the most painless way to upgrade your local development HTTP server to HTTPS.</p>
<p>We are going to use <a href="https://github.com/FiloSottile/mkcert">mkcert</a>. Why? Well, mkcert sidesteps a series of roadblocks that would land in your lap if you decided to create a stand-alone certificate with openssl - the most annoying of them, by far, would be trust errors on all your browsers and tools like curl, wget, etc.</p>
<p>How does that happen? mkcert will create a local CA and configure your system’s root trust store with it; moreover, it’ll configure your Chrome and Firefox stores if you have them installed. You can then issue certs from your local CA, for whatever domains you’d like.</p>
<h2 id="install">Install</h2>
<pre><code class="language-sh"># You can find the install procedures on the Github repo
# I did the following on my Ubuntu 20.04
>> sudo apt install libnss3-tools
>> sudo curl -L https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64 -o /usr/local/bin/mkcert
>> sudo chmod +x /usr/local/bin/mkcert
</code></pre>
<h4 id="create-the-certificate-authority">Create the Certificate Authority</h4>
<pre><code class="language-sh">>> mkcert -install
Created a new local CA 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊
</code></pre>
<h4 id="generate-certificates-for-your-fake-domain">Generate certificates for your fake domain</h4>
<pre><code class="language-sh">>> mkcert demo.com localhost 127.0.0.1
Created a new certificate valid for the following names 📜
- "demo.com"
- "localhost"
- "127.0.0.1"
The certificate is at "./demo.com+2.pem" and the key at "./demo.com+2-key.pem" ✅
It will expire on 5 June 2024 🗓
</code></pre>
<h4 id="serve-them-along-with-your-api">Serve them along with your API</h4>
<p>Showcasing it using an uvicorn worker.</p>
<p>Optionally: you can add the entry <code>127.0.0.1 demo.com</code> to your <code>/etc/hosts</code> file and bind the server to port 443 (most likely you’ll need to run your server as root for that to happen).</p>
<pre><code class="language-sh">uvicorn app:app --port 443 --ssl-keyfile=./demo.com+2-key.pem --ssl-certfile=./demo.com+2.pem
</code></pre>
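<p>If you want to double-check what mkcert generated before wiring it into the server, openssl can print the relevant fields:</p>
<pre><code class="language-sh">openssl x509 -in ./demo.com+2.pem -noout -subject -issuer -dates
</code></pre>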
<p>By now, all the annoying certificate validation errors, that you’re used to while editing code on your local workstation, are gone.</p>
<pre><code class="language-sh">>> curl -v https://demo.com
* Trying 127.0.0.1:443...
(...)
* Server certificate:
* subject: O=mkcert development certificate; OU=diogo@diogo-ThinkPad
* start date: Mar 5 15:58:45 2022 GMT
* expire date: Jun 5 14:58:45 2024 GMT
* subjectAltName: host "demo.com" matched cert's "demo.com"
* issuer: O=mkcert development CA; OU=diogo@diogo-ThinkPad; CN=mkcert diogo@diogo-ThinkPad
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: demo.com
(...)
< HTTP/1.1 200 OK
< date: Sat, 05 Mar 2022 17:21:40 GMT
< server: uvicorn
< content-length: 25
< content-type: application/json
(...)
</code></pre>
<p>🦝</p>
<h1>Sonic APIs with FastAPI, SQLModel, FastAPI-crudrouter and testcontainers</h1>
<p><em>2021-11-17 · <a href="https://dimmaski.com/fastapi-sqlmodel-crud">dimmaski.com/fastapi-sqlmodel-crud</a></em></p>
<p>In this post, we’ll take a look at three cool FastAPI components that will allow you to write self-documented endpoints with zero boilerplate code. At the end of the post, we’ll also take a peek at an opinionated way of using testcontainers to perform integration tests on your service.</p>
<h2 id="dependency-overview">Dependency overview</h2>
<p><a href="https://fastapi.tiangolo.com/">FastAPI</a> - python framework. It’ll automatically generate your service’s OpenAPI schema based on your pydantic models and routes. Also, FastAPI allows you to use either async or sync routes without enforcing them.</p>
<p><a href="https://sqlmodel.tiangolo.com/">SQLModel</a> - built on top of sqlalchemy and pydantic, by the same striking creator of FastAPI, SQLModel allows you to define your database models (ORM) on top of your pydantic models. Having data validation and model definitions in the same place allows us to write less code.</p>
<p><a href="https://github.com/awtkns/fastapi-crudrouter">FastAPI-crudrouter</a> - automatically generates CRUD routes for you based on your pydantic models and Backends / ORMs. (In this post we’ll take a look at SQLAlchemy since that’s what SQLModel uses by default).</p>
<p><a href="https://testcontainers-python.readthedocs.io/en/latest/">Testcontainers</a> - launch containers in order to preform integration testing.</p>
<h2 id="setup">Setup</h2>
<pre><code class="language-sh"># requirements.txt
fastapi==0.70.0
uvicorn[standard]==0.15.0
sqlmodel==0.0.4
psycopg2==2.9.1
psycopg2-binary==2.9.1
fastapi-crudrouter==0.8.4
testcontainers==3.4.2
</code></pre>
<p>Just <code>mkdir</code> and run the following to set up your directory</p>
<pre><code class="language-sh">virtualenv -p python3.8 -v venv
source venv/bin/activate
pip3 install -r requirements.txt
</code></pre>
<h2 id="application-code">Application code</h2>
<p>Take a look at the code below and the comments under it.</p>
<pre><code class="language-sh"># main.py
import os
from datetime import time
from fastapi import FastAPI
from fastapi_crudrouter import SQLAlchemyCRUDRouter
from sqlmodel import Field, Session, SQLModel, create_engine
class DemoIn(SQLModel):
description: str
init: time
end: time
class Demo(DemoIn, table=True):
id: int = Field(primary_key=True)
DATABASE: str = os.getenv("DATABASE", "db")
DB_USER: str = os.getenv("DB_USER", "user")
DB_PASSWORD: str = os.getenv("DB_PASSWORD", "password")
DB_HOST: str = os.getenv("DB_HOST", "localhost")
DB_PORT: str = os.getenv("DB_PORT", "5432")
SQLALCHEMY_DATABASE_URL = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DATABASE}"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
# Dependency
def get_db():
db = Session(engine)
try:
yield db
finally:
db.close()
demo_router = SQLAlchemyCRUDRouter(
schema=Demo,
create_schema=DemoIn,
db_model=Demo,
db=get_db
)
app = FastAPI()
app.include_router(demo_router)
@app.on_event("startup")
async def startup_event():
SQLModel.metadata.create_all(engine)
</code></pre>
<pre><code class="language-sh"># Run the code
docker run --name postgres -p 5432:5432 -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_DB=db -d postgres:13
uvicorn main:app --reload
</code></pre>
<p>Once we have our database container up, we can launch an uvicorn worker and access our service’s <code>/docs</code> endpoint. We’ll find 6 CRUD routes provided by fastapi-crudrouter, automatically generated. We can then use the Swagger UI in order to trigger our endpoints and make sure everything is working as expected.</p>
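<p>If you prefer the terminal to the Swagger UI, a couple of curls against the generated routes (the <code>/demo</code> prefix comes from the schema name, and 8000 is uvicorn’s default port) show the same thing:</p>
<pre><code class="language-sh">curl -s -X POST localhost:8000/demo -H 'Content-Type: application/json' \
  -d '{"description": "demo", "init": "10:00:00", "end": "18:00:00"}'
curl -s localhost:8000/demo
</code></pre>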
<p><img src="/assets/images/fastapi.png" alt="logo" title="fastapi openapi" /></p>
<p>On <code>main.py</code> we create two classes that inherit from SQLModel: <code>DemoIn</code>, which inherits directly, and <code>Demo</code>, because it inherits from <code>DemoIn</code>. SQLModel takes care of SQLAlchemy and Pydantic for us, under the hood.
We then create an instance of <code>SQLAlchemyCRUDRouter</code>, provide our <code>db</code> session and <code>db_model</code> (defining <code>table=True</code> tells SQLModel to map that class as a table in your database), and provide our <code>schema</code> and optionally <code>create_schema</code>.</p>
<p>Taking a look at the contents of our Postgres container we can see that a table named <code>demo</code> was created with the proper fields (id, description, init and end).</p>
<pre><code class="language-sh">~/tmp » docker exec -it postgres psql -d db -U user -c "\d+ demo"
Table "public.demo"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-------------+------------------------+-----------+----------+----------------------------------+----------+--------------+-------------
description | character varying | | not null | | extended | |
init | time without time zone | | not null | | plain | |
end | time without time zone | | not null | | plain | |
id | integer | | not null | nextval('demo_id_seq'::regclass) | plain | |
Indexes:
"demo_pkey" PRIMARY KEY, btree (id)
"ix_demo_description" btree (description)
"ix_demo_end" btree ("end")
"ix_demo_id" btree (id)
"ix_demo_init" btree (init)
Access method: heap
</code></pre>
<p>Taking a closer look we’ll see that all the columns are indexed with btree, interesting 🤔. Turns out this is actually a <a href="https://github.com/tiangolo/sqlmodel/pull/11">bug</a> which should be on its way to getting fixed (SQLModel is only in the 0.0.4 version at the time of writing).</p>
<h2 id="testing">Testing</h2>
<pre><code class="language-sh"># integration_test.py
import pytest
from fastapi.testclient import TestClient
from sqlalchemy.orm.session import Session
from sqlmodel import Session, SQLModel, create_engine
from testcontainers.core.container import DockerContainer
from testcontainers.core.waiting_utils import wait_for_logs
from main import app, get_db
POSTGRES_IMAGE = "postgres:13"
POSTGRES_USER = "postgres"
POSTGRES_PASSWORD = "test_password"
POSTGRES_DATABASE = "test_database"
POSTGRES_CONTAINER_PORT = 5432
@pytest.fixture(scope="function")
def postgres_container() -> DockerContainer:
"""
Setup postgres container
"""
postgres = DockerContainer(image=POSTGRES_IMAGE) \
.with_bind_ports(container=POSTGRES_CONTAINER_PORT) \
.with_env("POSTGRES_PASSWORD", POSTGRES_PASSWORD) \
.with_env("POSTGRES_DB", POSTGRES_DATABASE)
with postgres:
wait_for_logs(
postgres, r"UTC \[1\] LOG: database system is ready to accept connections", 10)
yield postgres
@pytest.fixture(scope="function")
def http_client(postgres_container: DockerContainer):
def get_db_override() -> Session:
url = f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{postgres_container.get_container_host_ip()}:{postgres_container.get_exposed_port(POSTGRES_CONTAINER_PORT)}/{POSTGRES_DATABASE}"
engine = create_engine(url)
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
yield session
app.dependency_overrides[get_db] = get_db_override
with TestClient(app) as client:
yield client
app.dependency_overrides.clear()
# Test our CRUD routes using the http client provided by FastApi
def test_demo_crud(http_client: TestClient):
post_resp = http_client.post("/demo", json={
"description": "Finally free, found the God in me, And I want you to see, I can walk on water 🌬",
"init": "10:00:00",
"end": "18:00:00",
})
assert post_resp.status_code == 200
get_resp = http_client.get("/demo/1")
assert get_resp.status_code == 200
# Let's transform our json response into a Demo object and preform some validations 🤝
from main import Demo
demo = Demo(**get_resp.json())
assert demo.id == 1
assert demo.end > demo.init
assert "God" in demo.description
assert "Enemy" not in demo.description
</code></pre>
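<p>Running the suite only requires a local Docker daemon for testcontainers to talk to:</p>
<pre><code class="language-sh">pytest integration_test.py -v
</code></pre>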
<p>Above we have integration testing for our service. This is an opinionated way to structure your tests using pytest fixtures, testcontainers, and FastAPI’s <code>TestClient</code>.</p>
<p>First of all, we define a function fixture <code>postgres_container</code>, this function is responsible for launching our postgres containers and making sure that they are ready before being yielded (notice the <code>wait_for_logs</code> call).</p>
<p>We then define <code>http_client</code> (another function fixture), and we provide <code>postgres_container</code> as a dependency for it (notice the dependency injection in the <code>postgres_container: DockerContainer</code> parameter). We could use the same pattern to inject other dependencies needed for our service, think blob storage containers, containers running some sort of message queuing system, etc. Notice how we override <code>get_db</code> with <code>get_db_override</code>; this is crucial so that our http client knows where to point in order to reach our postgres container (notice that each container exposes a different port on the host machine).</p>
<p>With this setup, each of our tests that receive <code>http_client</code> as a dependency will receive their own postgres container and http client, leaving them totally isolated from other tests. <code>test_demo_crud</code> serves as a dummy example for us to understand how to interact with our service.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I hope this post was useful for you. We took a look at a spicy combination of libraries/frameworks that allow for really quick development and testing. I wouldn’t say that this stack is the best choice for production-critical systems, but it might be useful in many scenarios; draw your own conclusions 🤓.</p>
<h1>Deploy FastAPI on AWS Lambda using AWS SAM</h1>
<p><em>2021-10-05 · <a href="https://dimmaski.com/fastapi-aws-sam">dimmaski.com/fastapi-aws-sam</a></em></p>
<p>So you have this FastAPI project, and there aren’t any strings attached to it: you could very well package it inside a docker container and serve it on ECS or EC2. But if your API is a good fit for Lambda, I would argue that you should simply throw it there and forget about it. I’ll walk you through how to deploy an already developed API on AWS Lambda, without performing deep changes to your code logic.</p>
<h2 id="minimal-changes-in-code-logic-required">Minimal changes in code logic required</h2>
<p>There are several ways to architect your endpoints if you want to serve them using Lambda and API Gateway. You can read the official <a href="https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/">AWS word on this matter</a>. In this post, I argue that if your use-case is simple enough you should go with the ‘Monolithic Lambda function’ approach.</p>
<p>This will allow you to:</p>
<ol>
<li>Have a collection of endpoints gathered in a single service</li>
<li>Deploy your existing API with minimal changes to your code logic</li>
<li>Maintain your current development/testing strategy</li>
</ol>
<h2 id="demo-application-structure">Demo application structure</h2>
<pre><code class="language-sh">demo
├── app
│ ├── app.py
│ ├── __init__.py
│ └── requirements.txt
├── README.md
├── template.yaml
└── tests
├── __init__.py
├── requirements.txt
└── test_handlers.py
</code></pre>
<p>The example code for this post is available on <a href="https://github.com/dimmaski/fastapi-sam-poc">Github</a>. We have a simple FastAPI service, a collection of 3 endpoints, and some tests. I’m going for python 3.8, so the local setup goes like this:</p>
<pre><code class="language-sh"># Setup your local env
virtualenv -p python3.8 -v venv
source venv/bin/activate
pip install -r tests/requirements.txt -r app/requirements.txt
# Run the tests
pytest -v
# Serve the API locally using an uvicorn worker
uvicorn app.app:app --reload
</code></pre>
<p>As you can see, at this point our demo service is generic; its structure is not tied to any sort of deployment strategy. It could be deployed on bare-metal, on any container-orchestrator, or on lambda with minimal changes needed.</p>
<h2 id="adding-the-only-lambda-specific-change-to-our-code">Adding the only Lambda specific change to our code</h2>
<p>You’ll see that a handler is being added to the <code>app.py</code> file.</p>
<pre><code class="language-sh">handler = Mangum(app)
</code></pre>
<p>Mangum is just an adapter for ASGI (Asynchronous Server Gateway Interface) applications, in which FastAPI is included, that allows them to run on Lambda/API Gateway. This is the only code-specific change required for our service to be served by AWS Lambda. Once you have this setup, you can edit your <code>template.yaml</code> <code>Properties</code> key.</p>
<pre><code class="language-yaml">Transform: AWS::Serverless-2016-10-31
AWSTemplateFormatVersion: '2010-09-09'
Resources:
AppFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: app/
Handler: app.handler
Runtime: python3.8
Events:
Root:
Type: Api
Properties:
Path: /
Method: ANY
NonRoot:
Type: Api
Properties:
Path: /{proxy+}
Method: ANY
</code></pre>
<p>As you can see, we are instructing SAM to look for the code to deploy in the <code>app/</code> folder and we are pointing it at the proper Mangum handler, <code>app.handler</code>. Under the <code>Events</code> key we are creating two routes:</p>
<ol>
<li>Root - for the root endpoint</li>
<li>NonRoot - that encapsulates all endpoints other than root.</li>
</ol>
<p>Both Events accept <code>ANY</code> HTTP method (PUT, POST, etc). API Gateway will now route every HTTP request, regardless of its HTTP verb or path, to our FastAPI service.</p>
<h2 id="deploy">Deploy</h2>
<p>You are ready to go!</p>
<pre><code class="language-sh"># to deploy your API run - fill-up the prompt that appears with the AWS region that you want to use, etc.
sam build --use-container
sam deploy --guided
# in case you want to take down the whole CloudFormation stack generated by SAM
aws cloudformation delete-stack --stack-name <stack-name>
</code></pre>
<p>SAM should print out your recently created API Gateway endpoint, which you should use to trigger your endpoints.</p>
<pre><code class="language-sh">curl -v https://g1g234234fsfg1k49.execute-api.us-east-1.amazonaws.com/Prod/
curl -v https://g1g234234fsfg1k49.execute-api.us-east-1.amazonaws.com/Prod/ping
curl -v https://g1g234234fsfg1k49.execute-api.us-east-1.amazonaws.com/Prod/ping/ping
</code></pre>
<p>Voilà.</p>
<h1>Reading terraform state with jq</h1>
<p><em>2020-12-05 · <a href="https://dimmaski.com/reading-terraform-state">dimmaski.com/reading-terraform-state</a></em></p>
<p>There are some methods available to fetch state data.
You can read the <code>.tfstate</code> file directly, either by reading the local file or by looking inside your remote backend (most commonly s3). You can also use <code>terraform state show</code> in conjunction with <code>terraform state list</code>, which is an improvement over the former. Both these methods are fine when you just want to take a look at the data, but they are not ideal for your tooling to interact with. You could always set up <code>outputs</code>, but they are not the best fit for sensitive values, for example.</p>
<h2 id="terraform-show--json">Terraform show -json</h2>
<p>For those cases, you probably want to use the <code>terraform show -json</code> command. It’ll output your entire state in json format, and you can then pipe that to jq or any other tool that handles json. I commonly use this to fetch the ssh private keys generated by the <code>aws_key_pair</code> and <code>tls_private_key</code> resources. One usage example would be the following:</p>
<pre><code class="language-sh"># for reference, this example is based on the following .tf file
# https://github.com/dimmaski/terraform-aws-minecraft-server/blob/master/terraform/ssh.tf
PRIVATE_SSH_KEY=$(terraform show -json | jq -r '.values[].resources[] | select(.address == "tls_private_key.example") | .values.private_key_pem')
</code></pre>
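<p>The same pattern works for any attribute in the state. A hypothetical example pulling an instance’s public IP (the resource address here is made up; substitute your own):</p>
<pre><code class="language-sh">terraform show -json | jq -r '.values[].resources[] | select(.address == "aws_instance.example") | .values.public_ip'
</code></pre>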
<p>It’s a fine way of retrieving sensitive data, but I also tend to use this pattern when I want to create some sort of script around a project. For example, you could have a generic script that automatically fetches the IP of one instance, an ssh private key, and runs an ssh command based on the terraform state of your current directory. It ain’t fancy but will save you some typing.</p>
<h1>Quickly bootstrapping a Minecraft server in AWS</h1>
<p><em>2020-11-05 · <a href="https://dimmaski.com/aws-minecraft">dimmaski.com/aws-minecraft</a></em></p>
<p>In this post, we are going to go over how I deployed a minecraft server running inside a docker container on an EC2 instance on AWS.</p>
<p>I used to play minecraft earlier in life, and this week my younger brother told me he was playing online with his friends. This got me feeling nostalgic, so I decided to try to set up my own server just for the lolz.</p>
<h2 id="what-this-post-is-in-true-honesty">What this post is (in true honesty)</h2>
<p>First of all, let me just say that this is not the best tutorial to follow if you want to run a really high performance, highly customizable minecraft server. This is a simple, quick implementation done in a relatively short time, with a minimal configuration setup. Nonetheless, I would say it’s worthy of discussion (one could learn something new, right?). I wouldn’t say that this post is particularly complex, but I’ll try to make it as clear as possible for someone with little cloud knowledge to be able to follow along.
This text is more about my self-reflections on why I did things the way I did than a tutorial.</p>
<h2 id="starting-to-picture-things-together">Starting to picture things together</h2>
<p>The first step in solving the problem was understanding what sort of application-layer configuration was needed. Did I need to set up a JVM or simply run a binary? How does minecraft run, after all?</p>
<p>After a quick search, I found out that there is a really well maintained and documented docker image <a href="https://hub.docker.com/r/itzg/minecraft-server">itzg/minecraft-server</a>. Even though I didn’t spend much time looking at the dockerfile configuration, it was obvious that the application ran on top of a JVM. I ran the container right away on my local machine with <code>docker run -p 25565:25565 --name mc -e EULA=TRUE itzg/minecraft-server</code> and then tried to access it with another host inside my LAN via private-ip. (You can find your private ip running <code>ip addr show</code> or <code>ifconfig</code>). It worked! Great! Let me say, I don’t own the most powerful laptop in the world, but the server was actually running pretty smoothly.</p>
<h2 id="running-it-globally-instead-of-locally-or-force-your-friends-to-come-over-and-play-at-your-house">Running it globally instead of locally (or force your friends to come over and play at your house)</h2>
<p>Running the server inside your LAN with docker is incredibly effortless, but how does one go about sharing the server with their friends?</p>
<p>Well, for that to happen the instance running the container needs to be accessible from the Internet, meaning it needs to have a public ip. There are many ways to make that happen. One option could be, for example, port-forwarding port 25565 from your home-router to your host, but that is not the safest thing to do in the world. One could also run the container on <a href="https://aws.amazon.com/ecs/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&ecs-blogs.sort-by=item.additionalFields.createdDate&ecs-blogs.sort-order=desc">ECS</a>, etc, etc.
I eventually decided to run it on EC2 simply because it’s simpler, and I’m already familiar with the setup; plus, if necessary, the instance specs (RAM, CPU, etc) can be quickly upgraded by changing the instance type.</p>
<h2 id="infrastructure">Infrastructure</h2>
<p>Terraform seems to be the go-to tool for bootstrapping cloud infra these days. I don’t particularly love it, to be honest, but it does have many benefits (<a href="https://www.terraform.io/intro/index.html">read about them here</a>). One of them is the ecosystem of open-source modules, which can save you a lot of searching, thinking, typing, and, most importantly, banging your head against the wall.
To run the instance provisioning (installing and configuring software) I used cloud-init, which is an improvement over ad-hoc shell scripts. You can check the “code” at <a href="https://github.com/dimmaski/terraform-aws-minecraft-server">Github</a>.</p>
<h2 id="terraform">Terraform</h2>
<p>I used two really handy modules, <code>terraform-aws-modules/ec2-instance/aws</code> and <code>terraform-aws-modules/security-group/aws</code>. Without going into much detail, this “code” bootstraps a <code>t3.medium</code> instance running Ubuntu 20.04
in the default Virtual Private Cloud. The VPC security group allows traffic on ports 22 (<code>ssh</code>) and 25565 (<code>minecraft tcp</code>). Also, we are using ssh keys to manage authentication (with the <code>aws_key_pair</code> resource).</p>
<h2 id="cloud-init-and-systemd">Cloud-init and systemd</h2>
<p>Systemd is one of many service managers available in the linux world; thankfully, it’s the default one in Ubuntu. It does much more than running one-off services: it’s responsible for spawning
the system’s process tree in the right order when the kernel starts. For this configuration, the most important thing to notice is that our
minecraft service depends on the docker daemon being up and running.
In as few as 30 lines of “code”, we make sure that our instance packages are up to date (<code>package_upgrade: true</code>), copy our service’s definition to <code>/etc/systemd/system/minecraft.service</code>, and install docker with the <code>get-docker.sh</code> script. After that, we enable and start our service.
Also, in the last lines we make sure that the default user <code>ubuntu</code> is in the docker group, which allows it to use docker without root.</p>
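<p>For clarity, here is roughly what that cloud-init configuration boils down to if you were to do it by hand as shell commands (a sketch; the unit path and script name follow the post, the rest are assumptions based on the standard tooling):</p>
<pre><code class="language-shell"># package_upgrade: true
apt-get update && apt-get upgrade -y
# drop the unit file where systemd looks for it
cp minecraft.service /etc/systemd/system/minecraft.service
# install docker via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# enable and start our service
systemctl enable --now minecraft.service
# let the default user run docker without sudo
usermod -aG docker ubuntu
</code></pre>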
<h2 id="creating-the-infra">Creating the infra</h2>
<p>Run <code>terraform apply</code>; that will do the heavy lifting. The commands below will allow you to retrieve the instance’s public IP and the ssh key.</p>
<pre><code class="language-sh">PUBLIC_IP=$(terraform show -json | jq -r '.values[].child_modules[].resources[] | select(.address == "module.ec2.aws_instance.this[0]") | .values.public_ip')
PRIVATE_SSH_KEY=$(terraform show -json | jq -r '.values[].resources[] | select(.address == "tls_private_key.example") | .values.private_key_pem')
echo "${PRIVATE_SSH_KEY}" > key.pem # quoting preserves the PEM's newlines
chmod 400 key.pem
# by now you can test your tcp connectivity to the instance
nc -zvw3 ${PUBLIC_IP} 22
nc -zvw3 ${PUBLIC_IP} 25565
# to access your instance
ssh -i key.pem ubuntu@${PUBLIC_IP}
# after logging in you can check the cloud-init logs
cat /var/log/cloud-init.log
# you can also run `docker ps` and `docker logs` to make sure all is running smoothly.
</code></pre>
<p>By now, anyone should be able to access your server; all you need to do is provide them with the value of <code>PUBLIC_IP</code>.</p>
Handling dates in shell scripts2020-10-13T00:00:00-05:00https://dimmaski.com/shell-date<p>In shell scripts, sometimes it’s useful to act on events based on their timestamps.
For example, let’s say you want to kill some type of process that has been running for longer than 5 hours in your system, or
you want to check some remote API and act on the value of a timestamp in the response payload.
When using <a href="https://tools.ietf.org/html/rfc3339">rfc3339</a> it’s simple to handle that in bash.</p>
<p>Let’s take a look at the following function.</p>
<pre><code class="language-shell">job_running_for_too_long() {
  export max_running_time="5 hours"
  export modified_date="2020-10-09T13:50:00Z"
  # true when the current time is past modified_date + max_running_time
  [ "$(date +%s)" -gt "$(date --date "${modified_date} +${max_running_time}" +%s)" ]
}
</code></pre>
<p>In this case, a job is defined as running for too long if the current time
is past <code>2020-10-09T18:50:00Z</code>.</p>
<p>When provided with the <code>+%s</code> format, the date command returns the number of
seconds since <code>1970-01-01 00:00:00</code> UTC. The number of seconds can
then be used to perform arithmetic operations. In this case, we are
checking if 5 hours have passed since <code>2020-10-09T13:50:00Z</code>.</p>
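<p>If you want to see the moving parts in isolation, GNU date will happily show both the resolved instant and its epoch value.</p>
<pre><code class="language-shell"># resolve the relative-time expression to a human-readable instant
date --utc --date "2020-10-09T13:50:00Z + 5 hours" +%Y-%m-%dT%H:%M:%SZ
# => 2020-10-09T18:50:00Z
# the same instant as epoch seconds, ready for arithmetic
date --date "2020-10-09T13:50:00Z + 5 hours" +%s
</code></pre>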
<p>The above function was tested on Debian 10.</p>
Managing Jaeger and Elasticsearch ILM2020-04-28T00:00:00-05:00https://dimmaski.com/ilm-elasticsearch-jaeger<h1 id="purpose">Purpose</h1>
<p>By default, Jaeger stores data in daily indices, which may not be the ideal approach for many use cases. If your performance and retention requirements are a little more strict,
you can delegate the responsibility of rolling over your indices to Elasticsearch.</p>
<p>The <a href="https://www.jaegertracing.io/docs/1.17/deployment/#elasticsearch-rollover">jaeger documentation</a> already describes a use case with ILM (I encourage you to read it),
but it’s kinda funky since it relies on periodic API calls encapsulated inside the <code>jaegertracing/jaeger-es-rollover</code> docker image.
The same outcome can be achieved by configuring elasticsearch beforehand, as we will see in this post.</p>
<h1 id="demo-setup">Demo setup</h1>
<p>Jaeger uses two indices, <strong>jaeger-span</strong> and <strong>jaeger-service</strong>, and all the operations described below need to be applied to both. For the sake of simplicity, and to keep the post as concise and clear as possible, I will use <strong>jaeger-span</strong> as the example (all you need to do is replicate the exact same API requests, replacing span with service).
I’m running elasticsearch locally in a docker container. You can follow along by running the following docker-compose.</p>
<pre><code class="language-docker">version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
environment:
- xpack.security.enabled=false
- discovery.type=single-node
ports:
- 9200:9200
</code></pre>
<h1 id="create-a-lifecycle-policy">Create a lifecycle policy</h1>
<p>The policy below states that a rollover will happen when the current index either reaches a max_size of 50 GB or a max_age of 30 days; it also states that
indices older than 90 days will be deleted.</p>
<pre><code class="language-shell">curl -X PUT "localhost:9200/_ilm/policy/jaeger-span?pretty" -H 'Content-Type: application/json' -d'
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_size": "50GB",
"max_age": "30d"
}
}
},
"delete": {
"min_age": "90d",
"actions": {
"delete": {}
}
}
}
}
}
'
</code></pre>
<h1 id="create-an-index-template">Create an index template</h1>
<p>Adapted from <a href="https://github.com/jaegertracing/jaeger/blob/master/plugin/storage/es/mappings/jaeger-span-7.json">here</a>.
Notice that the lifecycle parameter refers to the <code>jaeger-span-policy</code> we just defined, and sets the rollover_alias to <code>jaeger-span-write</code>.</p>
<pre><code class="language-shell">curl -X PUT "localhost:9200/_template/jaeger-span?pretty" -H 'Content-Type: application/json' -d'
{
"index_patterns": ["jaeger-span-*"],
"settings":{
"index": {
"number_of_shards":2,
"number_of_replicas":2,
"mapping": {
"nested_fields": {
"limit": 50
}
},
"requests": {
"cache": {
"enable": true
}
},
"lifecycle": {
"name": "jaeger-span-policy",
"rollover_alias": "jaeger-span-write"
}
}
},
"aliases": {
"jaeger-span-read": {}
},
"mappings":{
"dynamic_templates":[
{
"span_tags_map":{
"mapping":{
"type":"keyword",
"ignore_above":256
},
"path_match":"tag.*"
}
},
{
"process_tags_map":{
"mapping":{
"type":"keyword",
"ignore_above":256
},
"path_match":"process.tag.*"
}
}
],
"properties":{
"traceID":{
"type":"keyword",
"ignore_above":256
},
"parentSpanID":{
"type":"keyword",
"ignore_above":256
},
"spanID":{
"type":"keyword",
"ignore_above":256
},
"operationName":{
"type":"keyword",
"ignore_above":256
},
"startTime":{
"type":"long"
},
"startTimeMillis":{
"type":"date",
"format":"epoch_millis"
},
"duration":{
"type":"long"
},
"flags":{
"type":"integer"
},
"logs":{
"type":"nested",
"dynamic":false,
"properties":{
"timestamp":{
"type":"long"
},
"fields":{
"type":"nested",
"dynamic":false,
"properties":{
"key":{
"type":"keyword",
"ignore_above":256
},
"value":{
"type":"keyword",
"ignore_above":256
},
"tagType":{
"type":"keyword",
"ignore_above":256
}
}
}
}
},
"process":{
"properties":{
"serviceName":{
"type":"keyword",
"ignore_above":256
},
"tag":{
"type":"object"
},
"tags":{
"type":"nested",
"dynamic":false,
"properties":{
"key":{
"type":"keyword",
"ignore_above":256
},
"value":{
"type":"keyword",
"ignore_above":256
},
"tagType":{
"type":"keyword",
"ignore_above":256
}
}
}
}
},
"references":{
"type":"nested",
"dynamic":false,
"properties":{
"refType":{
"type":"keyword",
"ignore_above":256
},
"traceID":{
"type":"keyword",
"ignore_above":256
},
"spanID":{
"type":"keyword",
"ignore_above":256
}
}
},
"tag":{
"type":"object"
},
"tags":{
"type":"nested",
"dynamic":false,
"properties":{
"key":{
"type":"keyword",
"ignore_above":256
},
"value":{
"type":"keyword",
"ignore_above":256
},
"tagType":{
"type":"keyword",
"ignore_above":256
}
}
}
}
}
}
'
</code></pre>
<h1 id="creating-the-first-index">Creating the first index</h1>
<p>We need to create the first index.</p>
<pre><code class="language-shell">curl -X PUT "localhost:9200/jaeger-span-000001" -H 'Content-Type: application/json' -d'
{
"aliases" : {
"jaeger-service-write": {
"is_write_index": "true"
}
},
"settings" : {
"number_of_shards" : 2,
"number_of_replicas" : 2
}
}
'
</code></pre>
<p>From now on, our rollover policy will take care of the index rotations, naming them incrementally (e.g. jaeger-span-000002, jaeger-span-000003, …).</p>
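<p>You can sanity-check the setup at any time with Elasticsearch’s <code>_cat</code> APIs, which list the matching indices and the aliases pointing at them.</p>
<pre><code class="language-shell">curl -s 'localhost:9200/_cat/indices/jaeger-span-*?v'
curl -s 'localhost:9200/_cat/aliases/jaeger-span-*?v'
</code></pre>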
<p>NOTE: you need to run Jaeger with the flags <code>--es.use-aliases=true</code> and <code>--es.create-index-templates=false</code>, so that it writes through the alias and doesn’t override the template we created.</p>
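<p>For reference, a minimal sketch of passing those flags to the collector (the image tag matches the docs version linked above; the host networking and ES URL are assumptions, so adjust them to your environment):</p>
<pre><code class="language-shell">docker run --net host -e SPAN_STORAGE_TYPE=elasticsearch \
  jaegertracing/jaeger-collector:1.17 \
  --es.server-urls=http://localhost:9200 \
  --es.use-aliases=true \
  --es.create-index-templates=false
</code></pre>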
<h2 id="references">References</h2>
<ul>
<li><a href="https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-rollover-index.html">https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-rollover-index.html</a></li>
</ul>
How to install third-party plugins in Terraform2020-04-18T00:00:00-05:00https://dimmaski.com/install-external-plugins-terraform<p>Hashicorp already distributes plugins that it maintains along with the
community (<a href="https://github.com/terraform-providers">check them here</a>),
for instance,
<a href="https://github.com/terraform-providers/terraform-provider-aws">aws</a>.
That’s great, but soon enough you’ll need to use some provider that does
not work out of the box by running <code>terraform init</code>.</p>
<p>In this post I will be showing you the quickest way of installing
external plugins in terraform, without the need to dig deep into
Terraform’s docs. We will be installing the
<a href="https://github.com/phillbaker/terraform-provider-elasticsearch"> phillbaker / terraform-provider-elasticsearch</a>
as an example.</p>
<h1 id="terraform-plugins-directory">Terraform plugins directory</h1>
<p>When you installed terraform, a <code>~/.terraform.d/</code> directory was
created. To allow the use of external plugins, we will need to create the
<code>plugins</code> directory. Run the following command.</p>
<pre><code class="language-shell">mkdir -p ~/.terraform.d/plugins/linux_amd64
</code></pre>
<p>Right now, I’m running Terraform on a linux machine, hence the
<code>linux_amd64</code> directory. If you are running a different
operating system or architecture, please use the
<a href="https://www.terraform.io/docs/configuration/providers.html#os-and-architecture-directories">documentation</a>
to find the right one for you.</p>
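<p>If you’re unsure which directory name applies to your machine, <code>uname</code> gives you both halves of it.</p>
<pre><code class="language-shell"># print kernel name and machine architecture
uname -sm
# => Linux x86_64, which maps to linux_amd64
</code></pre>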
<h1 id="downloading-the-binary">Downloading the binary</h1>
<p>Most providers will expose their software packages via github releases,
so that’s the place to look for the right binary for your architecture.
For our example,
<a href="https://github.com/phillbaker/terraform-provider-elasticsearch/releases">this</a>
is the place.</p>
<pre><code class="language-shell">curl -sL https://github.com/phillbaker/terraform-provider-elasticsearch/releases/download/v1.0.0/terraform-provider-elasticsearch_v1.0.0_linux_amd64 --output ~/.terraform.d/plugins/linux_amd64/terraform-provider-elasticsearch_v1.0.0
</code></pre>
<p>Notice that we are outputting the binary to the
<code>~/.terraform.d/plugins/linux_amd64/</code> directory. Also, we are naming the
binary with the <code>terraform-provider-<NAME>_vX.Y.Z</code> format. This is
really important since it associates the <code>version = "X.Y.Z"</code> in the
terraform provider code with the actual binary (it’s possible to have
multiple versions of the same provider in your plugins folder).</p>
<p>Finally don’t forget to set the binary file as executable.</p>
<pre><code class="language-shell">chmod +x ~/.terraform.d/plugins/linux_amd64/terraform-provider-elasticsearch_v1.0.0
</code></pre>
<p>That’s all there is to it. Now you can use the provider like you always
do.</p>
<pre><code class="language-hcl-terraform">provider "elasticsearch" {
version = "1.0.0"
}
</code></pre>
<p>Run <code>terraform init</code>, and then you can run <code>terraform version</code> to make
sure that the installation has been successful.</p>
<pre><code class="language-shell">Terraform v0.12.18
+ provider.elasticsearch v1.0.0
</code></pre>
<p>Hope this helps!</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://www.terraform.io/docs/plugins/basics.html">https://www.terraform.io/docs/plugins/basics.html</a></li>
<li><a href="https://github.com/phillbaker/terraform-provider-elasticsearch">https://github.com/phillbaker/terraform-provider-elasticsearch</a></li>
<li><a href="https://www.terraform.io/docs/configuration/providers.html#os-and-architecture-directories">https://www.terraform.io/docs/configuration/providers.html#os-and-architecture-directories</a></li>
</ul>
Testing Ansible roles against ec2 instances with molecule2020-04-03T00:00:00-05:00https://dimmaski.com/molecule-with-ec2<p>Hey, so lately I’ve been trying to get more familiar with Ansible. This weekend I decided to test out <a href="https://molecule.readthedocs.io/en/latest/">molecule</a> with its ec2 driver.
Docker containers are cool and all, but they are limited; <code>systemd</code> is a pain, for example. Also, the ec2 driver gives you the possibility to use your own AMIs and choose the instance type, recreating your production environment even more closely.
All the solutions I found online seemed quite outdated; some of the molecule commands didn’t even work. So I decided to write down, step by step,
what I did to start testing roles against ec2 machines.</p>
<p>First, let’s create our virtualenv and install the needed dependencies there to reduce clutter.</p>
<pre><code class="language-shell"># requirements.txt
ansible==2.9.6
boto3==1.12.36
molecule==3.0.2
molecule-ec2==0.2
</code></pre>
<pre><code class="language-shell"># creating our virtualenv
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
# you can run pip freeze to check what you've installed
# let's init our molecule scenario
molecule init role test --driver-name ec2
</code></pre>
<p>By now we have installed all that we’ll need and created our scenario with molecule. You can go into the <code>test</code> folder and check what molecule
has already set up for us. Now, let’s get this baby rolling.</p>
<p>You will need to create a subnet “by hand” in your AWS account, where your instances will spawn.
Edit your <code>molecule.yml</code> inside the <code>molecule/default</code> folder, and define the <code>platforms</code> key. Here you could add a list of different instances to test the role against.</p>
<p>We will simply use a <code>t2.micro</code> instance, so that we keep things simple and, most importantly, free-tier eligible (I’m cheap).</p>
<pre><code class="language-yaml">platforms:
- name: test-instance
image: ami-0fc61db8544a617ed # standard linux AMI
instance_type: t2.micro
vpc_subnet_id: ${SUBNET_ID}
</code></pre>
<pre><code class="language-shell"># provide your subnet-id and the region
export SUBNET_ID=subnet-99894324fsddasd
export EC2_REGION=us-east-1
</code></pre>
<p>You can provide your AWS credentials by either running <code>aws configure</code> (you’ll need to have the <code>awscli</code> installed to do this), or via environment variables. Pick what suits you best.
Now we can try to run molecule with <code>molecule test</code>, and you’ll get both errors and deprecation warnings. Lovely, right? Let’s just close our eyes to how disengaging this is and fix it.</p>
<p>In my case, molecule was able to spin up an ec2 instance, but later on wasn’t able to connect to it via ssh (<code>Permission denied (publickey, gssapi-keyex, gssapi-with-mic)</code>). Gosh…
Edit your <code>create.yml</code>: change the <code>ssh_user</code> from the default value <code>ubuntu</code> to <code>ec2-user</code>, and also replace <code>ec2_ami_facts</code> with <code>ec2_ami_info</code>.
Next, remove the <code>playbook.yml was deprecated</code> warning by renaming your <code>playbook.yml</code> to <code>converge.yml</code>.
Now run <code>molecule test</code> again, and you should be all set up.</p>
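<p>A tip while iterating: instead of the full <code>molecule test</code> cycle (which destroys the instance at the end), you can run the stages one at a time.</p>
<pre><code class="language-shell">molecule create    # spin up the ec2 instance
molecule converge  # apply the role to it
molecule verify    # run the verifier
molecule login     # ssh into the instance to debug
molecule destroy   # tear everything down
</code></pre>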
<p>Hope this helps.</p>
Looking for maximum productivity #1: fasd vs autojump vs jump2020-03-25T00:00:00-05:00https://dimmaski.com/looking-for-max-terminal-prod<p>Moving in the terminal is one of those things… fun the first three
times, and an absolute pain the rest of them. Most terminal users quickly
find out that there are better ways of roaming inside the
filesystem: using aliases, and using tab to autocomplete. After a while
your <code>rc</code> file becomes piled with stuff like <code>alias p='cd
$HOME/Projects'</code>, and things start to get hard to manage from there. You
begin forgetting things, and there isn’t much point in maintaining
twenty aliases if you will only remember four of them in three weeks.
One can do better. In the near future, I’ll be posting more about
terminal utilities. Today’s post is about <code>fasd</code>, my favorite “directory
jumper”.</p>
<p>I will talk about the installation process, tweaks, and outcome.</p>
<p>If you use zsh you have some decent options besides <code>fasd</code>; I tried <code>jump</code> and <code>autojump</code>. The downside of jump is that you have to mark each of the directories you want to be able to jump to; it’s basically a fancy
way of creating aliases. <code>autojump</code> keeps track of the directories you visit the most and makes your life easier by remembering them for you.</p>
<p>So if you want to go to <code>~/Documents/Github/your-great-code-repo/</code>, all you have to type is <code>j grea</code>, or <code>j code-r</code>. <code>autojump</code> will know how to get you there because it remembers all the paths you’ve been to (<code>j</code> is short for <code>autojump</code>).</p>
<p><code>autojump</code> was almost everything I looked for; the only problem I found was the auto-complete. It was dirty and buggy. When using tab completion, the file names
were prepended with <code>*__N__/*</code>, which made it unbearably ugly to look at. Also, sometimes, when using tab completion the same folder would appear more than once, for no apparent reason, polluting the screen and forcing me
to spend more time deciding which option to pick. <a href="https://github.com/wting/autojump/issues/348">Read about the issue here</a>.</p>
<p>The right option for me was ultimately <a href="https://github.com/clvv/fasd">fasd</a>.</p>
<h1 id="fasd">fasd</h1>
<p>As described in the repo README, the installation on macOS was quite straightforward. I installed it via brew (<code>brew install fasd</code>), and added the following to my <code>.zshrc</code> file.</p>
<pre><code class="language-shell">...
plugins=(... fasd)
...
# fasd init
eval "$(fasd --init zsh-wcomp-install zsh-hook zsh-ccomp)"
alias j='fasd_cd -d'
...
</code></pre>
<p>The <code>j</code> alias makes <code>fasd</code> feel the way <code>autojump</code> was supposed to.
Note that <code>fasd</code> is designed to do more than changing directories, but that’s the feature I was aiming to discuss in this post.</p>
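<p>To give you a taste of that, here is the kind of extra aliases the fasd README suggests (pick your own editor/opener; these are illustrative, not part of my setup).</p>
<pre><code class="language-shell"># open the best file match in vim, e.g. `v conf`
alias v='f -e vim'
# open the best match with your desktop opener, e.g. `o report`
alias o='a -e xdg-open'
</code></pre>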
<p>I feel like I kind of rage posted this, things like this should be simple by default. Take care.</p>
How I started deploying my jekyll blog with Github Actions in 15 minutes2020-03-21T00:00:00-05:00https://dimmaski.com/deploy-jekyll-with-github-actions<p>This week I decided to take a look at
<a href="https://github.com/features/actions">Github Actions</a>, and found out
that the free tier includes 2k minutes of <strong>free</strong> build time.</p>
<p>In the
past I deployed this jekyll blog by triggering some commands with a
<code>pre-push</code> git hook. My blog is hosted in cloudfront and deploying it is
as simple as running a <code>s3 cp</code> command with <code>awscli</code>.</p>
<pre><code class="language-shell">#!/bin/bash
# deploy
if [[ -n $AWS_ACCESS_KEY_ID ]] && [[ -n $AWS_SECRET_ACCESS_KEY ]]; then
cd site
bundle exec jekyll build
aws s3 cp _site/ s3://dimmaski/ --recursive
echo "Finished deploy"
exit 0
else
echo "AWS creds not set."
exit 1
fi
</code></pre>
<p>I had the following git hook.</p>
<pre><code class="language-shell"># .git/hooks/pre-push
protected_branch='master'
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
if [ $protected_branch = $current_branch ]; then
./deploy
exit $?
else
exit 0
fi
</code></pre>
<p>In case you are not familiar with git hooks, they are pretty straightforward.
You tie a script (e.g. a series of commands) to a given git
action: <code>pre-push</code>, <code>pre-commit</code>, etc. (<code>ls .git/hooks</code> to find all the
triggers). Git hooks got me going initially, but they’re not the best option,
since they force me to load my AWS credentials into my running shell
every time I want to push, plus they make it annoying to write a post on
another machine. So I found the right alternative that is working for
me, at least for now.</p>
<h2 id="github-actions">Github Actions</h2>
<p>What you came here for. Github allows you to define a <code>yaml</code> file that
lives inside your repo and contains all the steps necessary to build and
deploy your app. You can find lots of plugins in the
<a href="https://github.com/marketplace?type=actions">marketplace</a>. To achieve
the purpose of this post I only needed to look for one specific action,
<code>jakejarvis/s3-sync-action</code>, since I decided to run my <code>build</code> step inside a docker
container.</p>
<pre><code class="language-yaml">
name: Jekyll site CI
on:
push:
branches: [ master ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Build the site in the jekyll/builder container
run: |
docker run \
-v ${{ github.workspace }}/site:/srv/jekyll -v ${{ github.workspace }}/_site:/srv/jekyll/_site \
jekyll/builder:latest /bin/bash -c "chmod 777 /srv/jekyll && jekyll build --future"
- uses: jakejarvis/s3-sync-action@master
name: push site to s3 bucket
with:
args: --follow-symlinks --delete
env:
AWS_S3_BUCKET: 'dimmaski'
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_REGION: 'us-west-1'
SOURCE_DIR: '_site'
</code></pre>
<p>The CI/CD workflow, if we can even call it that, for a jekyll site
deployed in s3 is quite straightforward, even more so when described in
<code>yaml</code>. The only tricky part is that two-line <code>docker run</code> command, which
looks horrendous but does its job. We are setting two volumes: <code>${{
github.workspace }}/site</code> mounts our repo inside the container, and <code>${{
github.workspace }}/_site</code> allows us to retrieve the static files
generated inside the container back into our workspace. (Note that I have
a folder called <code>site</code> in my repository where my jekyll files live. If
you’re hosting your files in the root of your repo you should only
specify <code>${{ github.workspace }}</code>.) The deploy step is encapsulated in
<code>jakejarvis/s3-sync-action</code>. The secrets live inside the repository settings:
go to <code>Settings > Secrets > Add a new secret</code>.</p>
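<p>If clicking through the UI bothers you, recent versions of the GitHub CLI can also set repository secrets from the terminal (it prompts you for each value).</p>
<pre><code class="language-shell">gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
</code></pre>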
<p>That’s it. Simple!</p>
Find your SSH keys based on their fingerprint2020-03-15T00:00:00-05:00https://dimmaski.com/managing-ssh-keys<p>I’m a sloppy guy; I have a tendency to forget things. I work on more
than one computer, and at the time of writing this post I’m not using a
password manager like <code>gopass</code> (yes, I hate myself that much). That has
given me the opportunity to create a small chaotic environment when it
comes to managing passwords and keys. It got to the point where I couldn’t
differentiate between my ssh keys across different machines, accounts, and
similar key names.</p>
<p>So if you’re trying to figure out which public key that
<code>lol:lal:lel:023:lol</code> fingerprint represents, you’re in the right place.</p>
<h1 id="fingerprints">Fingerprints</h1>
<p>A fingerprint is the MD5 hash of the binary data within the
base64-encoded ssh public key, meaning it unambiguously identifies that key.</p>
<h1 id="managing-keys-on-github">Managing keys on Github</h1>
<p>So, genius as I am, at some point in time I decided that naming this key
<code>ssh</code> would be enough for me to identify it later on. That obviously was
not the case.</p>
<p><img src="/assets/images/ssh.jpg" alt="logo" title="github keys" /></p>
<p>So where should you look if you have a similar issue? The Github web UI
only allows you to see your public keys’ fingerprints, and based on that
you can find the corresponding public key.</p>
<p>Running the following command allows you to get the fingerprint of a
given key. With this you can easily find all the key fingerprints inside
your <code>.ssh</code> directory.</p>
<pre><code class="language-shell">ssh-keygen -E md5 -lf ~/.ssh/<key_name>.pub
</code></pre>
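<p>To sweep your whole <code>.ssh</code> directory in one go, a small loop does the trick.</p>
<pre><code class="language-shell"># print the fingerprint and file name for every public key you have
for key in ~/.ssh/*.pub; do
  printf '%s %s\n' "$(ssh-keygen -E md5 -lf "$key" | awk '{print $2}')" "$key"
done
</code></pre>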
<p>The Github API is more permissive than the web interface, and you can
actually retrieve your public keys directly from it.</p>
<p>These one-liners can be useful.</p>
<pre><code class="language-shell"># to list all your key fingerprints
curl -sL https://github.com/<user>.keys | while read; do ssh-keygen -E md5 -lf - <<<"$REPLY"; done | awk '{print $2}'
# to list all your public keys
curl -sL https://github.com/<user>.keys | while read; do echo "$REPLY\n"; done
</code></pre>
<p>Hope this helps.</p>
Understanding the basics of Prometheus / grafana / telegraf stack2020-03-14T00:00:00-05:00https://dimmaski.com/prometheus-telegraf-grafana<p>In the last few days I tried to learn something which is totally new to me: monitoring.
I still haven’t drawn many deep conclusions, but here’s where I’m at.</p>
<h1 id="monitoring-as-a-thing">Monitoring as a “thing”</h1>
<p>Enough chachacha; regarding monitoring:</p>
<p>We have many options: Prometheus, InfluxDB… (can’t remember more, sorry; reading random google findings isn’t all that didactic after all). Well, let’s pick one for now, stick with it, and try to understand how it works and what we can get out of it.
In this post, we will be looking at Prometheus, Grafana, and Telegraf.
I won’t show you a real-world example, because I haven’t actually dealt with and understood one completely, yet.
So let’s start with the BASICS. SIMPLE things.</p>
<h2 id="prometheus">Prometheus</h2>
<p>Prometheus works primarily as a time-series database, saving key/value pairs for you. What do these key-value pairs actually hold?
What problems does it solve? Well, in the mainstream, people have been using it to hold mostly information about their systems: CPU, disk usage (I/O), etc.,
but with enough tweaking, you can also make your apps expose endpoints that serve custom metrics.
You can also trigger alerts based on these metrics with Alertmanager, but that is a topic for another time. People don’t want to bother looking at screens all the time, hence the need for a 01 friend (Slack, VictorOps, etc).
Like most DBs, it can be queried. It also comes with a modest web UI that isn’t really that complete, which justifies the need for Grafana.</p>
<h2 id="grafana">Grafana</h2>
<p>Well, put it this way: it allows you to create dashboards using the key-value pairs Prometheus allows it to read,
giving you the opportunity to build the cockpit dashboard you probably won’t bother to look at as much as you should.</p>
<h2 id="telegraf">Telegraf</h2>
<p>Our “producer”. Here are the basics: it’s highly customizable with plugins (input and output).
You use input plugins to gather your system’s information, and output plugins to allow some other “piece of software” to scrape that information. In this case we will use Prometheus, although Telegraf is most commonly used with InfluxDB, another InfluxData product.
It’s starting to get annoying having to deal with all these names, “tools”, rockstar companies and their .io websites.
I miss the times when I naively thought all servers ran Linux and that C was the language of the Gods, but one has to grow, right? Let’s not lose focus.</p>
<h2 id="docker-compose-stack">Docker-compose stack:</h2>
<pre><code class="language-yaml">version: "3"
services:
prometheus:
image: quay.io/prometheus/prometheus:v2.0.0
volumes:
- ./monitor/prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command: --config.file=/etc/prometheus/prometheus.yml
ports:
- 9090:9090
depends_on:
- telegraf
telegraf:
image: telegraf:1.8
volumes:
- ./monitor/telegraf.conf:/etc/telegraf/telegraf.conf:ro
ports:
- 9100:9100
grafana:
image: grafana/grafana
volumes:
- grafana_data:/var/lib/grafana
ports:
- 3000:3000
depends_on:
- prometheus
volumes:
prometheus_data: {}
grafana_data: {}
</code></pre>
<p>The same compose jaba-daba as usual: exposing ports and using volumes to mount config files. You can find them <a href="https://github.com/dimmaski/grafana-prometheus-telegraf">here</a>.</p>
<p>Run it with <code>docker-compose -p Telegraf-Prometheus-Grafana up</code>. Head over to <a href="http://localhost:9090/targets">prometheus/targets</a>. As you can see, Prometheus is collecting metrics from two places: our Telegraf service, and Prometheus itself.
By default, it scrapes information from the <code>/metrics</code> endpoint. Head over to the main page and execute a query based on some metric (provided by the input plugins). If you’re simply lazy, click <a href="http://localhost:9090/graph?g0.range_input=1h&g0.expr=cpu_usage_iowait&g0.tab=0">here</a>.
Kinda basic, right? How did we manage to get Telegraf and Prometheus to communicate?</p>
<pre><code class="language-yaml"># monitor/prometheus.yml
- job_name: telegraf
scrape_interval: 15s
static_configs:
- targets: ['telegraf:9100']
</code></pre>
<pre><code class="language-config"># monitor/telegraf.conf
# Configuration for the Prometheus client to spawn
[[outputs.prometheus_client]]
# /metrics exposed by default
listen = "telegraf:9100"
</code></pre>
<p>As you can see, <code>prometheus_client</code> is an output plugin: it exposes information. If you dig through <code>telegraf.conf</code> you will also find the input plugins that allow Telegraf to read the system’s data,
for example the cpu, disk, diskio, kernel, and mem inputs.</p>
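<p>If you’d rather stay in the terminal, you can also send a query straight to Prometheus’ HTTP API (the metric name below comes from Telegraf’s cpu input, the same one the graph link above uses).</p>
<pre><code class="language-shell"># instant query; the answer comes back as JSON
curl -s 'http://localhost:9090/api/v1/query?query=cpu_usage_iowait'
</code></pre>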
<p>Now, time to check <a href="http://localhost:3000/login">grafana</a>. Write the usual 2-word magic (admin / admin). Go <a href="http://localhost:3000/datasources/edit/3/?gettingstarted">here</a> and add <code>http://prometheus:9090</code>.
Now feel free to click buttons and such. Create your own dashboard; in the Metrics tab of the query thingy, you will find the information that your Telegraf inputs are reading. It seems like magic.</p>
<p>There is much more to talk about, and many more questions. How would this setup be built in the cloud, in a microservice-oriented environment? How would we deal with security groups, service discovery, etc.?</p>
<p>Well, this is my first approach to this topic. It’s a local, dummy, basic setup, but it’s a starting point, at least for me. Let’s see where we go from here.</p>
<p>I think this post has already gotten way bigger than I anticipated or wanted, so I will leave it here. I hope I was able to help someone. Now that you have this basic setup,
you can tweak it to your liking, experiment with Telegraf plugins, or try another “reporter” like node-exporter instead of Telegraf.</p>
<p>Don’t forget to clean up those containers, ma’ men.</p>
Working with .env files in docker and docker-compose2020-03-01T00:00:00-06:00https://dimmaski.com/.env-files-docker<p>One thing that annoyed me in the past week was dealing with environment
variables in docker-compose. I dug through the web and found out
that some good fellas had already discussed and solved my problems over
here,
<a href="https://github.com/docker/compose/issues/6170">–env-file option #6170</a>,
and here,
<a href="https://github.com/docker/compose/pull/6535">Support for –env-file option for docker-compose #6535</a>.
I decided to make that a little bit more obvious for anyone lost enough to end up here.</p>
<p>The first thing I should clarify is that both docker and docker-compose
have a <code>--env-file</code> option. Although similar, they are meant to achieve
different purposes. Let’s try to understand how they work by example.</p>
<h2 id="--env-file-in-docker"><code>--env-file</code> in docker</h2>
<p>Creating this file in my current directory.</p>
<pre><code class="language-text"># .env.container
TEST1=Remember this: "Be it a rock or a grain of sand, in water they sink as the same." - Woo-jin Lee
</code></pre>
<p>And then running <code>docker run -it --env-file .env.container ubuntu:latest
bash -c "echo \$TEST1"</code> we find out from the output that the contents of
<code>.env.container</code> are loaded inside the ubuntu container. The echo
command is written with a backslash to prevent my current shell from
expanding that variable. Conclusion? Running <code>docker run --env-file</code> gets
your <code>.env</code> variables exported inside the container.</p>
<h2 id="--env-file-in-docker-compose"><code>--env-file</code> in docker-compose</h2>
<p>Now, what if we want to replicate this exact behaviour but using
docker-compose?</p>
<pre><code class="language-yaml"># docker-compose.yml
version: '3.7'
services:
ubuntu:
image: ubuntu:latest
entrypoint: bash -c
tty: true
command:
- echo $$TEST1
env_file:
- .env.container
</code></pre>
<p>After defining our ubuntu service, we can run it with <code>docker-compose
run ubuntu</code>, and see that it prints out exactly the same as our docker
command.</p>
<p>Where am I going with this? Well, let’s try to feed our compose with
some variables read from <code>.env.compose</code> and understand how
<code>--env-file</code> behaves in docker-compose.</p>
<pre><code class="language-text"># .env.compose
TEST2=I thought I'd lived a simple life. But I've sinned too much - Dae-su Oh
TAG_UBUNTU=18.04
</code></pre>
<p>Let’s update our compose to look like this.</p>
<pre><code class="language-yaml">version: '3.7'
services:
ubuntu:
image: ubuntu:${TAG_UBUNTU:-latest}
entrypoint: bash -c
tty: true
command:
- echo $$TEST1; echo $TEST2; echo $$TEST2
environment:
- TEST2=$TEST2
env_file:
- .env.container
</code></pre>
<p>Check this out. <code>docker-compose --env-file .env.compose config</code> outputs:</p>
<pre><code class="language-yaml">services:
ubuntu:
command:
- echo $$TEST1; echo I thought I'd lived a simple life. But I've sinned too much
- Dae-su Oh; echo $$TEST2
entrypoint: bash -c
environment:
TEST1: 'Remember this: "Be it a rock or a grain of sand, in water they sink
as the same." - Woo-jin Lee'
TEST2: I thought I'd lived a simple life. But I've sinned too much - Dae-su
Oh
image: ubuntu:18.04
tty: true
version: '3.7'
</code></pre>
<p>Interesting… let’s run this and try to draw conclusions after.</p>
<p>Running <code>docker-compose --env-file .env.compose run ubuntu</code> outputs</p>
<pre><code class="language-text">Remember this: "Be it a rock or a grain of sand, in water they sink as the same." - Woo-jin Lee
I thought Id lived a simple life. But Ive sinned too much - Dae-su Oh
I thought I'd lived a simple life. But I've sinned too much - Dae-su Oh
</code></pre>
<p>I added some echoes of variables for us to better understand what’s going
on.</p>
<p>How is compose dealing with the variables? We are clearly importing
<code>TAG_UBUNTU</code> from <code>.env.compose</code>. And compose seems to understand really
well what to pass to our container using our <code>environment</code> and
<code>env_file</code> tags. Looking at the echo command in <code>docker-compose config</code>,
<code>$$TEST1</code> is not replaced by any value. That seems obvious, since
<code>env_file</code> feeds that variable directly to the container (it’s not
available for compose to use). What about <code>TEST2</code>? Well, we can see that
<code>$TEST2</code> is being replaced; that’s because compose has direct access to
it (<code>.env.compose</code>). <code>$$TEST2</code> is not replaced when we look at the compose
config, but we see that the variable is outputted when we run the
container. <code>$$</code> is basically telling compose not to expand your local
<code>TEST2</code> variable, but to print the container’s one. The thing is, when
you run <code>docker-compose config</code> you are not actually running containers
and executing commands. The config command is simply showing you what
compose is interpreting from the yaml and variables, whether they are placed
in a <code>.env</code> file or exported directly in your shell.</p>
<p>I hope that this helped someone in need.</p>
<h2 id="takeaways">Takeaways</h2>
<ul>
<li>In docker-compose you can import the variables directly to the container in your yaml
with <code>env_file</code>.</li>
<li>You can import the variables in your docker-compose, e.g.
<code>docker-compose --env-file .env.compose run ubuntu</code>, pick up the
variables in the <code>environment</code> tag like we saw above with <code>TEST2</code>, and
feed them to the container.</li>
<li>When inside compose: <code>$VAR</code> refers to a “local” compose’s variable,
while <code>$$VAR</code> refers to a container’s var, escaping interpolation in
compose.</li>
</ul>
<p>Versions:</p>
<pre><code class="language-text">Docker version 19.03.5, build 633a0ea838
docker-compose version 1.25.4, build unknown
</code></pre>
Configuring Thinkpad X230 keys in i32019-12-11T00:00:00-06:00https://dimmaski.com/configure-thinkpad-i3-buttons<p>So you’ve just started to use the i3 window manager but can’t get your
Thinkpad multimedia keys to function properly? Here is how to fix that: open your i3
config file in your favourite text editor (<code>code ~/.config/i3/config</code>).
First of all, let’s configure the XF86Launch1 button. I like to have it
configured to pop up a terminal when pressed.</p>
<pre><code class="language-bash">bindsym XF86Launch1 exec gnome-terminal
</code></pre>
<p>Next, the volume and mute buttons.</p>
<pre><code class="language-bash"># Pulse Audio controls
# +/- sound volume
bindsym XF86AudioRaiseVolume exec --no-startup-id pactl -- set-sink-volume 0 +10%
bindsym XF86AudioLowerVolume exec --no-startup-id pactl -- set-sink-volume 0 -10%
# mute volume
bindsym XF86AudioMute exec --no-startup-id pactl set-sink-mute 0 toggle
# mute mic
bindsym XF86AudioMicMute exec pactl set-source-mute 1 toggle
</code></pre>
<p><strong>Extra:</strong> Not a special Thinkpad button, but to get PrtSc (print
screen) to work add the following:</p>
<pre><code class="language-bash">bindsym Print exec gnome-screenshot -i
</code></pre>