garritfra / garrit.xyz

Personal Website

Home Page: https://garrit.xyz

License: MIT License

JavaScript 0.97% Dockerfile 4.70% SCSS 36.12% Shell 1.54% Go 2.50% TypeScript 53.47% Makefile 0.70%

garrit.xyz's Introduction

garrit.xyz

This is the repository for my personal website.

Generating posts

Running the following command will generate a new blog post with the necessary boilerplate.

./contrib/gen-post.sh My first post
# -> 2021-04-12-my-first-post.md
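The script lives in contrib/gen-post.sh. A minimal version of it might look like this (a sketch, not the actual script — it just slugifies the title and prefixes the date):

#!/bin/sh
# Sketch only: slugify the title and create a dated markdown file.
title="$*"
slug=$(echo "$title" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
file="$(date +%F)-$slug.md"
printf -- '---\ntitle: %s\ndate: %s\n---\n' "$title" "$(date +%F)" > "$file"
echo "$file"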

https://garrit.xyz

garrit.xyz's People

Contributors

adastx, benjifs, code-factor, copyrip, dependabot[bot], garritfra, github-actions[bot], hacdias, imgbotapp, jr-b, lippoliv, nice42q, pernat1y, renovate[bot], snyk-bot, tianheg, yigitsever


garrit.xyz's Issues

Principles of DevOps: Flow

This post is part of a series called Principles of DevOps.

"Flow" refers to the performance of a system, as opposed to the performance of a specific silo or department.

In our daily work, we often only see what's inside our own silo. As a developer, we see requirements coming in and code going out. In operations, we see code being pushed to a repository and pipelines deploying it to production. However, it's crucial to understand the flow of work in a broader context.

Most services or products have a "value stream" which describes how work is performed. You can think of the value stream as a conveyor belt with multiple work centers. The first work center might be the business department, followed by a design team, a dev team, QA, operations and finally the customer. This structure might look different depending on what product or service you are building.

The flow of work should always go in one direction

Work is typically generated at some point along the value stream: a requirement from the business department, an issue created by the QA team, an incident in operations, feedback from a customer, and so on.

Regardless of where the work originates, the flow of work should always go in one direction: forward. Work moving backwards, or even standing still, introduces bottlenecks that prevent downstream work centers from working properly. Always seek to resolve these bottlenecks as soon as possible.

Always seek to increase flow

The goal of almost any product or service is to bring value to the customer. More work flowing through the system means more value generated. But how do we increase the flow of work?

Eliminate work in progress. Piled-up work in progress almost always indicates a bottleneck in one of the work centers. Work should always flow smoothly through the system. If a ticket is stuck in one department for too long, ask why and how this can be avoided in the future.

Match the pace of the customer. If the value stream is pumping out more features than the customer demands, you are not generating value for the business.

Reduce work batch size. By iterating in small steps, you can adapt to changes more quickly. Split requirements into smaller tickets and increase the number of deployments.

If you want to learn more about how to increase flow in a system, see the Theory of Constraints and the Toyota Production System.

Never unconsciously pass known defects downstream

Aim to fix problems immediately as they occur, especially if they occur higher up in the value stream. Everyone should feel responsible for the work of the entire value stream instead of just their own work center.

If you notice the same problems being introduced multiple times, the upstream work center is likely unaware of them. Raise awareness immediately and work together on resolving the cause.

Never allow local optimization to create global degradation

Optimizing local work is important, but it should never introduce friction in other work centers and, by extension, decrease the performance of the value stream.

Local optimization is often linked to the "tribal warfare" between departments (e.g. development vs. operations, business vs. development, etc.).

Conclusion

Understanding and optimizing the flow of work within a value stream is crucial for achieving efficient and effective software delivery. By ensuring that work moves in one direction and continuously seeking to increase the flow, we can generate more value for customers and the business.

Eliminating bottlenecks, matching the pace of the customer, and reducing work batch size are all key strategies to enhance flow. Moreover, actively addressing known defects, promoting collaboration across the value stream, and avoiding local optimizations that hinder overall performance are essential for achieving successful outcomes. By embracing these principles, we can unlock the full potential of DevOps and drive continuous improvement in software delivery processes.


This is post 073 of #100DaysToOffload.

Visual Distractions

Everywhere we look, we're bombarded with flashy symbols trying to grab our attention. That's even true in places where we think we're in control of what we're looking at. I made two simple changes that reduce visual distractions in my life.

Android App Icons

App icons play a serious role in how we interact with our phones. Over the years, there has been a constant battle for the flashiest icon on our home screens. But there's a cure: newer versions of Android let you choose a color theme for apps that implement it. Far from every app supports it, but in my case 90% of the app icons now share the same color. I feel way more comfortable looking at my phone, knowing that fewer things are trying to grab my attention right when I unlock it.

With this change, I found that I am more mindful about which app icon I tap, since I was used to each icon having a different color. This makes it harder for my muscle memory to develop bad habits.

RSS-Reader Favicons

If you're using an RSS reader, chances are you're used to seeing a favicon next to the articles. I had the feeling that I was drawn more towards the favicon than the headline of the article, so I started looking for ways to disable favicons altogether.

Miniflux provides a way to override the stylesheet of the feed in the settings. Simply append the following code snippet and the favicons will be history:

.item-title img, .entry-website img {
  display: none;
}

Of course every reader is different, so you might want to look into the documentation of your reader of choice.

Conclusion

These changes might seem insignificant, but I found that they made a huge difference in how I interact with my phone. The suggestions above might not apply to your life, but I'd like to encourage you to keep an eye out for unnecessary visual distraction in your life. Try to avoid it as much as possible.


This is post 051 of #100DaysToOffload.

Fullscreen Terminals in VSCode

I often find myself using a "real" terminal alongside my VSCode setup, because the built-in terminal, due to its small size, is quite fiddly for some tasks. But! I just found out there's a way to switch the terminal into fullscreen mode, using the "View: Toggle Maximized Panel" command.

You can bind it to a shortcut, which makes switching between editor and terminal a breeze! Simply add this to your keybindings.json (also accessible via the command palette):

    {
        "key": "cmd+alt+m",
        "command": "workbench.action.toggleMaximizedPanel"
    }


This is post 059 of #100DaysToOffload.

Pachinko

On my trip to Japan this June, I came across various gambling halls. Japan is famous for its bright and flashy culture, so I decided to take a walk through one of them. What I saw left me completely baffled.

Where I'm from, gambling halls can be quite noisy. But in Japan, they're on another level. To set the mood, make yourself comfortable and listen to this video:

Pachinko hall ambience (video): https://www.youtube.com/watch?v=iADzWQj4Qz0

Terraform and Kubernetes are fundamentally different

On the surface, Infrastructure as Code tools like Terraform or CloudFormation may seem to behave similarly to Kubernetes YAMLs, but they are in fact fundamentally different approaches to cloud infrastructure.

Terraform tries to provide a declarative way to express imperative actions. If you tell Terraform that you need an EC2 instance, it will notice that no such resource exists and instruct the AWS API to create one. If you don't need the instance anymore and remove the resource definition from your code, Terraform will also pick that up and instruct the AWS API to delete the instance. This works well in most cases, but every once in a while the declarative state may get out of sync with the real world, resulting in errors that are hard to debug and resolve.
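To make that concrete, here is a minimal sketch (the AMI ID and names are placeholders, not from a real configuration): declaring this resource makes Terraform create the instance on the next apply, and deleting the block later makes Terraform destroy it.

# Sketch only -- placeholder values.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}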

Kubernetes on the other hand is a fully declarative system. In a previous post I touched on how Kubernetes constantly compares the desired state with the actual state of the resources and tries to match the two. Although it is theoretically possible to issue imperative actions, Kubernetes is built from the ground up to be declarative.

Serverless Framework Retrospective

A current project requires the infrastructure to be highly scalable. It's expected that more than 50,000 users hit the platform within a five-minute period. Regular ECS containers take about one minute to scale up. That just won't cut it. I decided to go all in on the Serverless Framework on AWS. Here's how it went.

Setup

Setting up a serverless application was a breeze. You create a config file and use their CLI to deploy the app.
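For reference, a minimal config might look something like this (a sketch; the service name and handler are made up):

service: my-service # made-up name
provider:
  name: aws
  runtime: nodejs18.x
functions:
  api:
    handler: handler.main # made-up handler
    events:
      - httpApi: '*'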

The rest of the infrastructure

I decided to define the rest of the infrastructure (VPC, DB, cache, ...) in Terraform. But since I wasn't familiar with how the Serverless Framework worked, I struggled to draw the line between what Serverless should handle vs. what the rest of the infrastructure (Terraform) should provide. In a more traditional deployment workflow, you might let the CI deploy a container image to ECR and point the ECS service to that new image.

I chose to let Serverless deploy the entire app through CI and build the rest of the infrastructure around it. The problem with this approach is that we lost fine-grained control over what's deployed where, which led to a lot of permission errors.

In retrospect, I should've probably chosen the location of the S3 archive as the deployment target for the CI, and then pointed the Lambda function to the location of the new artifact. This defeats the purpose of the framework, but it gives you a lot more control over your infrastructure. Once the next project comes along, I'll probably go that route instead.

Permissions

Serverless suggests using admin permissions for deployments, and I see where they're coming from. Managing permissions in this framework is an absolute mess. Here's what the average deployment workflow looks like if you want to use fine-grained permissions:

  1. Wait for CloudFormation to roll back changes (~2 minutes)
  2. Update IAM role
  3. Deploy Serverless App
  4. If there's an error, go to 1

Thankfully, some people have already gone through the process of figuring this out. Here's a great guide with a starting point of the needed permissions.

Conclusion

Using the Serverless Framework is a solid choice if you just want to throw an app out there. Unfortunately, the app I was deploying isn't "just" a dynamic website. The next time I'm building a serverless application, it probably won't be with the Serverless Framework, though I learned a lot about serverless applications in general.


This is post 067 of #100DaysToOffload.

Pods vs. Containers

In Kubernetes, pods and containers are often confused. I found a great article going over the differences between the two terms.

Containers and Pods are alike. Under the hood, they heavily rely on Linux namespaces and cgroups. However, Pods aren't just groups of containers. A Pod is a self-sufficient higher-level construct. All pod's containers run on the same machine (cluster node), their lifecycle is synchronized, and mutual isolation is weakened to simplify the inter-container communication. This makes Pods much closer to traditional VMs, bringing back the familiar deployment patterns like sidecar or reverse proxy.

In my own words: containers are made up of Linux namespaces and cgroups. Pods can be thought of as a cgroup of cgroups (though not really), mimicking the behavior of a virtual machine that runs multiple containers with a synchronized lifecycle. The containers in a pod are loosely isolated, making it easy for them to communicate with each other. Containers in a pod can, however, set individual resource requests, enabled by cgroups.
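A minimal pod manifest illustrates this (names and images are just examples, not from the article): two containers share the pod's network namespace and lifecycle, yet each declares its own resource requests.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar # example name
spec:
  containers:
    - name: app
      image: nginx:1.25 # example image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
    - name: sidecar
      image: busybox:1.36 # example sidecar, reachable from "app" via localhost
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: 50m
          memory: 64Mi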

I'd highly encourage you to check out the original article if you want to learn more about this topic.

Terraform project learnings

I just finished my first-ever infrastructure project for a client. My Terraform skills are enough to be dangerous, but during this project I learned about a lot of things I would do differently next time.

Project structure

Having worked with semi-professional Terraform code before, I applied what I knew to my new project: mainly, a shared base and an overlay directory for each environment. I went with a single Terraform module for the shared infrastructure, and variables for each environment. Naively, I gave roughly every service its own file.

.
├── modules
│   └── infrastructure
│       ├── alb.tf
│       ├── cache.tf
│       ├── database.tf
│       ├── dns.tf
│       ├── ecr.tf
│       ├── ecs.tf
│       ├── iam.tf
│       ├── logs.tf
│       ├── main.tf
│       ├── network.tf
│       ├── secrets.tf
│       ├── security.tf
│       ├── ssl.tf
│       ├── state.tf
│       └── variables.tf
├── production
│   ├── main.tf
│   └── secrets.tf
└── staging
    ├── main.tf
    └── secrets.tf

This worked very well at first, but I soon started running into issues extending the setup. For my next project, I would probably identify individual components and turn them into smaller, reusable submodules. If I were to rewrite the project above, I would structure it like this (not a complete project, but I think you get the idea; a sketch of how the submodules are wired together follows after the tree):

.
├── modules
│   └── infrastructure
│       ├── main.tf
│       ├── modules
│       │   ├── database
│       │   │   ├── iam.tf
│       │   │   ├── logs.tf
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   ├── rds.tf
│       │   │   └── variables.tf
│       │   ├── loadbalancer
│       │   │   ├── alb.tf
│       │   │   ├── logs.tf
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   └── variables.tf
│       │   ├── network
│       │   │   ├── dns.tf
│       │   │   ├── logs.tf
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   ├── ssl.tf
│       │   │   ├── variables.tf
│       │   │   └── vpc.tf
│       │   ├── service
│       │   │   ├── ecr.tf
│       │   │   ├── ecs.tf
│       │   │   ├── iam.tf
│       │   │   ├── logs.tf
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   └── variables.tf
│       │   └── state
│       │       ├── locks.tf
│       │       ├── main.tf
│       │       ├── outputs.tf
│       │       ├── s3.tf
│       │       └── variables.tf
│       ├── outputs.tf
│       └── variables.tf
├── production
│   ├── main.tf
│   └── secrets.tf
└── staging
    ├── main.tf
    └── secrets.tf
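The shared module's main.tf would then mostly just wire the submodules together. A rough sketch (the module outputs and variable names below are assumptions, not the actual project):

# Sketch: modules/infrastructure/main.tf composing submodules.
# All output and variable names are hypothetical.
module "network" {
  source = "./modules/network"
}

module "database" {
  source     = "./modules/database"
  subnet_ids = module.network.private_subnet_ids # hypothetical output
}

module "service" {
  source            = "./modules/service"
  vpc_id            = module.network.vpc_id    # hypothetical output
  database_endpoint = module.database.endpoint # hypothetical output
}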

Secrets

I decided to use git-crypt to manage secrets, but that was before I learned about SOPS. It's too late to migrate this project, but I would choose SOPS for secrets any day of the week for upcoming ones. It even has a Terraform provider, so there's no excuse not to use it. ;)
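For illustration, reading a SOPS-encrypted file from Terraform via the community carlpett/sops provider looks roughly like this (a sketch; the file name and secret key are assumptions, so check the provider docs before relying on it):

# Sketch using the community SOPS provider -- names are assumptions.
data "sops_file" "secrets" {
  source_file = "secrets.enc.yaml"
}

resource "aws_db_instance" "db" {
  # <snip>
  password = data.sops_file.secrets.data["db_password"]
}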

Conclusion

Overall I'm pretty happy with how the project turned out, but there are some things that I learned during this project that will pay off later.


This is post 057 of #100DaysToOffload.

What's next for modern infrastructure?

Modern infrastructure is incredibly complex. I identified four main "levels" of infrastructure abstraction:

Level 1: A website on a server

This is the most straightforward way to host a website: a web server running on bare metal or a VM.

Level 2: Multiple servers behind a load balancer

At this stage, you start treating servers as cattle rather than pets. Servers may be spun up and down at will without influencing the availability of the application.

Level 3: An orchestrated cluster of servers

Instead of a server serving a specific purpose (e.g. webserver, DB server, etc.), a server becomes a worker for arbitrary workloads (see Kubernetes, ECS).

Level 4: Multicluster service mesh

If an organization manages multiple clusters (e.g. multiple application teams), they can be tied together into a service mesh to better optimize communication and observability.

Level 5: ???

History shows that we never stop abstracting. Multicluster service meshes are about the most abstract concept many people (including myself) can comprehend, but I doubt that this is the end of this journey. So, what's next for modern infrastructure?

This is post 049 of #100DaysToOffload.

DRAFT: Container Interfaces

There are a couple of interfaces that container orchestration systems (like Kubernetes) implement to expose certain behavior to their container workloads. I will only be talking about Kubernetes in this post since it's the orchestrator I'm most comfortable with, but some interfaces are also implemented in other orchestrators (like HashiCorp Nomad), which makes them cross-platform.

Container Storage Interface (CSI)

Storage behavior used to be built into Kubernetes. The Container Storage Interface (CSI) defines a unified interface to manage storage volumes, regardless of the orchestrator (as long as it implements the CSI). This makes it way easier for third-party storage providers to expose storage to Kubernetes: if a provider implements this interface, orchestrators can use it to provision volumes for containers.

A full list of CSI drivers can be found here.
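As a concrete example, a cluster operator exposes a CSI driver to users through a StorageClass. The AWS EBS CSI driver, for instance, registers itself under the provisioner name ebs.csi.aws.com:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3 # arbitrary name
provisioner: ebs.csi.aws.com # the AWS EBS CSI driver
parameters:
  type: gp3 # driver-specific parameter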

Container Runtime Interface (CRI)

TODO

Container Network Interface (CNI)

TODO

Dockerignore troubles

I used to create a .Dockerignore file next to my Dockerfile. After countless hours of ignoring the problems in my setup, I found out that the uppercase .Dockerignore doesn't get picked up by Docker on macOS. Only the lowercase .dockerignore is valid.

I didn't find official documentation on this, but I suspect it comes down to filesystem case-sensitivity differences between macOS, Linux and Windows. I don't remember why I got used to the .Dockerignore convention, but I swear I saw someone using it in the wild. Or it's my (un)logical reasoning that, because Dockerfile is uppercased, .Dockerignore should be uppercased as well.

Either way, stay away from .Dockerignore and stick to .dockerignore.

This is post 050 of #100DaysToOffload.

Designing resilient cloud infrastructure

As mentioned in a previous post, I'm currently finishing up building my first cloud infrastructure for a client at work. During the development, I learned a lot about designing components to be resilient and scalable. Here are some key takeaways.

One of the most critical ingredients of resilient infrastructure is redundancy. On AWS, you place your components inside a "region", such as eu-central-1 (Frankfurt) or us-east-1 (Northern Virginia). To further reduce the risk of an outage, each region is divided into multiple Availability Zones (AZs). The AZs of a region are usually located some distance apart from each other, so in case of a flood, a fire or a bomb detonating near one AZ, the other AZs should in most cases still be intact. You should have at least two, preferably three, replicas of each component spread across multiple Availability Zones in a region; this reduces the risk of downtime caused by an outage in any single zone.

Another way to ensure scalability and resilience for your database is to use Aurora Serverless v2. This database service is specifically designed for scalable, on-demand, and cost-effective performance. It scales itself up or down based on the workload, automatically adjusting database capacity to meet the demand of your application without manual intervention. Adding Serverless instances to an existing RDS cluster is also a seamless process.

In addition to switching to Aurora Serverless v2, read replicas for cache and database in a separate availability zone can act as a hot standby without extra configuration. Keep in mind that read replicas are only utilized when you explicitly use the read-only endpoint of a cluster. But even if you're only using the "main" cluster endpoint (and therefore just the primary instance), a read replica can promote itself to primary in case of a failover, which drastically reduces downtime.

When using Amazon Elastic Container Service (ECS), use Fargate as opposed to EC2 instances. Fargate is a serverless compute engine for containers that allows you to run containers without having to manage the underlying infrastructure, and it places tasks across availability zones, helping to keep your application available.

In conclusion, always ensure that there is more than one instance of each component in your infrastructure. There are also services on AWS that abstract away the physical infrastructure (Fargate, S3, Lambda) and use a multi-AZ pattern by default.


This is post 061 of #100DaysToOffload.

fix error handling

console.error node_modules\react-dom\cjs\react-dom.development.js:9747
The above error occurred in the component:
in ProjectCard (at ProjectCard.test.js:32)

  Consider adding an error boundary to your tree to customize error handling behavior.
  Visit https://fb.me/react-error-boundaries to learn more about error boundaries.

Single Page Applications on GitHub Pages

My latest project, sendpasswords.net is a Single Page Application deployed on GitHub Pages.

GitHub Pages is configured to host static HTML files without any bells and whistles. This means that if you try to fetch a document that's not the index, for example /foo, the server will try to load a file with that name.

By nature, SPAs consist of only a single HTML entry point (index.html in most cases), which is responsible for routing the user to the correct page if there are multiple paths. And here's the crux: if the user tries to load /foo directly, they will not land at the SPA entry point. Instead, they will see a 404 error.

The solution

When a page is not found, GitHub Pages automatically serves a file called 404.html, which we can use to our advantage. After building the application, simply copy the index.html to 404.html, as demonstrated by this commit. This will use index.html to serve the application on the root level, and 404.html to load the same app if the page doesn't exist as a file. Whether the index.html is still needed if there's already a 404.html is up to you. I left it in to make clear that this is just a workaround.
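In a CI build step, the workaround boils down to one extra line (the dist output directory is an assumption; adjust it to your build setup):

# Build the SPA, then duplicate the entry point as the 404 page.
npm run build
cp dist/index.html dist/404.html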

This is a well-known workaround, but I wanted to bring some extra awareness to it, since it's a problem I've run into a couple of times now. Happy SPAing!


This is post 069 (nice) of #100DaysToOffload.

Quality vs. Quantity

The amount and quality of the posts on this blog strongly depend on my mood. Sometimes I want to write about every thought that crosses my mind, leading to four or more average- to low-quality posts a week. I think I'm in one of these phases right now.

At other times, I feel like most of my thoughts are not worth the effort to write about, but the ones that I do write about often become the ones that I'm most proud of.

These phases act like opposing forces that coexist very well. Pumping out more thoughts of lesser quality frees up my mind for higher quality ones, and when I'm writing about higher quality thoughts, I get the urge to write more often, completing the circuit.

In the end, it's not about what you write about, but about the process of writing itself. Every post on a personal blog is a snapshot of your thoughts at a point in time, no matter if you're feeling qualitative or quantitative.


This is post 062 of #100DaysToOffload.

Work batch sizing

I've been playing Carcassonne a lot with my girlfriend recently. It's a board game about building cities, roads and farms, and each completed "project" earns you some amount of points. The twist is that there's only a limited number of tiles; once all tiles are used, the game is over and unfinished projects are discarded.

For the first couple of playthroughs, I tried to maximize my score by increasing the number of projects I actively had going. I'd start a new city or road whenever I could, thinking that the multipliers you sometimes get would pay off in the end. Boy, was I wrong.

Where I'm from, we have multiple sayings for this approach: "having too many irons in the fire" or "dancing at too many parties". I was too busy starting new projects instead of making actual progress.

A far better approach is to finish projects early, earning fewer points, but with a greater certainty that they will pay off. With every project you start, the likelihood of the other projects paying off decreases.

Keeping batch sizes small was a key concept of the lean manufacturing movement in the 1980s, and has since been adopted by the DevOps movement for the IT industry. If you want to learn more about this topic, you should check out The DevOps Handbook. It goes well beyond the basics of making IT processes more productive and efficient.

After realizing that small batch sizes are the key to success, I haven't lost a game of Carcassonne since. I hope you're not reading this, honey. 🤭


This is post 068 of #100DaysToOffload.

The role of a DevOps Engineer

The term "DevOps" can be interpreted in many different ways. It's often thrown around as a buzzword whenever somebody is talking about "what comes after development". Obviously, it's not just that. Or is it? It depends on whom you're talking to.

Although I only recently started my new role as a "DevOps Engineer", I'm still discovering what that term means to me. I just had a fruitful conversation with the DevOps lead of a client, who phrased the role in a very fitting way:

A DevOps Engineer doesn't push the button, they enable the developers to push the button themselves.

To me this role is fascinating, since it touches so many different aspects of software delivery.


This is post 065 of #100DaysToOffload.

Debugging ECS Tasks

I just had to debug an application on AWS ECS. The whole procedure is documented in more detail in the documentation, but I think it's beneficial (both for my future self and hopefully for someone out there) to write down the process in my own words.

First of all, you need access to the cluster via the CLI. In addition to the CLI, you need the AWS Session Manager plugin. If you're on macOS, you can install it via Homebrew:

brew install --cask session-manager-plugin

Next, you need to allow the task you want to debug to execute commands. Since I'm using Terraform, this was just a matter of adding the enable_execute_command attribute to the service:

resource "aws_ecs_service" "my_service" {
  name            = "my-service"
  cluster         = aws_ecs_cluster.my_cluster.id
  task_definition = aws_ecs_task_definition.my_task_definition.id
  desired_count   = var.app_count
  launch_type     = "FARGATE"
  enable_execute_command = true # TODO: Disable after debugging
}

You may also need to specify task and execution roles in the task definition:

resource "aws_ecs_task_definition" "my_task_definition" {
  family              = "my-task"
  task_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  execution_role_arn  = aws_iam_role.ecs_task_execution_role.arn  # <-- Add this

Make sure that this role has the correct access rights. There's a nice troubleshooting guide going over the required permissions.

If you had to make any modifications, make sure to roll out a new deployment with the fresh settings:

aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment

Now, you should be able to issue commands against any running container!

aws ecs execute-command --cluster my-cluster --task <task-id-or-arn> --container my-container --interactive --command="/bin/sh"

I hope this helps!


This is post 055 of #100DaysToOffload.

Migrating Homeassistant from SD to SSD

I finally got frustrated with the performance of my Raspberry Pi 4 running Homeassistant on an SD card, so I went ahead and got an SSD.

The migration was very easy:

  1. Create and download a full backup through the UI
  2. Flash Homeassistant onto the SSD
  3. Remove the SD card and plug the SSD into a USB 3.0 port of the Pi
  4. Boot
  5. Go through the onboarding procedure
  6. Restore Backup
  7. Profit

It worked like a charm! The speed has improved A LOT, and everything was set up as it should be.

...Until we turned on the lights in the living room. My ZigBee dongle, plugged into another USB port, wasn't able to communicate with the devices on the network.

After some digging around, I came across several threads stating that an SSD over USB 3.0 apparently creates a lot of interference for surrounding hardware, including my ZigBee dongle. The fix is simple: either get an extension cord for the dongle, or plug the SSD into a USB 2.0 port of the Pi. Since I didn't have an extension cord to get the dongle far enough away from the SSD, I went with the latter option for now. And that fixed it! The performance is worse than over USB 3.0, but still better than the SD card I used before. My next step will be to grab an extension cord from my parents. I'm sure they won't mind.

I hope this helps!


This is post 066 of #100DaysToOffload.


So, you want to win the lottery

The lottery is often used as a comparison for something that's far out of reach. But exactly how far out of reach is it?

1 in 302 Million

Apparently, you have a 1 in 302 million chance of winning the lottery. Well, that sounds like a lot... but how much is that exactly?

Imagine 66 bathtubs.

Now, imagine each of these bathtubs is filled to the brim with rice.

One of the grains of rice inside one of the bathtubs is painted gold. This is our jackpot.

Whenever you purchase a lottery ticket, imagine grabbing one grain of rice from this sea of bathtubs. Do you think you'd have a chance to find the golden grain?
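A rough sanity check (my own back-of-the-envelope figures, not from the source): assume a bathtub holds about 200 liters and a grain of rice occupies about 0.045 milliliters. Then:

200 L / 0.045 mL ≈ 4.4 million grains per bathtub
66 bathtubs × 4.4 million grains ≈ 290 million grains

which lands right in the neighborhood of that 1-in-302-million chance.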

Source: The Scam No One Sees


This is post 074 of #100DaysToOffload.

I won't buy a YubiKey

I have a YubiKey that I use for work, and I love using it. But I won't get one for my personal life.

I've been thinking about this for some time now, but I ultimately don't think the benefits outweigh the hassle of always carrying around another device that I risk losing or breaking.

A YubiKey provides a very good second factor, but so does my phone. My phone, just like a YubiKey, is locked behind another factor (a PIN or a biometric sensor), so my phone essentially is a YubiKey. You can argue that the authenticator app on my phone (Bitwarden) can be hacked, but I'm willing to take that risk when the alternative is having to reset all security measures on all accounts if I lose the key.

So, I'm not getting a YubiKey.


This is post 058 of #100DaysToOffload.

Instant dark theme

Thanks to Jackson's update to darktheme.club, I just came across a neat little CSS property that turns a mostly CSS-free document into a pleasantly dark site:

:root {
  color-scheme: light dark;
}

This will adjust all elements on the page to the color scheme preferred by the user - without any other custom styles! 🤯 It is also widely supported by browsers.
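If you'd rather not touch the CSS at all, the same hint can also be expressed in the document head:

<meta name="color-scheme" content="light dark">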

I've always been quite dependent on CSS frameworks for any project I'm starting. Going forward, I'd be interested to see how framework-less sites would feel using this property. If all else fails, there's always the awesome simple.css library, which you can slap on top of a raw document to make it pretty (and dark, if preferred) without using custom classes.


This is post 064 of #100DaysToOffload.

DRAFT: Ditching coffee

I love coffee. Many of us do. But I feel like I've become so addicted to it that it influences how I go about my day. I really don't want a drug to dictate how I'm living my life.

I usually drink one or two cups of coffee a day.

Undo on mobile phones

Maybe this is just wishful thinking from someone who grew up in front of a physical keyboard, but I sometimes wish that my phone had a global "undo" button.

I'm sure there's a reason why phones have evolved away from having such a feature, but sometimes, especially when editing text, it would be very useful to "just hit ctrl-z", or whatever it would be on a phone. Some apps do have such a button, but they explicitly had to place it there and craft a feature around it.

I can imagine Android and other OSes exposing an "undo" API that apps can implement to make this easier. A gesture well known to the user, similar to the "back" button or gesture on Android, would trigger an undo. For text, apps probably wouldn't even have to implement anything; the text box element of the UI framework should be capable of undoing changes. For anything more complex, the app decides what to do with the action.

Again, probably just wishful thinking, but I really want my undos back!


This is post 078 of #100DaysToOffload.

Make a List

My dad taught me this important lesson.

Whenever you're feeling stuck or you don't know what to do, make a list. The next step will often become clear.

Whether you're going shopping or feeling mentally overloaded, it helps to write down your thoughts in an actionable form. I don't care if it's in some fancy mobile app or on a napkin. Just make a list.

Lists are the answer to almost anything. And where they're not, a spreadsheet is.


This is post 070 of #100DaysToOffload.


Principles of DevOps: Introduction

I recently changed roles in my company, and I can officially call myself a "DevOps Engineer" now. But what does that really mean?

In an attempt to write down my thoughts about this topic, I'm starting a series of blog posts called "Principles of DevOps". I'm usually very bad at sticking to things, so I'm curious to see if this series will lead anywhere.

To collect the posts of this series, I created a tag called #PrinciplesOfDevOps. If you're reading this in the future, be sure to check out this tag to see all installments.

What is DevOps?

Let's kick off the series with a very basic question: What on earth is DevOps?

DevOps is often used as an inflationary term to describe "whatever comes after dev". This couldn't be further from the truth.

In the past, developers, operations, designers, QA and other stakeholders of an application were often implicitly trained to work in "silos". Once designers have finished their job, they pass their mockups to developers. When developers are done writing the application, they pass their code to operations, whose job it is to deploy it.

DevOps is a set of practices that aims to combine the work of project stakeholders to unite people, process, and technology in application planning, development, delivery, and operations. Although the term DevOps only consists of "Dev" and "Ops", it has since evolved to include design, quality assurance and security. You may have heard of "DevSecOps", which aims to incorporate more roles into the term, but "DevOps" seems to stick the best with most people.

What does a "DevOps Engineer" do?

I recently wrote a blog post about this: The role of a DevOps Engineer.

In short, the job of a DevOps Engineer is to reduce the friction between the stakeholders of a project. A colleague of mine explained this in a really good way:

A DevOps Engineer doesn't push the button, they enable the developers to push the button themselves.

Let's jump in!

I hope by now you have a vague sense of what DevOps is. Next up, I want to uncover the principles and practices of DevOps. Thanks for reading to the end!


This is post 072 of #100DaysToOffload.

I'm skeptical about sonic toothbrushes

After years of persuasion by my dentist, I finally got a sonic toothbrush. Indeed, my teeth feel very clean, but I honestly can't believe that they're supposed to be better than those old brushes with the wobbling heads.

DRAFT: Simplifying my publishing workflow

A personal goal of mine this year is to blog more often. My current flow to publish a blog post looks like this:

  1. Open up the repo for this website in a terminal
  2. Generate a new post from a template using a nifty script I wrote for this purpose
  3. Write!
  4. Commit and push the new file

The CI (GitHub Actions) will take care of building and deploying the changes.

Even though this process is fairly easy in theory, there are still some mental hoops that I have to jump through before I can start writing.

Notes on containerizing PHP applications

I was recently tasked with building a rudimentary infrastructure for a PHP application. Coming from a Node.js-driven world where every human and their grandmother has a blog post about containerizing applications, it was very interesting to see where PHP differs from other ecosystems.

One major gotcha for me was that PHP code is executed at request time, meaning each incoming request invokes the PHP interpreter anew. Most other languages have dedicated runtimes that handle incoming requests themselves. PHP's approach is very flexible and scalable, but it comes with the implication that a separate webserver has to call into the PHP interpreter when it needs to.

In Node.js (and most other languages), you can "just run the app", as demonstrated by this Dockerfile:

FROM node:18.14.2-alpine3.17 AS build

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm ci

COPY . .

EXPOSE 3000

CMD [ "node", "server.js" ]

PHP, on the other hand, is rarely used on its own. Most of the time, it needs a webserver alongside it:

FROM php:8.1-apache-bullseye

# <snip>

COPY . /var/www/html
WORKDIR /var/www/html

# <snip>

As you can see, I'm using the official PHP Docker image. The PHP maintainers know that adding a webserver alongside PHP is a very common pattern, so most variants of the image ship with one. In this example I'm using Apache, but we might as well use NGINX or some other webserver. There's also the option to use FPM as a FastCGI implementation, with the webserver running in a separate container.
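That FPM pattern might look roughly like this in Docker Compose (a sketch under assumptions: the paths and the nginx.conf are placeholders, and the NGINX config has to forward PHP requests to the FPM container):

# Sketch of the FPM pattern -- file paths are placeholders.
services:
  php:
    image: php:8.1-fpm
    volumes:
      - ./src:/var/www/html
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      # NGINX config must fastcgi_pass to php:9000
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php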

Grasping this took me some time, but after it clicked it made many things a lot clearer.


This is post 052 of #100DaysToOffload.

Software is not defined by the language it's written in

Rust is not just a programming language, it's also a status symbol. By now, it has kind of become a meme that people writing programs in Rust have to make explicit that "X is written in Rust".

How fast or safe the language is doesn't define how good the software is. Software in TypeScript can be just as good as software written in C, if written by the right people.

When starting a new project, try to focus on the domain of the problem and pick a language based on that. Don't decide on the language before you know what problem you're trying to solve. If the answer to this is always one option (like Rust), you might be in a bubble.


This is post 060 of #100DaysToOffload.

A software requirements checklist

I just found a great post on the Etsy Engineering blog suggesting a possible checklist for new product requirements. In reality, this checklist is very hard to fulfill, but it's a nice reminder of what a well thought out requirement could look like.

Scope

  • Is the feature meant to be very polished and finished or are we just trying to get user feedback as an MVP?
  • If we are running a MVP, is the current feature a true MVP? How can we simplify or cut scope?

Eligibility

  • What populations should be included or excluded from the experiment? When should users see this feature? (Which pages, signed in/signed out, mobile, desktop, etc.)
  • Where/when should bucketing occur?
  • Will the experiment conflict with any other experiments? Do the experiments need to run exclusively?
  • What countries should the experiment run in (can impact translations)?

A11Y

  • Is there any special accessibility work this feature will require? If extra work is anticipated, check in early with our a11y team.
  • When testing and developing, we should keep two users in mind - a keyboard user and a voice-over user. Do we need to add other code for these users?

Translations

  • Are there any strings to be translated that should be submitted ASAP?
  • Do we need to translate any labels for a11y?

Observability

  • How will we know that the feature is working? Are there existing graphs we can use or do we need new ones?
  • Should any of these metrics have a threshold or alerting?
  • Are we missing any key events to obtain user feedback?
  • How will we compare our control and variant?

Performance

  • Is there anything in my experiment that could degrade performance of the site?
  • Do I need an operational experiment to verify that I’m not impacting performance?

Error States

  • Do we have designs for loading states?
  • Do we have designs for unsuccessful requests and error handling?
  • Do we have informative logging when there are errors?

QA

  • What set of browsers and devices should we test our new feature against?
  • Which user perspectives do we need to test?

Ramping

  • What will our ramping strategy be?

This is post 056 of #100DaysToOffload.


What problem does Kubernetes solve?

This is a common question that many people (including me) ask themselves.

I recently came across a great post which explains the problem really well:

Kubernetes exists to solve one problem: how do I run m containers across n servers?

The post also nails the answer to how Kubernetes solves this problem:

It's a big abstract virtual computer, with its own virtual IP stack, networks, disk, RAM and CPU. It lets you deploy containers as if you were deploying them on one machine that didn't run anything else. Clusters abstract over the various physical machines that run the cluster.
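In Kubernetes terms, the "m containers" half of that question becomes a single declaration, and the scheduler decides which of the n nodes each replica lands on. A minimal sketch (names and image are examples):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web # example name
spec:
  replicas: 5 # the "m containers"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # example image
          # The scheduler spreads these replicas across the "n servers".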

I'd highly encourage you to read through the article if you want to learn more about why Kubernetes exists.

This is post 048 of #100DaysToOffload.

The Mind Rope Experiment

Close your eyes. Picture yourself standing at a ledge with a rope tied to another ledge that's 10 steps away. Now, step by step, slowly walk across this rope until you're at the other side.

I often use this experiment as an indication of how focused I am at a given moment. As easy as it sounds, I rarely make it to the other side. On some days, I can't even step onto the rope before my mind gets distracted and I have to start over. On other days, I may get a few steps in before falling off. With enough practice, however, I may get to the other side.

This technique also works very well for short meditation sessions. I close my eyes and try to get to the other side. Whenever I fall off, I try again until my mind is focused enough to walk the entire rope.

I'm curious: are you able to walk the entire rope, or are you also struggling to take control over this mindscape?


This is post 079 of #100DaysToOffload.

Older is often better

I'm guilty of buying shiny new things. After being unhappy with the Bluetooth connectivity of the OnePlus Nord I bought in December 2020, I bought a brand new Pixel 7 last December. I told myself that I would be using the OnePlus Nord for at least three years, preferably four, yet I gave in after two years due to an issue that could have been fixed with a pair of wired headphones. I'm now asking myself again: will I be able to use my new Pixel 7 for more than three years?

I just stumbled upon a blog post called "My long goodbye to Windows XP", in which the author explains that he is replacing his 2008 laptop running Windows XP with a "new" PC running Windows 7. He knew the ins and outs of his operating system, so why switch to a new one? Eventually he did switch, but to an OS that is already end-of-life.

I totally love this. If you're happy with what you've got, you shouldn't let a new feature dictate how to change your workflow. Does one really need the features introduced in some software/hardware/tool in the past year? Wouldn't it make sense to use the things that have been battle-tested for at least a few years?

As I said, I'm very guilty of living at the cutting edge of technology. Maybe it's time to slow down. I'm certain it would simplify a lot of things.


This is post 063 of #100DaysToOffload.
