
tuber's Introduction

tuber


Tuber is a CLI and server app for continuous delivery and cluster interactability.

So what does that mean? Server side:

  • New services are deployed to GCP through tuber
  • Change config inside your repo for how it is deployed
  • Canary releases that monitor sentry
  • Staging and prod versions of your service that differ only slightly, without drift
  • Autodeploy off certain branches on staging and prod
  • Automatic rollbacks of all your resources if just one of them fails during a release
  • Monitor different things per resource while deploying
  • Automatic cleanup of resources you no longer need
  • Automigrate rails apps
  • Easily adjust the levers of your resources
  • Slack channels per-app for release notifications
  • Review apps
  • Dashboard for doing all of the above

 

Here's the CLI side (most useful commands at least):

  • switch - tuber switch staging points you at a cluster
  • context - tell what cluster you're currently working with locally
  • auth - log in and get credentials (most commands use the running tuber's graphql api)
  • exec - interactive command runner (rails c, etc)
  • rollback - we cache whatever was applied in the last successful release; this reapplies it
  • pause - pause releases (resume resumes them)
  • apps info - everything tuber knows about your app
  • apps set - subcommands for every field
  • deploy -t - deploy a specific image rather than pulling latest
  • env - get, set, unset, and list env vars
  • apps install - deploy a new service in seconds.

 

All commands and subcommands support -h/--help, and we highly encourage using that as you learn your way around.

 

We do all of this with NO esoteric custom resources, and ALL of tuber's interaction with the cluster occurs through kubectl.

You can run our entire release process yourself just by copying the kubectl commands in the debug logs.

You need only the Kubernetes (and ok fine Istio) documentation to understand a tuber app's footprint.

So, welcome to "kubectl as a service" 🎉

 

 

Design Mentality

Tuber is built on a few core principles that will be helpful to understand.

Prod is Prod

While many pipeline solutions offer support for running staging versions alongside prod versions on the same cluster, Tuber intentionally does not.

Network security and permissions are alone enough reason to keep things separated not only by cluster, but by project.

The multi-cluster solutions designed to stay true to the "dashboard with a pipeline" look also typically depend on CD control living on a third, operations cluster to coordinate.

Tuber foregoes that look to encourage this separation, under the goal of "prod is prod".

As a result, "deploying to prod" with Tuber is never an act of "promotion" - rather, it's a separate process dedicated to prod and guided by version control, and a tuber server will never cross clusters or projects -- that's what pubsub's for.

 

Subtractive Staging Environments

Many platforms we've looked at fall into what we call "additive" environments, where the resources an app needs are added to it in each separate environment.

Examples include Addons and Dynos on Heroku, or even the separate directories of resources in a typical Helm solution.

This inevitably leans towards drift, and controlling the drift usually becomes an issue.

Tuber instead follows what we call "subtractive" staging environments, where production configuration is put forth as official, and staging environments can trim down or edit down from there.

 

Review Apps First

For apps that are complicated and quickly changing, local testing is a tough proof of success. Automated testing is a separate question entirely.

Much of Tuber's architecture is based around supporting Review Apps to solve this problem.

Tuber offers isolated, ephemeral test apps that are every bit as valid as the standard staging app they're created from.

 

Keep it simple

It's often tough to see just what a deployment pipeline is doing while it's doing it. It's also tough to see the real footprint of an app when deployed through a pipeline.

Many solutions rely on CRDs to power the configuration.

Tuber's take is that CRDs are fine if built-in resources are insufficient, and in the case of deploying an app to a cluster, built-ins are of course just fine.

This means 3 major things: no esoteric rules to perfect a yaml to tuber's satisfaction, a tuber app's footprint stays identical to what a manual deploy would produce, and the steps it takes when interacting with the cluster are exceptionally easy to track.

 

 

Project Status

We are pushing for Tuber to be more generally applicable, but it currently makes too many assumptions about how things outside your cluster are configured.

 

 

What's a Tuber App?

- TuberApp
  - Name
  - ImageTag
  - Vars
  - State
    - Current
    - Previous
  - ExcludedResources
  - SlackChannel
  - Paused?
  - ReviewApp?
  - ReviewAppsConfig
    - ReviewAppsEnabled?
    - Vars
    - ExcludedResources

So that's our data model. It's stored for each app in an in-memory database, loaded from a local file by the running tuber server.
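As a rough sketch, the model above could be expressed as Go structs. Only the field names come from the list above; the types (and the flattened State fields) are assumptions, not tuber's actual implementation:

```go
package main

import "fmt"

// Hypothetical shapes for the data model described above.
// Field types are guesses; only the field names come from the docs.
type ReviewAppsConfig struct {
	ReviewAppsEnabled bool
	Vars              map[string]string
	ExcludedResources map[string]string // Kind -> Name
}

type TuberApp struct {
	Name              string
	ImageTag          string
	Vars              map[string]string
	StateCurrent      []string // resources applied in the last successful release
	StatePrevious     []string
	ExcludedResources map[string]string // Kind -> Name
	SlackChannel      string
	Paused            bool
	ReviewApp         bool
	ReviewAppsConfig  ReviewAppsConfig
}

func main() {
	app := TuberApp{Name: "myapp", ImageTag: "gcr.io/my-project/myapp:abc123"}
	fmt.Println(app.Name, app.Paused)
}
```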

Below you'll find explanations for some of the less intuitive aspects of it all.

These should also give some context as to "why tuber is the way that it is" - these are crucial to how we can handle any app in any env:

 

Vars

Vars are custom interpolation variables. We use these all over the monolith's resources.

These are internally referred to as "App-Specific Interpolatables", or "ASIs" to the chagrin of everyone involved. It's accurate though.

Tuber offers the following Vars to every app's .tuber/ resources automatically:

{{ .tuberImage }}             <- the full image tag for a release
{{ .tuberAppName }}           <- the tuber app name (vital for review apps)
{{ .clusterDefaultHost }}     <- default hostname to offer to virtualservices
{{ .clusterDefaultGateway }}  <- default gateway powering the default host
{{ .clusterAdminHost }}       <- secondary hostname to offer to virtualservices, ideal for an IAP domain
{{ .clusterAdminGateway }}    <- secondary gateway powering the secondary host

Those use Go's template format for string interpolation (like Ruby's "#{hi}"), and are hard-coded like that in the resources in .tuber/.

Tuber interpolates those in every release based on its own context.
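To make that concrete, here's a minimal sketch of the interpolation step using Go's text/template, which is the format shown above. The manifest snippet and values are illustrative, not real tuber resources:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render sketches tuber's interpolation step: the yamls in .tuber/ are
// run through Go's text/template with the release's context as data.
func render(manifest string, data map[string]string) string {
	tmpl := template.Must(template.New("resource").Parse(manifest))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// Illustrative fragment of a .tuber/ resource.
	manifest := "image: {{ .tuberImage }}\nname: {{ .tuberAppName }}-web"
	fmt.Println(render(manifest, map[string]string{
		"tuberImage":   "gcr.io/my-project/myapp:abc123",
		"tuberAppName": "myapp",
	}))
}
```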

So if these "interpolatables" help us do review apps, and differentiate Staging vs Production, what's a Var?

It's anything specific to your app that needs interpolation to support reuse for review apps or different environments.

A clear example is something like {{ .sidekiqWorkers }}, which might differ between staging and prod.

You can make a Var for anything, and interpolate anything, including booleans and integers.

 

 

ExcludedResources

Sometimes Vars don't cut it. Sometimes you just need to cut entire resources out of a specific environment, or from a review app.

ExcludedResources is a hash of kubernetes Kind (Deployment, CronJob, etc), mapped to the Name of a resource.

When your app is deployed, any resources contained in .tuber/ matching these Kind/Name pairs will be skipped.

We also interpolate Exclusion names prior to comparison, so a resource coded as name: "{{.tuberAppName}}-canary" can be excluded with the same name, to support excluding that resource on review apps.
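The interpolate-then-compare check described above could be sketched like this (not tuber's actual code; we assume one excluded Name per Kind for simplicity):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// interpolate renders a Var-laden exclusion name before comparison,
// as the docs describe. Illustrative sketch only.
func interpolate(s string, vars map[string]string) string {
	var out bytes.Buffer
	if err := template.Must(template.New("name").Parse(s)).Execute(&out, vars); err != nil {
		panic(err)
	}
	return out.String()
}

// excluded reports whether a resource's Kind/Name pair matches an
// exclusion entry after interpolation.
func excluded(kind, name string, exclusions, vars map[string]string) bool {
	excludedName, ok := exclusions[kind]
	return ok && interpolate(excludedName, vars) == name
}

func main() {
	vars := map[string]string{"tuberAppName": "myapp-review-123"}
	exclusions := map[string]string{"Deployment": "{{.tuberAppName}}-canary"}
	fmt.Println(excluded("Deployment", "myapp-review-123-canary", exclusions, vars))
}
```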

It's a lot of setup, but that's the point.

If this makes for a bunch of work, you are doing that work to actively make staging different from prod.

We WANT that to be a bunch of work. If staging is different from production, it should be EXPLICITLY different.

 

 

ReviewAppsConfig

This specifies how an app's review apps should be created.

The field has its own Vars and ExcludedResources, which are merged with the top-level Vars and ExcludedResources.

Say you want myapp-demo to exclude deployment myapp-canary, and review apps created off of it to ALSO exclude deployment myapp-sidekiq.

This is how. The final merged lists of Vars and ExcludedResources are set on the review app, so if a particular review app later needs to change something about deployment myapp-sidekiq, you can DE-exclude it from that review app specifically, after it's created.

The Vars are the most useful here - for example, letting you specify lower CPU limits and memory limits for an app's review apps.
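The Vars merge described above could look like this. A caveat: which side wins on conflict is an assumption here (we assume the review-app value overrides the top-level one):

```go
package main

import "fmt"

// mergeVars layers a ReviewAppsConfig's Vars over the app's top-level
// Vars. The override direction is an assumption, not documented behavior.
func mergeVars(base, review map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range base {
		merged[k] = v
	}
	for k, v := range review {
		merged[k] = v
	}
	return merged
}

func main() {
	base := map[string]string{"sidekiqWorkers": "10", "cpuLimit": "2"}
	review := map[string]string{"cpuLimit": "500m"} // lower limits for review apps
	merged := mergeVars(base, review)
	fmt.Println(merged["cpuLimit"], merged["sidekiqWorkers"])
}
```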

 

 

Installation

Prerequisites

  • gcloud
  • kubectl

Homebrew

brew tap freshly/taps
brew install tuber

Scoop

scoop bucket add freshly https://github.com/freshly/scoops
scoop install tuber

Download binary

Download the binary file from the latest release: https://github.com/Freshly/tuber/releases/

tuber's People

Contributors

caelra, dbennett-freshly, dependabot[bot], goreleaserbot, hbbb, jefferson-faseler, jkminneti, llparse, mxygem, quinn, quinn-freshly


tuber's Issues

Encrypted Credentials in Source Control

Add encrypt and decrypt justfile definitions.

Something like this:

encrypt:
	cd {{ invocation_directory() }}; ../bin/encrypt credentials.json

decrypt:
	cd {{ invocation_directory() }}; ../bin/decrypt credentials.json.enc

pull credentials from cluster

🎉 jordan got his way

this would open the door to apps install resulting in a deployed app. or otherwise, using deploy like ever.

creds are used only for gcr, k8s rolebindings would still be in play

Custom secrets and configmaps

Right now, an end user can control anything about their namespace through the yamls in .tuber/. Custom resources, extra resources, whatever. Except for secrets and configmaps.

Secrets and configmaps and their data shouldn't be committed to code, but we should be able to tear down and recreate a tuberapp entirely from 1 - the resources tuber creates, and 2 - the resources in .tuber.

How do we do this? Not sure.

portforward command

kubectl port-forward -n someapp `kubectl get pods -n someapp -o jsonpath="{.items[0].metadata.name}"` 8080:8080

ill have none of that

add gateway

This has changed quite a bit, now that we know what we're doing with istio.

I think this is now "add cluster setup" commands - such that once you have a fresh cluster, tuber can istio-ify.

Is that best done through giving tuber access to istio-system, and tuberizing the tuber-farm? Or.. what else?

Config Get/Set/List Commands

Is there any other configuration that an app would need to read/write to besides env vars? If not, we can drop the env argument in the below command suggestions.

tubectl config set env NAME=VALUE
tubectl config get env NAME
tubectl config list env

Debating whether people even need to know that configs are stored as Kubernetes secrets as opposed to plain config maps. It seems reasonable to group all of these commands under the name config.

Can possibly migrate without deploying

If you're silly and merge migrations (or anything that prereleasers handle) at the same time as invalid yamls (or otherwise a failed deploy), prerelease will run but the deploy will not.

this kills the rails app

todo: wtf how does heroku get around this

Wait on status, on release

It's already a goroutine; we might as well add a wait for the success of a release.

If the new pods enter crashloop, or otherwise don't run successfully, we should know about it.

Hopefully we can query on the revision, not on created pods, though.

Rollback on errors

On errors during setup, created resources should be deleted. For create, definitely.

Tuber `apps` command

  • Add (adds app to the tuber-apps config map, doesn't do anything else)
  • Remove
  • Create
  • List

Exec command

Tentative plan:
Add exec.yaml, a blank pod with no command - then

  1. Exec applies it, but interpolates in a randomized name
  2. Waits for success
  3. Once successful, kubectl exec's with whatever command you passed in

except that won't work, WIP

Better Logging

  • Does everything need to log to stderr?
  • Log Levels
  • Always use zap, stop using the builtin logger
  • Use context?

Configure Cluster

  • Create tuber-apps kubernetes secret
  • Read credentials.json file into tuber-secrets

Maybe Terraform?

  • Grant service account permissions from the given cluster to the freshly-docker project
    • Pub/Sub Subscriber
    • Container Registry Reader

Some ideas for naming
tuber install <cluster-name>
tuber configure <cluster-name>
tuber setup <cluster-name>

gcr requests error output

mostly the 404 on pulling the latest image - it tells you it 404'd, but it's unclear that the actual issue is that the build can't be found

Add confirmation step to write commands

any command that alters anything on the cluster should have a confirmation.
any command that reads data and does not alter anything should not require confirmation.

print context name to STDERR.

Get Tuber running on Staging

Outstanding issues (that we know of):

  • response from container registry does not contain Docker-Content-Digest header
  • Add better error handling and logging around that ^
  • Add Sentry env vars to cluster
  • Add new subscription for staging tuber
  • Add env var for subscription name in both staging and eng-internal

Support Specifying Cluster `-c --cluster|context`

Require a cluster or context flag to specify which cluster to operate on. It's similar to Heroku's -a --app flag.

A poor man's version of this is to add a confirm step before running any commands that modify the state of the cluster.

Tuber Init Command

Tuberize an app

  • Create .tuber directory in project root
  • Create deployment.yaml
  • Create service.yaml
