
Traefik Mesh - Simpler Service Mesh

Home Page: https://traefik.io/traefik-mesh

License: Apache License 2.0

Go 98.29% Makefile 1.11% Dockerfile 0.60%
traefik mesh service-mesh service-mesh-interface traefik-mesh

mesh's Introduction

Traefik Mesh


Traefik Mesh: Simpler Service Mesh

Traefik Mesh is a simple, yet full-featured service mesh. It is container-native and fits as the de-facto service mesh in your Kubernetes cluster. It supports the latest Service Mesh Interface specification (SMI), which facilitates integration with pre-existing solutions. Moreover, Traefik Mesh is opt-in by default, which means that your existing services are unaffected until you decide to add them to the mesh.


Non-Invasive Service Mesh

Traefik Mesh does not use any sidecar container but handles routing through proxy endpoints running on each node. The mesh controller runs in a dedicated pod and handles all the configuration parsing and deployment to the proxy nodes. Traefik Mesh supports multiple configuration options: annotations on user service objects and SMI objects. Not using sidecars means that Traefik Mesh does not modify your Kubernetes objects and does not modify your traffic without your knowledge. Using the Traefik Mesh endpoints is all that is required.
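As a purely illustrative sketch of what opting in looks like from a client's point of view, here is a minimal Go program that calls a hypothetical whoami service in the default namespace through its Traefik Mesh endpoint; the service name, namespace, and port are assumptions for this example, and the .traefik.mesh domain is the one described in the documentation:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Instead of the usual cluster-local name (whoami.default.svc.cluster.local),
	// the client opts in by targeting the Traefik Mesh endpoint for the service.
	// Service name, namespace, and port are hypothetical.
	resp, err := http.Get("http://whoami.default.traefik.mesh:80/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}

Nothing about the Service object itself changes; the client simply uses the mesh hostname instead of the cluster-local one.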


Prerequisites

To run Traefik Mesh, you need a Kubernetes cluster with a supported cluster DNS provider (CoreDNS or kube-dns) and Helm v3; see the documentation for the currently supported versions.

Install (Helm v3 only)

helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik-mesh traefik/traefik-mesh

You can find the complete documentation at https://doc.traefik.io/traefik-mesh.

Contributing

Contributing guide.

mesh's People

Contributors

0rax, bavarianbidi, brennerm, carlpett, charlie-haley, chenrui333, dtomcej, emilevauge, ereslibre, frelon, geraldcroes, jlevesy, jspdown, kevinpollet, kevtainer, lbenguigui, ldez, matthieuh, mmatur, nbyl, newtondev, riker09, santode, skwair, tommoulard, traefiker, ullaakut, valerauko, walbertus, yekangming


mesh's Issues

Code de-dupe

There seems to be a bunch of duplicated code in the meshcontroller handler.

Look into using interfaces to deduplicate it; a rough sketch of the idea follows.
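A minimal Go sketch of the interface idea, with hypothetical names (the real handler types live in the meshcontroller package and may look different):

package main

import "fmt"

// objectHandler is a hypothetical interface that each typed handler
// (services, endpoints, SMI objects, ...) could satisfy, so the shared
// add/update/delete plumbing only has to be written once.
type objectHandler interface {
	Kind() string
	Sync(key string) error
}

// process is the deduplicated code path: it does not care which
// concrete handler it is driving.
func process(h objectHandler, key string) {
	if err := h.Sync(key); err != nil {
		fmt.Printf("failed to sync %s %q: %v\n", h.Kind(), key, err)
	}
}

// serviceHandler is one hypothetical concrete implementation.
type serviceHandler struct{}

func (serviceHandler) Kind() string { return "Service" }

func (serviceHandler) Sync(key string) error {
	fmt.Println("syncing service", key)
	return nil
}

func main() {
	process(serviceHandler{}, "default/whoami")
}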

Dashboard

Did we want a dashboard for this project? Or did we want to ship a deployable Prometheus + Grafana setup (like we do with our advocacy demos)?

Controller improvement/consolidation

Currently we use separate controllers for watching each type.

Once we get further along, we may not need separate controllers for each type, or we may be able to consolidate these controllers in a more efficient manner.

Tracing Implementation

How did we want to implement tracing?

Not sure how tracing is implemented in Traefik v2.

Features and Integration tests

The following features and integration tests are needed:

  • TCP Routing
    • Feature
    • Integration test
  • HTTP Routing
    • Feature
    • Integration test
  • Metrics - #65
    • Feature
    • Integration test
  • Tracing - #63
    • Feature
    • Integration test
  • Service load balancing - #64
    • Feature
    • Integration test
  • Routing Rules - This one could be combined with TCP and HTTP, since they will both use rules; however, this may also be covered as part of the traffic-routing portion of SMI
    • Feature
    • Integration test
  • Retries - #166
    • Feature
    • Integration test
  • Failover
    • Feature
    • Integration test
  • Access control
    • Feature
    • Integration test
  • Rate limit - #168
    • Feature
    • Integration test
  • circuit breakers - #167
    • Feature
    • Integration test

Separate clusterRoleBinding into rolebinding

Many of the operations in the controller RBAC are due to updates in the mesh namespace.

It might be nice to separate the mesh-namespace RBAC out into a RoleBinding and keep a much more minimal ClusterRoleBinding, since we don't need global permissions for everything in the cluster; a sketch follows.
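As a hedged illustration of the split, here is a Go sketch of the namespaced half using the standard rbacv1/metav1 types; the object names and the exact rules are hypothetical:

package mesh

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// meshRoleBinding sketches the namespaced half of the split: a RoleBinding in the
// mesh namespace granting the controller's service account a Role that covers only
// the operations performed there. The names are hypothetical; the remaining
// cluster-wide permissions would live in a much smaller ClusterRole/ClusterRoleBinding.
func meshRoleBinding(meshNamespace, serviceAccount string) *rbacv1.RoleBinding {
	return &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "mesh-controller", // hypothetical
			Namespace: meshNamespace,
		},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      serviceAccount,
			Namespace: meshNamespace,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     "mesh-controller-role", // hypothetical Role holding the namespaced rules
		},
	}
}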

IngressRouteTCP has no middlewares

This means that IP filtering and whitelisting are not options that we have available to us.

I am going to speak with @juliens and @ldez about extending this with our provider in our i3o traefik implementation.

Demo Data

Do we need to have demo data in the app anymore?

Metrics implementation

Once metrics are enabled in Traefik v2, we need to have them implemented in i3o. This will also be required for SMI.

SMI Traffic-Split Implementation

Traffic-split should be fairly easy to accomplish once we have weighting support in Traefik v2.

Until then, we could do a janky workaround where we add the same route multiple times, in proportion to the weights, to make the split work mathematically, but that is terrible (see the sketch below).
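For illustration only, a small Go sketch of that workaround; the names are hypothetical, and the output makes it obvious why this does not scale to fine-grained weights:

package main

import "fmt"

// backend is a hypothetical (service, weight) pair from a TrafficSplit.
type backend struct {
	service string
	weight  int
}

// gcd reduces the weights so we duplicate as few routes as possible.
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// expandRoutes turns weighted backends into a flat, repeated route list:
// weights 80/20 reduce to 4/1, so the first backend appears four times.
func expandRoutes(backends []backend) []string {
	d := 0
	for _, b := range backends {
		d = gcd(d, b.weight)
	}
	var routes []string
	for _, b := range backends {
		for i := 0; i < b.weight/d; i++ {
			routes = append(routes, b.service)
		}
	}
	return routes
}

func main() {
	fmt.Println(expandRoutes([]backend{{"whoami-v1", 80}, {"whoami-v2", 20}}))
	// Output: [whoami-v1 whoami-v1 whoami-v1 whoami-v1 whoami-v2]
}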

How to define static config for I3o mesh nodes

Some of the Traefik features (such as metrics) are configured statically.

Did we want to manage these via a ConfigMap (TOML file), or did we want to use the CLI for everything?

The CLI is much cleaner than having to hunt down ConfigMaps, and we don't have to worry about changes not propagating if we use the CLI.

Documentation

Eventually we are going to need documentation, because people.

Improve controller queues

Currently each controller has its own queue for processing events. We may want a single global queue so that event processing can be moved up to a higher-level controller (the mesh controller); a sketch of the idea follows.
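A minimal Go sketch of the single-queue idea, using a plain channel rather than the client-go workqueue to keep it self-contained; the event type and field names are hypothetical:

package main

import "fmt"

// event is a hypothetical work item: the kind of object that changed and its key.
type event struct {
	kind string // "Service", "Endpoints", "TrafficTarget", ...
	key  string // "namespace/name"
}

func main() {
	// One shared queue instead of a queue per typed controller.
	queue := make(chan event, 64)

	// Each watcher only enqueues; all processing happens in the mesh controller.
	queue <- event{kind: "Service", key: "default/whoami"}
	queue <- event{kind: "Endpoints", key: "default/whoami"}
	close(queue)

	// The mesh controller drains the queue and dispatches by kind.
	for ev := range queue {
		fmt.Printf("mesh controller processing %s %s\n", ev.kind, ev.key)
	}
}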

Misnamed Imports

Since we have been using sources from different codebases, we need to standardize our imports so that we don't duplicate names.

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1 "k8s.io/api/apps/v1"

Core pods all deleted at once

A regression was introduced in #34 that deletes all the core pods in the cluster.

This leads to total DNS outage for the entire cluster (including API).

I think it would be better to add a meta label to force a restart natively instead of deleting the pods.
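A hedged sketch of that approach: bumping an annotation on the pod template causes the owning controller to roll the pods gradually instead of us deleting them all at once. The annotation key below is hypothetical:

package main

import (
	"fmt"
	"time"
)

// restartPatch builds a strategic-merge patch that only bumps an annotation on the
// pod template. Applying it to a Deployment or DaemonSet makes its controller replace
// pods gradually via a rolling update, instead of us deleting every pod at once.
func restartPatch() []byte {
	return []byte(fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"mesh.traefik.io/restarted-at":%q}}}}}`,
		time.Now().Format(time.RFC3339),
	))
}

func main() {
	// The patch would be sent as a strategic-merge PATCH request, e.g. via client-go.
	fmt.Println(string(restartPatch()))
}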

Core Functionality

There are a few things that this controller needs:

  • Dockerization - #4
  • CI integration - #11
  • Tests - #29
  • Helm chart - #16

Functionality requirements missing:

  • Has to be able to CRUD Traefik CE KubernetesCRDs - #30
  • Has to be able to interact with SMI CRDs - #66

Nice to haves:

  • Be able to verify its own RBAC permissions to check if it has sufficient permissions - Not needed, since we do not want to grant access to RBAC; we delegate to the Helm chart to ensure correct permissions
  • Be able to run multiple replicas, and handle these CRUD operations atomically.

Define MVP features for v1 launch

We should define a set of features that we want for v1 launch, so that we can ensure we have the features ready, and tests written to ensure correct behavior for these features.

Using "=" in flags breaks kubeconfig

Using:

./i3o patch --kubeconfig ~/.kube/config --master 127.0.0.1:9000 --debug
INFO[0000] Building Kubernetes Client...                
INFO[0000] Building Kubernetes CRD Client...            
INFO[0000] Preparing Cluster...         

Works, but:

./i3o patch --kubeconfig=~/.kube/config --master=127.0.0.1:9000 --debug
FATA[0000] Error building clients: stat ~/.kube/config: no such file or directory 

Breaks. The shell only performs tilde expansion when ~ starts a word, so in the --kubeconfig=~/.kube/config form the literal string ~/.kube/config is passed to the binary, which then fails to stat it; a possible mitigation is sketched below.
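One possible mitigation, sketched as a hypothetical helper the CLI could call before building the clients, is to expand a leading ~ in the flag value itself:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// expandHome replaces a leading "~" with the current user's home directory, so a
// literal "~/.kube/config" passed via --kubeconfig=... still resolves. This is a
// hypothetical helper, not part of the current code.
func expandHome(path string) (string, error) {
	if path == "~" || strings.HasPrefix(path, "~/") {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		return filepath.Join(home, strings.TrimPrefix(path, "~")), nil
	}
	return path, nil
}

func main() {
	p, err := expandHome("~/.kube/config")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // e.g. /home/user/.kube/config
}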

Create Service Controller for Access Control

We need a service controller to manage Shadow services for access control.

This will allow an SMI implementation of Access Control, and may allow more features to be implemented in TrafficSplit, etc.

Update CoreDNS patching mechanism

Right now we do a string replacement to patch in our coreDNS config.

We should look at having our own server block that we can just append to the data.

This would allow us to be compatible with more systems without risk of breaking things, and would be less janky; a sketch of the append approach follows.
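A Go sketch of the append approach; the marker and the server-block body are placeholders, not the real configuration:

package main

import (
	"fmt"
	"strings"
)

const meshBlockMarker = "#### Begin Traefik Mesh Block" // hypothetical marker

// patchCorefile appends our own delimited server block to the existing Corefile data
// instead of string-replacing inside the user's configuration. Finding the marker
// tells us the Corefile is already patched.
func patchCorefile(corefile, meshBlock string) string {
	if strings.Contains(corefile, meshBlockMarker) {
		return corefile // already patched, nothing to do
	}
	return corefile + "\n" + meshBlockMarker + "\n" + meshBlock + "\n#### End Traefik Mesh Block\n"
}

func main() {
	existing := ".:53 {\n    forward . /etc/resolv.conf\n}\n"
	fmt.Println(patchCorefile(existing, "traefik.mesh:53 {\n    errors\n}"))
}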

SMI Traffic-Access-Control Implementation

So it appears that traffic access control via SMI is done by checking the service accounts of running pods and tying source service accounts to destination service accounts.

This is not difficult to do, but there are a bunch of ways we can accomplish it with traefik.

We can either do an IP whitelist, or we can do some other sort of filtering.

I have a few concerns:

  1. If we are running in a large network, we may run into issues with large whitelists/blacklists
  2. If we have a large dynamic network, there may be lots of changes that we have to watch for, and that may be a lot of load.

These concerns may not be big in reality, but they are something to think about; a rough sketch of the allowlist idea follows.
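A rough Go sketch of the IP-allowlist option, using plain data structures so nothing here is tied to the real controller code; the resulting IP list is what the concern about large, dynamic networks applies to, since it must be rebuilt on every pod change:

package main

import "fmt"

// pod is a hypothetical, minimal view of a running pod.
type pod struct {
	serviceAccount string
	ip             string
}

// allowlistFor collects the pod IPs of every source service account that a
// traffic-access-control rule allows to reach a destination. The resulting list
// could be fed into an IP whitelist middleware on the destination's routes.
func allowlistFor(allowedSources map[string]bool, pods []pod) []string {
	var ips []string
	for _, p := range pods {
		if allowedSources[p.serviceAccount] {
			ips = append(ips, p.ip)
		}
	}
	return ips
}

func main() {
	pods := []pod{
		{serviceAccount: "client", ip: "10.0.0.12"},
		{serviceAccount: "client", ip: "10.0.0.13"},
		{serviceAccount: "untrusted", ip: "10.0.0.99"},
	}
	// Sources allowed by a hypothetical TrafficTarget for the destination service.
	fmt.Println(allowlistFor(map[string]bool{"client": true}, pods))
	// Output: [10.0.0.12 10.0.0.13]
}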
