
istio / istio


Connect, secure, control, and observe services.

Home Page: https://istio.io

License: Apache License 2.0

Shell 0.80% Python 0.08% Go 98.15% Makefile 0.37% Smarty 0.07% Dockerfile 0.01% Java 0.01% CSS 0.32% JavaScript 0.01% HTML 0.19%
api-management circuit-breaker consul enforce-policies envoy fault-injection kubernetes lyft-envoy microservice microservices nomad polyglot-microservices proxies request-routing resiliency service-mesh

istio's Introduction

Istio


Istio logo

Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.

  • For in-depth information about how to use Istio, visit istio.io
  • To ask questions and get assistance from our community, visit GitHub Discussions
  • To learn how to participate in our overall community, visit our community page

In this README:

  • Introduction
  • Repositories
  • Issue management

In addition, you'll find many other useful documents on our Wiki.

Introduction

Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.

Istio is composed of these components:

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement, and telemetry recording/reporting.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Istiod - The Istio control plane. It provides service discovery, configuration and certificate management. It consists of the following sub-components:

    • Pilot - Responsible for configuring the proxies at runtime.

    • Citadel - Responsible for certificate issuance and rotation.

    • Galley - Responsible for validating, ingesting, aggregating, transforming and distributing config within Istio.

  • Operator - This component provides user-friendly options to operate the Istio service mesh.

Repositories

The Istio project is divided across a few GitHub repositories:

  • istio/api. This repository defines component-level APIs and common configuration formats for the Istio platform.

  • istio/community. This repository contains information on the Istio community, including the various documents that govern the Istio open source project.

  • istio/istio. This is the main code repository. It hosts Istio's core components, install artifacts, and sample programs. It includes:

    • istioctl. This directory contains code for the istioctl command line utility.

    • operator. This directory contains code for the Istio Operator.

    • pilot. This directory contains platform-specific code to populate the abstract service model, dynamically reconfigure the proxies when the application topology changes, as well as translate routing rules into proxy specific configuration.

    • security. This directory contains security related code, including Citadel (acting as Certificate Authority), citadel agent, etc.

  • istio/proxy. The Istio proxy contains extensions to the Envoy proxy (in the form of Envoy filters) that support authentication, authorization, and telemetry collection.

  • istio/ztunnel. This repository contains the Rust implementation of ztunnel, the node proxy component of Istio's ambient mesh.

Issue management

We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:

  • Epic. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and are basically product-level things. Each issue is ultimately part of an epic.

  • Milestone. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we think the issue should get addressed.

  • Priority. Each issue has a priority which is represented by the column in the Prioritization project. Priority can be one of P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 says that the milestone cannot be considered achieved if the issue isn't resolved.


Cloud Native Computing Foundation logo

Istio is a Cloud Native Computing Foundation project.

istio's People

Contributors

andraxylia, ayj, bianpengyuan, costinm, douglas-reid, ericvn, esnible, frankbu, geeknoid, hanxiaop, howardjohn, hzxuzhonghu, istio-testing, jimmycyj, kyessenov, ldemailly, linsun, mandarjog, myidpt, nmittler, ostromart, ozevren, ramaraochavali, richardwxn, rshriram, sebastienvas, stevenctl, yangminzhu, ymesika, zirain

istio's Issues

Demo walkthrough

Need a simple polyglot demo app with a walkthrough doc that illustrates various features of Istio. The current proposal is to rework the (in)famous Bookinfo demo app from amalgam8 and add features such as ACLs, rate limits, etc. into the end-to-end story.

bookinfo tutorial documentation problems

I went through the https://istio.io/docs/demos/bookinfo.html instructions today.

istioctl list route-rule must be changed to istioctl get route-rule --output yaml.

There is a trademark usage concern. “rotten tomatoes” should be “The Rotten Tomatoes API” or similar.

I also must admit I found the following fragment difficult to understand as a beginner:

The addons yaml files contain services configured as type LoadBalancer. If services are deployed with type NodePort, start kubectl proxy, and edit Grafana’s Istio-dashboard to use the proxy. 

I believe this sentence implies that the addons configuration can be edited to use a NodePort. If that is the case, I strongly advise shipping Istio with addons that use LoadBalancer plus some kind of nodeport_addons for Minikube users; it is too hard to be editing YAML files. Furthermore, it might be better to give explicit instructions for kubectl proxy, explain how to find Grafana and the dashboard, and describe the edits that are being recommended.

Update bookinfo diagram

The bookinfo diagram is grossly out of date. It needs to show ingress, Envoy handling both ingress and egress traffic, the service mesh, and the Istio control plane.

BookInfo GATEWAY_URL resulting in connection refused

Commit: d75c67b
Fedora 23

Followed the https://istio.io/docs/samples/bookinfo.html instructions, only skipping the 'apply addons' step.

NAME                                        READY  STATUS   RESTARTS  AGE
details-v1-3876634233-n3wx6                 2/2    Running  0         27m
istio-ca-2343704264-3j184                   1/1    Running  0         31m
istio-ingress-controller-1227707491-6g33q   1/1    Running  0         31m
istio-manager-3903014443-psq3f              1/1    Running  0         31m
istio-mixer-2291004578-n7blt                1/1    Running  0         31m
productpage-v1-1048626081-fg644             2/2    Running  0         27m
ratings-v1-4140089007-9nmlb                 2/2    Running  0         27m
reviews-v1-743947990-24s48                  2/2    Running  0         27m
reviews-v2-1273544408-0jms0                 2/2    Running  0         27m
reviews-v3-1803140826-xn8hl                 2/2    Running  0         27m

All pods seem to be running ok, but when obtaining the ENDPOINT_URL I got connection refused in a browser and also using curl.

Document the required cluster size to execute the bookinfo demo

I had a 3-instance (default machine-size) GKE cluster and started to see scheduling failures after deploying. I was seeing:

Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason            Message
  ---------  --------  -----  ----               -------------  -------  ------            -------
  11m        2m        36     default-scheduler                 Warning  FailedScheduling  No nodes are available that match all of the following predicates:: Insufficient cpu (3)

I had to increase the node count to 4 to accommodate.

It would be preferable if the tutorial started with a "Before you begin" section telling me exactly what I need. I had to figure this out myself.

Update Slack for new logo, channels, org name

As discussed, we need to update Slack:

  • Need to update to use our new logo.
  • Need to rename to istio instead of istio-dev
  • Need to create channels for users
  • And finally once the above is done, need to update references to slack in istio.github.io

[install/kubernetes] applying istio-rbac.yaml fails on 1.6.2

I was following https://istio.io/docs/tasks/installing-istio.html, which says to run kubectl get clusterrole and, if it works, to run kubectl apply -f istio-rbac.yaml. However, this fails:

$ kubectl apply -f istio-rbac.yaml
error: error validating "istio-rbac.yaml": error validating data: couldn't find type: v1alpha1.ClusterRole; if you choose to ignore these errors, turn validation off with --validate=false
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

#hackathon

Cleanup the istio install directories

The istio install directory has become a confusing mess (istio-15.yaml, istio-16.yaml, istio-auth-15.yaml, istio-auth-16.yaml, istio-rbac.yaml). There's also a README.md in the install directory that is obsolete. Users will have no idea what to use where. The documented install task doesn't even work on IBM clusters anymore (needs rbac).

We need to clean up and explain in the README.md what it all is for. Then we need to update the Istio install instructions (https://istio.io/docs/tasks/installing-istio.html) to give a simple set of setup instructions that work.

Define schema for configs posted to TPR

We need to define the schema of the end-to-end config resource that the user would post to k8s TPRs. This would help us refine the demo steps (#3). The config includes several aspects of Istio (configuring the mixer, routing, etc.).

Define testing strategy for istio modules

Some modules depend on each other, and it's not clear where the tests should live in that case. We need to understand each module's dependencies and define a strategy.

Tracing Support

Latency Tracing, Functional Tracing and Trace Propagation

Rate Limiting by other metrics besides Request/Second

In scenarios where API responses vary in size, it would be ideal to apply rate limiting by metrics other than requests per second. Cloud providers often bill egress traffic through load balancers per GB of data, so the cost per API request can be hard to correlate with the actual infrastructure cost of the request/response when the response size varies.

Use cases (a byte-based limiter sketch follows the list):

  1. Bandwidth in bytes/second, for example for network capacity estimation or egress charges.
  2. Total bytes, for example a free API service tier capped at 100 MB.
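
For illustration, here is a minimal Go sketch of a token bucket measured in bytes rather than requests. The byteLimiter type, its parameters, and the numbers are invented for this example and are not part of Istio, Mixer, or Envoy.

package main

import (
	"fmt"
	"sync"
	"time"
)

// byteLimiter is a token bucket measured in bytes: it refills at `rate`
// bytes per second, up to a maximum of `burst` bytes.
type byteLimiter struct {
	mu     sync.Mutex
	rate   float64   // bytes added back per second
	burst  float64   // maximum bucket size in bytes
	tokens float64   // bytes currently available
	last   time.Time // last refill time
}

func newByteLimiter(rate, burst float64) *byteLimiter {
	return &byteLimiter{rate: rate, burst: burst, tokens: burst, last: time.Now()}
}

// allow reports whether a payload of n bytes fits in the current budget,
// deducting it from the bucket if so.
func (l *byteLimiter) allow(n int) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	now := time.Now()
	l.tokens += now.Sub(l.last).Seconds() * l.rate // refill since last call
	if l.tokens > l.burst {
		l.tokens = l.burst
	}
	l.last = now

	if float64(n) > l.tokens {
		return false // over budget: the caller would reject or delay the response
	}
	l.tokens -= float64(n)
	return true
}

func main() {
	// 1 MiB/s sustained, 4 MiB burst (illustrative values only).
	l := newByteLimiter(1<<20, 4<<20)
	fmt.Println(l.allow(2 << 20)) // true: fits within the initial burst
	fmt.Println(l.allow(3 << 20)) // false: the bucket is nearly drained
}

The same shape works for the "total bytes" tier use case by simply never refilling the bucket.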

Make e2e test framework shell/command functions variadic

i.run(fmt.Sprintf(...)) and Shell(fmt.Sprintf(...)) are a fairly common pattern in the e2e integration framework. It would be useful to make the underlying functions variadic and handle the Sprintf formatting instead of pushing it onto the caller.
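
A minimal sketch of what the variadic shape could look like, assuming a helper that shells out via sh -c; the implementation details here are illustrative guesses, not the framework's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// Shell builds the command line from a format string plus args and runs it
// through the shell, so callers no longer need to call fmt.Sprintf themselves.
func Shell(format string, args ...interface{}) (string, error) {
	command := fmt.Sprintf(format, args...)
	out, err := exec.Command("sh", "-c", command).CombinedOutput()
	return string(out), err
}

func main() {
	// Before: Shell(fmt.Sprintf("kubectl get pods -n %s", "default"))
	// After:
	out, err := Shell("echo kubectl get pods -n %s", "default")
	fmt.Println(out, err)
}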

bookinfo get ingress should use -o wide in the example

See istio/old_pilot_repo#635

# per doc:
ldemailly-macbookpro:bookinfo ldemailly$  kubectl get ingress 
NAME      HOSTS     ADDRESS            PORTS     AGE
gateway   *         104.196.224.1...   80        41m
# should be:
ldemailly-macbookpro:bookinfo ldemailly$  kubectl get ingress -o wide
NAME      HOSTS     ADDRESS                                         PORTS     AGE
gateway   *         104.196.224.151,35.185.193.212,35.185.196.158   80        41m

Rationalize the canonical attributes

PR #92 makes public our canonical attributes, but during review a few issues were pointed out for specific attributes that were out of scope of that PR (an illustrative sketch follows the list):

  • {source, target, origin}.namespace may be too k8s specific.
  • source.port is ephemeral and we may not be able to infer the port of the container making the request.
  • source.name may not be derivable, e.g. a k8s batch job. K8s does not natively have a concept of job or application either.
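
For illustration only, here is a hypothetical attribute bag for a single request using the canonical names discussed above; the values and the plain-map representation are invented and are not Mixer's actual data structures.

package main

import "fmt"

func main() {
	// One request's attributes, keyed by the canonical names discussed above.
	// The values and the flat map layout are invented for illustration.
	attrs := map[string]interface{}{
		"source.name":      "productpage-v1", // may not be derivable, e.g. for a k8s batch job
		"source.namespace": "default",        // arguably too k8s-specific
		"source.port":      34718,            // ephemeral; hard to infer for the calling container
		"target.namespace": "default",
	}
	fmt.Println(attrs)
}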

Improve INSTALL.md to help new users

This issue is multi-part.

I found the optional step to create the "Kubernetes namespace" hard to understand, so I skipped it. I would prefer my Istio components to be in istio-system instead of default, but I was afraid to try because 1) I have been running Istio in default for weeks and don't know what effects running it elsewhere would have, and 2) Frank's bookinfo included instructions for starting it in default, and when I asked him about that I didn't understand why he wanted it there.

The second problem is the optional add-ons. I would love to see Grafana and the control panel, but I have not been able to get anyone to show them to me. When I run the instructions the services don't come fully up; they stay in "pending" state. I suspect this is because I am on Minikube. An alternate path for Minikube users would be very helpful.

Pay for Slack

We should set up a paid account with Slack so that our messages are archived forever. As it is right now, stuff falls off the end of the world after a while.

cannot execute `istioctl`, 404 from istio manager endpoint

I am working through the bookinfo demo at https://istio.io/docs/tasks/request-routing.html. I stood up a cluster using minikube (v0.18.0) and successfully deployed all the istio components and the bookinfo demo components. The productpage displays as expected and stats are being tracked and displayed in grafana appropriately.

I exposed the istio-manager service as a NodePort and ran, for example, ./istioctl -m http://$(minikube ip):42422 get route-rules -o yaml. I get Error: received non-success status code 404 with message 404: Page Not Found.

How should I specify the istio manager address when executing istioctl outside the cluster? (BTW, I have NOT enabled auth).

getting started no longer works

Report by @costinm: in istio/docs/getting-started there are two YAML files, doc/hello-and-proxy.yaml and doc/frontend-and-proxy.yaml, along with instructions on how to deploy the frontend (NGINX) with and without Istio. Neither works 😞

Explain/reconcile mixer policies vs. proxy destination policies

Our current top level architecture doc (https://github.com/istio/istio/blob/master/ARCHITECTURE.md) says "The proxy delegates policy decisions to the mixer ...", but we also have a couple of policies (load balancing policy and circuit breakers) that the proxy (envoy) handles on its own. These two are, in fact, already supported by istioctl create destination-policy.

We need to figure out and then document how these two kinds of policies are different and why they are handled differently.

The first thing we need to do is understand why they are named as they are and come up with clear definitions. I can sort of imagine a definition for "destination policy" as something like "a policy that is executed at the destination of a service request". On the other hand, I'm not sure what the definition of "mixer policy" would or should be. In fact, what is the definition of "mixer" itself, and is mixer the best name for it?

Bookinfo demo does not work in RBAC-enabled cluster

This seems to be because the service account of productpage does not have the required permission to list the ingress extensions:

I0428 19:06:15.787751       1 queue.go:97] Work item failed (Waiting till full synchronization), repeating after delay 1s
E0428 19:06:16.721977       1 reflector.go:201] /home/jenkins/.cache/bazel/_bazel_jenkins/28b2a0a5386b3ff3ee1362d0e46f06af/bazel-sandbox/d8b6f713-567c-42bd-8aed-e92338e5c215-392/execroot/manager/bazel-out/local-fastbuild/bin/external/io_k8s_client_go/tools/cache/go_default_library.a.dir/k8s.io/client-go/external/io_k8s_client_go/tools/cache/reflector.go:96: Failed to list *v1beta1.Ingress: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default".: "Unknown user \"system:serviceaccount:default:default\"" (get ingresses.extensions)
I0428 19:06:16.787957       1 queue.go:97] Work item failed (Waiting till full synchronization), repeating after delay 1s
E0428 19:06:17.724059       1 reflector.go:201] /home/jenkins/.cache/bazel/_bazel_jenkins/28b2a0a5386b3ff3ee1362d0e46f06af/bazel-sandbox/d8b6f713-567c-42bd-8aed-e92338e5c215-392/execroot/manager/bazel-out/local-fastbuild/bin/external/io_k8s_client_go/tools/cache/go_default_library.a.dir/k8s.io/client-go/external/io_k8s_client_go/tools/cache/reflector.go:96: Failed to list *v1beta1.Ingress: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default".: "Unknown user \"system:serviceaccount:default:default\"" (get ingresses.extensions)
I0428 19:06:17.788290       1 queue.go:97] Work item failed (Waiting till full synchronization), repeating after delay 1s
E0428 19:06:18.726011       1 reflector.go:201] /home/jenkins/.cache/bazel/_bazel_jenkins/28b2a0a5386b3ff3ee1362d0e46f06af/bazel-sandbox/d8b6f713-567c-42bd-8aed-e92338e5c215-392/execroot/manager/bazel-out/local-fastbuild/bin/external/io_k8s_client_go/tools/cache/go_default_library.a.dir/k8s.io/client-go/external/io_k8s_client_go/tools/cache/reflector.go:96: Failed to list *v1beta1.Ingress: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default".: "Unknown user \"system:serviceaccount:default:default\"" (get ingresses.extensions)

Improve top-level README.md

I found the Istio documentation, starting at https://github.com/istio/istio/blob/master/README.md , difficult to follow.

Istio is a “service mesh” that claims to provide “a uniform way to manage and connect microservices.” The word “manage” is very general. I know Istio can manage access checking, load balancing, and quota, but not for example instance lifecycle.

The relationship between Istio and the underlying cluster management platform needs to be spelled out early so that someone new to the project can understand the separation of concerns between Istio and Kubernetes. In the istio/manager documentation it is revealed that the Istio Manager “provides an abstraction layer over the underlying cluster management platform, such as Kubernetes”. We should be informed of that sooner.

The reference implementation of Istio runs on Kubernetes. It would be nice if the top-level or Milestone Plan docs mentioned that MVP-1 and Alpha-1 are Kubernetes-only and gave a hint about whether the developers plan to implement support, or even hooks, for other cluster management platforms. The manager contains a platform/kube package and the current manager/cmd/root.go is married to it. There is no plug-in way I could implement platform/mesos and get istioctl to use it without providing a platform/* discovery mechanism.

Istio is a manager “handling system configuration, discovery, and automation.” These three concepts are very general! After looking at the manager I know it supports “route rules” for traffic between microservices. A very specific sentence would really help with the understanding of what the manager provides.

Istio is a mixer “supporting access checks, quota allocation and deallocation, monitoring and logging.” I need some help understanding this. Does “access checks” mean network security? Feature flags? Rate limiting? Does monitoring mean network traffic that is service-to-service and external-to-service, and is it tied to any particular monitoring package such as Graphite/Grafana? Does logging mean logs of authorization and quota violations to ElasticSearch? Does Istio provide a logging framework that I should configure my microservices to log upon? Some of this is in the architectural overview, but if the opening docs are frustrating few readers will reach them.

The proxy handles “service-to-service and external-to-service traffic”. What about service-to-external? The architecture overview says it mediates “inbound and outbound traffic for all Istio-managed services.” The implementation of Istio on Kubernetes works by pairing each unmodified microservice instance with a network proxy co-located in the same pod. For Istio developers the proxy is an important component, but would users ever see it outside of a debugging context?

The top-level documentation next lists the repositories. This is great, but again the vocabulary is very general and the reader has little idea what functions Istio provides and how the concerns are separated in this design. There is little comfort in knowing that the manager configures and propagates. I wanted a list of the types of configuration the manager is responsible for implementing, and how to interact with it. Does it provide a REST API? A CLI? What kinds of operations are exposed?
