
Kuadrant Operator


The Operator to install and manage the lifecycle of the Kuadrant components deployments.

Overview

Kuadrant is a re-architecture of API management built on Cloud Native concepts: its components are loosely coupled, reusable, and leverage the underlying Kubernetes platform. It aims to deliver a smooth experience to providers and consumers of applications & services when it comes to rate limiting, authentication, authorization, discoverability, change management, usage contracts, insights, and more.

Kuadrant aims to produce a set of loosely coupled functionalities built directly on top of Kubernetes. Furthermore, it strives to provide only what Kubernetes doesn’t offer out of the box; that is, Kuadrant won’t design a new gateway/proxy, but will instead connect with what’s already there and what’s being developed (think Envoy, Istio, Gateway API).

Kuadrant is a system of cloud-native k8s components that grows as users’ needs grow.

  • From simple protection of a Service (via AuthN) used by teammates working on the same cluster, or by “sibling” services, up to AuthZ of users via OIDC plus custom policies.
  • From no rate limiting, to rate limiting for global service protection, to rate limiting by users/plans.

Architecture

Kuadrant relies on Istio and the Gateway API to operate the cluster's (Istio) ingress gateway, providing API management with authentication (authN), authorization (authZ) and rate limiting capabilities.

Kuadrant components

Component Description
Control Plane The control plane takes the customer's desired configuration (declared as Kubernetes custom resources) as input and ensures all components are configured to obey the customer's desired behavior.
This repository contains the source code of the Kuadrant control plane.
Kuadrant Operator A Kubernetes Operator to manage the lifecycle of the Kuadrant deployment
Authorino The AuthN/AuthZ enforcer. Acts as the external Istio authorizer (an Envoy external authorization gRPC service)
Limitador The external rate limiting service. It exposes a gRPC service implementing the Envoy Rate Limit protocol (v3)
Authorino Operator A Kubernetes Operator to manage Authorino instances
Limitador Operator A Kubernetes Operator to manage Limitador instances
DNS Operator A Kubernetes Operator to manage DNS records in external providers

Provided APIs

The Kuadrant control plane owns the following Custom Resource Definitions (CRDs):

CRD Description Example
AuthPolicy CRD [doc] [reference] Enable AuthN and AuthZ based access control on workloads AuthPolicy CR
RateLimitPolicy CRD [doc] [reference] Enable access control on workloads based on HTTP rate limiting RateLimitPolicy CR
DNSPolicy CRD [doc] [reference] Enable DNS management DNSPolicy CR
TLSPolicy CRD [doc] [reference] Enable TLS management TLSPolicy CR

Additionally, Kuadrant provides the following CRDs:

CRD Owner Description Example
Kuadrant CRD Kuadrant Operator Represents an instance of kuadrant Kuadrant CR
Limitador CRD Limitador Operator Represents an instance of Limitador Limitador CR
Authorino CRD Authorino Operator Represents an instance of Authorino Authorino CR

Kuadrant Architecture

Getting started

Pre-requisites

Installing Kuadrant

Installing Kuadrant is a two-step procedure: first, install the Kuadrant Operator; then request a Kuadrant instance by creating a Kuadrant custom resource.

1. Install the Kuadrant Operator

The Kuadrant Operator is available in public community operator catalogs, such as the Kubernetes OperatorHub.io and the OpenShift Container Platform and OKD OperatorHub.

Kubernetes

The operator is available from OperatorHub.io. Just go to the linked page and follow the installation steps (or run these two commands):

# Install Operator Lifecycle Manager (OLM), a tool to help manage the operators running on your cluster.

curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.23.1/install.sh | bash -s v0.23.1

# Install the operator by running the following command:

kubectl create -f https://operatorhub.io/install/kuadrant-operator.yaml

OpenShift

The operator is available from the OpenShift Console OperatorHub. Just follow the installation steps, choosing the "Kuadrant Operator" from the catalog:

Kuadrant Operator in OperatorHub

2. Request a Kuadrant instance

Create the namespace:

kubectl create namespace kuadrant

Apply the Kuadrant custom resource:

kubectl -n kuadrant apply -f - <<EOF
---
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant-sample
spec: {}
EOF

Protect your service

If you are an API Provider

  • Deploy the service/API to be protected ("upstream").
  • Expose the service/API using the Kubernetes Gateway API, i.e. an HTTPRoute object.
  • Write and apply Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the HTTPRoute resource to have your API protected (see the sketch after these lists).

If you are a Cluster Operator

  • (Optionally) deploy an Istio ingress gateway using the Gateway resource.
  • Write and apply Kuadrant's RateLimitPolicy and/or AuthPolicy custom resources targeting the Gateway resource to have your gateway traffic protected.
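As an illustration, a RateLimitPolicy targeting a hypothetical toystore HTTPRoute could look roughly like the sketch below. The route name, limit name and values are made up, and field names may differ between Kuadrant versions; consult the RateLimitPolicy reference for the exact schema.

apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: toystore
spec:
  targetRef:                      # Gateway API policy attachment
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: toystore                # hypothetical HTTPRoute name
  limits:
    "global":
      rates:
      - limit: 5                  # allow 5 requests...
        duration: 10              # ...every 10...
        unit: second              # ...seconds

The same targetRef pattern applies to an AuthPolicy, and to targeting a Gateway instead of an HTTPRoute.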

User guides

The user guides section of the docs gathers several use cases as well as instructions to implement them using Kuadrant.

Documentation

Docs can be found on the Kuadrant website.

Contributing

The Development guide describes how to build the kuadrant operator and how to test your changes before submitting a patch or opening a PR.

Join us on the #kuadrant channel in the Kubernetes Slack workspace, for live discussions about the roadmap and more.

Licensing

This software is licensed under the Apache 2.0 license.

See the LICENSE and NOTICE files that should have been provided along with this software for details.


kuadrant-operator's Issues

[investigation] Using Istio/Gateway API to manage envoy filterchains per domain (or wildcard domain) name

For HTTP-level rate limiting using the Envoy Rate Limit filter, there is a limitation on the rate limit domain field. It has been described in the Kuadrant API Protection doc and in this Envoy issue.

One potential workaround to this limitation would be the use of TLS instead of plain HTTP. Envoy allows specifying multiple filter chains per listener. Each filter chain has a set of match criteria defined in config.listener.v3.FilterChainMatch. The criteria are limited to the TCP level, so the traffic domain (or hostname) can only be used as a matching criterion via SNI when using the TLS protocol.

Investigate how to define multiple Envoy filter chains, one per domain, so there is one rate limit filter per traffic domain with the rate limit domain set to that traffic domain. The Istio/Gateway APIs should be used. Specifically:

CI/CD workflows

We want to improve automation in all repos for the Kuadrant components. We're aiming for:

  1. good coverage of automation tasks related to code style, testing, CI/CD (image builds, releases), etc.
  2. consistency across components
  3. automation as manageable code – i.e. fewer mouse clicks across scattered UI "settings" pages and more GitOps, more YAML hosted as part of a code base.

As part of a preliminary investigation (#21) of the current state of such automation, the following desired workflows and corresponding status for the Kuadrant controller repo were identified. Please review the list below.

1 Currently configured in Quay instead of GHA.

Workflows do not have to be implemented exactly as in the list. The list is just a driver for the kind of tasks we want to cover. Each component should assess it as it makes sense considering the component's specificities. More details in the original epic: #21.

You may also want to use this issue to reorganize how current workflows are implemented, thus helping us make the whole thing consistent across components.

For an example of how Authorino and Authorino Operator intend to organise this for Golang code bases, see respectively Kuadrant/authorino#351 (comment) and Kuadrant/authorino-operator#96 (comment).

RateLimitPolicy controller reconcile logic hardening

The RateLimitPolicy API and how it is referenced by networking resources are constantly evolving, so hardening work can sometimes be wasted when the API or the network references change. As of today (HEAD is Kuadrant/kuadrant-controller@8317588), I have identified some hardening work needed to provide a consistent UX when using the Kuadrant API. Instead of working on it now, as it may take quite some time, I will drop my findings here, along with a proposal for a robust reconciliation logic covering all use cases.

Known issues

  • RateLimitPolicy spec is not correctly reconciled.
    • If you update, delete, or create rate limit actions, the change will not take effect in the associated EnvoyFilters.
    • If you delete Limitador's rate limit entries, the associated RateLimit CRs are not deleted. The implementation that encodes an index in the name of the RateLimit CRs should be refactored to something that allows matching RateLimit CRs with the rate limit entries in the RLP.
  • Handle removal of the networking resources Virtual Service / HTTPRoute.
    • When the VS is removed, the RLP controller gets an event. However, the controller tries to read the VS object and fails because it is not found, so the annotation is never removed from the RLP and the "detaching from network" job is never done.
  • Watch for reconciled resources like AuthorizationPolicies, EnvoyFilters and RateLimits. If they are deleted or updated, the controller will be notified and react. The controller may allow the changes or revert them, depending on the policy implemented.
  • Reconcile when network resources (VS / HTTPRoute) update their gateway references.

Proposal

Trying to follow the controller pattern, the RLP controller should be heavily refactored. The pattern tells us to read desired state, read existing state, then build (and execute) a list of actions to make the existing state look like the desired state. Always with the mindset of being idempotent, as the controller should expect reconciliation loops to happen even when the existing state is equal to the desired state.

The reconciliation of RateLimit objects is "simpler", as it depends entirely on the spec of the RLP object. However, it still needs refactoring: the implementation that encodes an index in the name of the RateLimit CRs should be replaced with something that allows matching RateLimit CRs with the rate limit entries in the RLP.

Reconciling the EnvoyFilter is more complex than reconciling the RateLimit objects. The EnvoyFilter (EF) resource is built from information in the networking resource (host and gateways) and information in the RLP spec. So, in order to catch all events that affect the .spec of the EF, the RLP controller should watch networking resources with a mapper function that converts networking-resource events into RLP events using the annotation referencing the RLP object.

On each RLP controller reconciliation event, the logic to reconcile EFs is:

  • Build a list of networking resources that reference the current RLP.
    • This may imply client.List operations, so I suggest adding a kuadrant label to the networking resources to filter the list operation.
  • With the list of networking resources (and associated Gateways), build a desired list of EnvoyFilters.
  • Read the list of EF resources created by the RLP controller. This is the "existing state".
    • This may require a label on the EF with a back reference to the RLP object, a kind of "owner ref" (see the sketch after this list).
  • Compare the existing list of EFs with the desired list and create/execute a list of actions:
    • Create EF objects that are in the desired list but not in the existing one.
    • Update EF objects that are in both lists if they differ (the meaning of "equal" is to be defined).
    • Delete EF objects that are in the existing list but not in the desired one.
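To make the "existing state" easy to read without scanning every EnvoyFilter in the cluster, the back reference mentioned above could be a label on the EF metadata. The label keys and names below are purely illustrative, not something defined by this proposal:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: rlp-toystore-ratelimit                        # illustrative naming only
  namespace: istio-system
  labels:
    kuadrant.io/ratelimitpolicy-name: toystore        # back reference to the RLP
    kuadrant.io/ratelimitpolicy-namespace: default
spec: {}                                              # filter patches omitted

The RLP controller could then list EnvoyFilters with that label selector to obtain the existing state and diff it against the desired list.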

Fix settings generation in tests

In some integration tests we rely on random namespace generation, which makes it hard to test dependent services that expect an env var value to be already set up.

Support OpenShift Service Mesh

OpenShift Service Mesh is Red Hat's downstream product based on Istio. The most relevant difference compared to upstream Istio is that it doesn't use IstioOperator but instead defines its own resources, notably ServiceMeshControlPlane, which deploys the mesh, and ServiceMeshMember/ServiceMeshMemberRoll, which control which namespaces are part of the mesh.

This means:

  • Kuadrant cannot interact with it as the IstioOperator resource is missing (#103)
  • I am not sure if it makes sense to bind Kuadrant to ServiceMeshControlPlane as the mesh and gateways can be spread across multiple namespaces

CI/CD workflows

We want to improve automation in all repos for the Kuadrant components. We're aiming for:

  1. good coverage of automation tasks related to code style, testing, CI/CD (image builds, releases), etc.
  2. consistency across components
  3. automation as manageable code – i.e. fewer mouse clicks across scattered UI "settings" pages and more GitOps, more YAML hosted as part of a code base.

As part of a preliminary investigation (#21) of the current state of such automation, the following desired workflows and corresponding status for the Kuadrant Operator repo were identified. Please review the list below.

Workflows do not have to be implemented exactly as in the list. The list is just a driver for the kind of tasks we want to cover. Each component should assess it as it makes sense considering the component's specificities. More details in the original epic: #21.

You may also want to use this issue to reorganize how current workflows are implemented, thus helping us make the whole thing consistent across components.

For an example of how Authorino and Authorino Operator intend to organise this for Golang code bases, see respectively Kuadrant/authorino#351 (comment) and Kuadrant/authorino-operator#96 (comment).

Kuadrant services settings

Currently there are some services, e.g. Limitador, for which the Operator tells the Gateway where they have been deployed (name, namespace, service port, etc.), and many others that rely on defaults and hardcoded settings.
Ideally there would be a unified interface (Kuadrant CR, params, env vars, etc.) to avoid hardcoded values and split ways of configuration.

Kuadrant instance only works in the `kuadrant-system` namespace

When PR #48 is merged, Kuadrant as a system will not work unless the namespace where Kuadrant is installed is kuadrant-system. This issue comes from the fact that the policy controllers are no longer deployed as components of a Kuadrant instance. Instead, the controllers live in the Kuadrant operator's pod and are up and running even if there is no Kuadrant instance in the cluster.

The root cause of this side effect is the design of how backend services were wired to the rate limit / auth policies. As designed, it allowed a Kuadrant instance to be installed in any namespace; however, it only allowed one Kuadrant instance to run in the cluster. When the controllers were moved to the operator's pod, one of the design's requirements (the controllers knowing at deploy time, via env vars, where Kuadrant's backend services live) was no longer met, causing the issue.

This issue is to track the fix, as it seems desirable that Kuadrant can be installed in any namespace.

Related, but not in the scope of this issue, is the decision on whether Kuadrant supports multiple instances in a single cluster. That would require defining how policies are wired to the backend services (Limitador for rate limiting and Authorino for the authN/authZ services). Once it is clear how policies are linked to the backend services, the fix for this issue should not be hard.

Better support for automated security scans

We're using distroless base images to build the project. This is good for optimization of the container images but won't work for security scans such as the ones run by Quay Security Scanner (based on Clair).

docker scan, which relies on Snyk, is a good alternative for local environment and CI. Nevertheless, we likely want something that is supported by Quay.io as well.

Merge kuadrant-controller into kuadrant-operator

After discussing the path to follow in this issue, and offline, we decided that the best would be to merge the features of the Kuadrant Controller within this operator.

To make this happen, a couple of steps need to be followed to successfully integrate rate limiting, authN/authZ and the gateway shims, among other Kuadrant capabilities, and keep the history of the "source" repo for historical purposes.

The main objective would be that the Kuadrant Operator, once installed in a cluster and after a Kuadrant CR is applied, installs and configures the Kuadrant dependencies for the later use of AuthPolicies and RateLimitPolicies. The latter CRs should set up the Kuadrant services the same way they do with the Kuadrant Controller today, but without that extra control plane layer. This means the provided user guides should keep working.

Configure authorino deployment for API Key authN consistently with secrets labels

After PR Kuadrant/authorino#179, Authorino will update the secret label selector env var and fix the secret controller reconciliation. As a consequence, the Kuadrant controller should adapt the way it integrates with Authorino. Currently, Authorino is not deployed with any custom secret label selector, hence the default selector is used: secrets must contain the following label key

authorino.3scale.net/managed-by

If a Kuadrant user deploys a secret without that label, which is not even mentioned in the Kuadrant docs, Authorino will not be aware of changes made to the secret. Thus, changing the secret will have no effect and Authorino will keep validating the old key.

The labeling recommended by Authorino is:

SECRET_LABEL_SELECTOR ⊂ AuthConfig.spec.identity.apiKey.labelSelectors ⊂ Secret.metadata.labels

Which translates to:

1. The Kuadrant controller will deploy Authorino with SECRET_LABEL_SELECTOR => secret.kuadrant.io/managed-by: authorino
2. The Kuadrant documentation will recommend users write the API key auth part of the APIProduct spec (spec.securityScheme.apiKeyAuth.credential_source.labelSelectors) as:

secret.kuadrant.io/managed-by: authorino
app: MYAPP

3. The Kuadrant documentation will recommend users label their API key secrets as:

secret.kuadrant.io/managed-by: authorino
app: MYAPP
other labels: other values
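For illustration, an API key Secret labeled as recommended above might look like the following sketch (the secret name, app value and key material are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: myapp-api-key-1                        # placeholder name
  labels:
    secret.kuadrant.io/managed-by: authorino   # matches SECRET_LABEL_SELECTOR
    app: MYAPP                                 # matches the AuthConfig labelSelectors
type: Opaque
stringData:
  api_key: REPLACE_ME                          # placeholder, not a real key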

Improved status for RateLimitPolicy

What

The RateLimitPolicy status should aggregate the status of the WASMPlugin, any EnvoyFilters, the HTTPRoute and also the Limitador RateLimit resource. The RateLimitPolicy API is the end-user-facing API, and it should communicate the current state of the rate limiting setup well.

Initially the status can reflect whether the RateLimit, EnvoyFilter and WasmPlugin have been created and whether the HTTPRoute is accepted. Possibly we want to add some new conditions; as we are interacting with two different systems, perhaps a condition for Gateway and a condition for RateLimitService, plus a general Ready condition. Alternatively, we could use the Ready condition with specific messages to communicate. Our status block for AuthPolicy should follow the same pattern we decide on here:

status:
  conditions:
    - lastTransitionTime: "2019-10-22T16:29:24Z"
      status: "True"
      message: "gateway configured correctly"
      type: Gateway
    - lastTransitionTime: "2019-10-22T16:29:24Z"
      status: "True"
      message: "rate limit service configured correctly"
      type: RateLimitService
    - lastTransitionTime: "2019-10-22T16:29:24Z"
      status: "True"
      message: "rate limiting available"
      type: Ready

RFC Process

Summary

This is an attempt at streamlining and formalizing the addition of features to Kuadrant and its components: Authorino, Limitador and possibly more to come. It describes a process, Request For Comments (RFC), that contributors to Kuadrant would follow in order to add features to the platform. The process aims at enabling the teams to deliver better-defined software with a better experience for our end users.

Motivation

As I went through the process of redefining the syntax for Conditions in Limitador, I found it hard to seed people's minds with the problem space as I perceived it. I started by asking questions on the issue itself, which didn't get the traction I had hoped for until the PR was eventually opened.
This process should help the author consider the proposed change in its entirety: the change, its pros & cons, its documentation and the error cases, making it easier for reviewers to understand the impact of the change being considered.
Furthermore, this keeps a written record, a decision log, of how a feature came to be. It would help the ones among us who tend to forget about things, and would be of immense value for future contributors wanting to either understand a feature deeply or build upon certain features to enable a new one.

Guide-level explanation

A contributor would start by following the template for a new Request For Comments (RFC), eventually opening a pull request with the proposed change explained. At that point it automatically becomes a point of discussion for the next technical discussion weekly call.
Anyone is free to add ideas, raise issues or point out possible missing bits in the proposal on the PR itself before the call. The outcome of the technical discussion call is recorded on the PR as well, for future reference.
Once the author feels the proposal is in good shape and has addressed the comments provided by the team and community, they can label the RFC as FCP, entering the Final Comment Period. From that point on, there is another week left for commenters to express any remaining concerns. After that, the RFC is merged and moves into active status, ready for implementation.

Reference-level explanation

Creating a Kuadrant/rfcs repository, with the README below and a template to start a new RFC from:

  • See the README.md and 0000-template.md files below for more details.

Drawbacks

The process proposed here adds overhead to the addition of new features to our stack. It will require more upfront specification work. It may require doing a few proofs of concept during the initial authoring, to enable the author to better understand the problem space.

Rationale and alternatives

Until now, investigations have been less formal, and I'm unsure how much of their value got properly and entirely captured. Formalizing the process and having a clear outcome (an implementable piece of documentation that addresses all aspects of the user's experience) looks like a better result.

Prior art

The entire idea isn't new. This very proposal is based on prior art by rust-lang and pony-lang. This process isn't perfect, but has been proven over and over again to work.

Unresolved questions

  • A week for the FCP seems a lot and very little at the same time… should we revisit this?
  • Is having two core team members accept an RFC… acceptable? Should it be more? Less?
  • Should this all go under Kuadrant/rfcs?
  • What does it mean for kcp-glbc?

Future possibilities

I certainly see this process itself evolving over time. I like to think that this process can itself support its future changes…


  • README.md

Kuadrant RFCs

The RFC (Request For Comments) process aims at providing a consistent and well-understood way of adding new features or introducing breaking changes to the Kuadrant stack. It provides a means for all stakeholders and the community at large to give feedback and be confident about the evolution of our solution.

Many, if not most, changes will not require following this process. Bug fixes, refactoring, performance improvements or documentation additions/improvements can be implemented using the traditional PR (Pull Request) model straight to the targeted repositories on GitHub.

Additions or any other changes that impact the end user experience will need to follow this process.

When is an RFC required?

This process is meant for any change that affects the user's experience in any way: addition of new APIs, changes to existing APIs (whether they are backwards compatible or not), and any other change in behaviour that affects the user of any component of Kuadrant.

  • API additions;
  • API changes;
  • … any change in behaviour.

When is no RFC required?

  • bugfixes;
  • refactoring;
  • performance improvements.

The RFC process

The first step in adding a new feature to Kuadrant, or starting a major change, is having an RFC merged into the repository. Once the file has been merged, the RFC is considered active and ready to be worked on.

  1. Fork the RFC repo.
  2. Copy the template 0000-template.md into the rfcs directory and rename it, changing the template suffix to something descriptive. At this point it is still a proposal and has no RFC number assigned to it yet.
  3. Fill the template out. Try to be as thorough as possible. While some sections may not apply to your feature/change request, try to complete as much as possible, as this will be the basis for further discussions.
  4. Submit a pull request for the proposal. That's when the RFC is open for actual comments by other members of the team and the broader community.
  5. The PR is to be handled just like a "code PR": wait for people's reviews and integrate the feedback provided. These RFCs can also be discussed during our weekly technical call, but the summary needs to be captured on the PR.
  6. However much the original proposal changes during this process, never force push, squash the history, or rebase your branch. Try to keep the commit history as clean as possible on that PR, in order to keep a trace of how the RFC evolved.
  7. Once all points of view have been shared and the input has been integrated into the PR, the author can push the RFC into the final comment period (FCP), which lasts a week. This is the last chance for anyone to provide input. If consensus cannot be reached during the FCP, the period can be extended by another week. Consensus is achieved by getting two approvals from the core team.
  8. When the PR is merged, it gets a number assigned, making the RFC active.
  9. If, on the other hand, the consensus is to not implement the feature as discussed, the PR is closed.

The RFC lifecycle

  • Open: A new RFC has been submitted as a proposal
  • FCP: Final comment period of one week for last comments
  • Active: RFC got a number assigned and is ready for implementation with the work tracked in an issue, which summarizes the state of the implementation work.

Implementation

The work itself is tracked in a "master" issue, with all the individual, manageable implementation tasks tracked there.
The state of that issue is initially "open" and ready for work, which doesn't mean it will be worked on immediately or by the RFC's author. That work will be planned and integrated as part of the usual release cycle of the Kuadrant stack.

Amendments

An RFC isn't expected to change once it has become active. Minor changes are acceptable, but any major change to an active RFC should be treated as an independent RFC and go through the cycle described here.

EOF

  • 0000-template.md

RFC Template

Summary

One paragraph explanation of the feature.

Motivation

Why are we doing this? What use cases does it support? What is the expected outcome?

Guide-level explanation

Explain the proposal as if it were implemented and you were teaching it to a Kuadrant user. That generally means:

  • Introducing new named concepts.
  • Explaining the feature largely in terms of examples.
  • Explaining how a user should think about the feature, and how it would impact the way they already use Kuadrant. It should explain the impact as concretely as possible.
  • If applicable, provide sample error messages, deprecation warnings, or migration guidance.
  • If applicable, describe the differences between teaching this to existing and new Kuadrant users.

Reference-level explanation

This is the technical portion of the RFC. Explain the design in sufficient detail that:

  • Its interaction with other features is clear.
  • It is reasonably clear how the feature would be implemented.
  • How errors would be reported to users.
  • Corner cases are dissected by example.

The section should return to the examples given in the previous section, and explain more fully how the detailed proposal makes those examples work.

Drawbacks

Why should we not do this?

Rationale and alternatives

  • Why is this design the best in the space of possible designs?
  • What other designs have been considered and what is the rationale for not choosing them?
  • What is the impact of not doing this?

Prior art

Discuss prior art, both the good and the bad, in relation to this proposal.
A few examples of what this can include are:

  • Does another project have a similar feature?
  • What can be learned from it? What's good? What's less optimal?
  • Papers: Are there any published papers or great posts that discuss this? If you have some relevant papers to refer to, this can serve as a more detailed theoretical background.

This section is intended to encourage you, as an author, to think about the lessons from other attempts, successful or not, and to provide readers of your RFC with a fuller picture.

Note that while precedent set by other projects is some motivation, it does not on its own motivate an RFC.

Unresolved questions

  • What parts of the design do you expect to resolve through the RFC process before this gets merged?
  • What parts of the design do you expect to resolve through the implementation of this feature before stabilization?
  • What related issues do you consider out of scope for this RFC that could be addressed in the future independently of the solution that comes out of this RFC?

Future possibilities

Think about what the natural extension and evolution of your proposal would be and how it would affect the platform and project as a whole. Try to use this section as a tool to further consider all possible interactions with the project and its components in your proposal. Also consider how this all fits into the roadmap for the project and of the relevant sub-team.

This is also a good place to "dump ideas", if they are out of scope for the RFC you are writing but otherwise related.

Note that having something written down in the future-possibilities section is not a reason to accept the current or a future RFC; such notes should be in the section on motivation or rationale in this or subsequent RFCs. The section merely provides additional information.

EOF

Spec the status reporting behaviour

Using limitador server as an example to illustrate the scope of this issue:

When a new RateLimitPolicy is added or modified, it will ultimately be reflected as a change to a ConfigMap mounted on the Limitador pod. While things can go "wrong" along the way to the CM, all actors are "k8s aware" and can reflect any status changes to report to users; we need to decide how to report these back up the chain to the user. But once things make it to Limitador, the server has no way to report back to k8s. So we need to decide how Limitador could, for instance, report a syntax error in a Condition back to the matching RateLimitPolicy (possibly translating it to "what" actually triggered the condition to be generated in this invalid form).
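As a purely illustrative sketch (not a decided format), such an error surfaced by the control plane on behalf of Limitador could end up as a condition on the RateLimitPolicy status, for example:

status:
  conditions:
    - type: Ready
      status: "False"
      reason: LimitConfigRejected                      # illustrative reason only
      message: "limitador: syntax error in condition"  # illustrative message only
      lastTransitionTime: "2022-06-01T10:00:00Z"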

Spec configuration

@didierofrivia fill this out further

  • How do you wire the GW with the backing service (e.g. RLP -> Limitador srv)?
  • Istio/GW-provider abstraction, e.g. this issue, OSSM here?
  • What would all that mean if Kuadrant lives at the KCP level?

`hosts` field not exposed in the AuthPolicy

Motivation

Authorino will reject (with 404 Not Found) any request for which there is no matching authconfig object. Exposing the hosts in the spec.authScheme object can lead to unwanted scenarios for the end user and it is, at the very least, error prone. One example to illustrate:

kind: HTTPRoute
spec:
  hostnames: [`*.company.com`]
---
kind: AuthPolicy
spec:
  authScheme:
    hosts: ["api.petstore.company.com"]

The route allows traffic for *.company.com, but the authconfig object has rules only for api.petstore.company.com, so a request for other.company.com will hit Authorino and Authorino will reject that traffic, which is clearly unwanted.

Kuadrant core should provide the means to automatically configure the authorization process to only protect the wanted domains and pass through the remaining ones.

What

  1. Regarding managed AuthConfig

The proposal is to remove hosts from spec.authScheme. The AuthConfig object still needs a hosts field, so it will be Kuadrant core, the owner of the authconfig object, that fills in the hosts field.

The hosts field will be computed in the following way:

  • When there are no rules, authconfig's hosts field will be read from the network resource (HTTPRoute or Gateway).
  • When there is at least one rule with the hosts field empty/missing, authconfig's hosts field will be read from the network resource (HTTPRoute or Gateway).
  • Otherwise, authconfig's hosts field will be the list of hosts appearing in the authpolicy rules.

For example:

kind: HTTPRoute
spec:
  hostnames: [`*.petstore.com`]
----
kind: AuthPolicy
spec:
  rules:
  - hosts: ["api.petstore.com"]
    methods: ["GET"]
  - methods: ["POST"]

In this example, there is a rule with the hosts missing, so the authconfig's hosts field will be [*.petstore.com].

For example:

kind: HTTPRoute
spec:
  hostnames: [`*.petstore.com`]
----
kind: AuthPolicy
spec:
  rules:
  - hosts: ["api.petstore.com"]
    methods: ["GET"]
  - hosts: ["admin.petstore.com"]
    paths: ["/admin*"]

In this example, every rule specifies hosts, so the authconfig's hosts field will be [api.petstore.com, admin.petstore.com].

  2. Regarding the managed Istio AuthorizationPolicy

If there is a rule in the Kuadrant AuthPolicy that does not have hosts specified, Kuadrant will add the hostnames from the network resource when reconciling the AuthorizationPolicy.

For example:

kind: AuthPolicy
spec:
  rules:
  - paths: ["/toy*"]

The reconciled Istio AuthorizationPolicy will include the network resource's (route's) hostnames:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
spec:
  action: CUSTOM
  provider:
    name: kuadrant-authorization
  rules:
  - to:
    - operation:
        hosts:
        - '*.toystore.com'
        paths:
        - /toy*
  selector: {}

Istio's authorization policies configure a common (shared) Envoy authorization filter. The rules coming from a Kuadrant policy targeting a given route should all be scoped to the route's traffic workloads.

Review scope of tests

Unit tests are well scoped, while integration tests look very much like e2e tests. It's important to define the scope of the latter, since at the moment two integration tests consume far too many resources and too much time asserting trivial things.

Overall Release Process

Define how Kuadrant as a whole is released. The operator should likely be the key component where everything is brought together. Not everything needs to be automated but the process should be understandable and executable by anyone in the eng team.

  • Key areas

What criteria must be met in order to say a version is ready for release (e.g. e2e tests complete, deployed to our internal hosted environment, etc.)?

When a new release of a component is created, how do we test it as part of Kuadrant as a whole?

How do we bring together the documentation for that release from the various Kuadrant components?

How is the release made available to end users?

@alexsnaps At a high level I think we want a process that enables a flow such as the one below (and it doesn't mean we have to do all of this at once):

There are no doubt gaps and questions around this process; it is intended as a starting point.

Master

  • Example: Kuadrant Controller PR made (unit test, e2e tests etc)
  • Kuadrant controller pr merged -> triggers pr to kuadrant operator to update the "alpha" or "nightly" channel via index image (for other components we will have to update their operator and then update the kuadrant operator)
  • Kuadrant operator runs its checks (tests etc)
  • release created of kuadrant operator and published via the nightly channel
  • Kuadrant operator is deployed to our internal eng environment via nightly channel
  • Nightly test run is done against the full environment (this should be a highly stable set of tests and given high priority if it fails)

Full Release

  • When we decide we want to cut a new release, we create a release branch for each component from master
  • Tests are re-run and builds done (related operators updated and tested)
  • A single PR updates the Kuadrant Operator with these new versions (tests re-run)
  • We may then want to release to a different channel (beta?) and deploy to re-run our full e2e tests
  • The final step is to publish to our stable channel via our index image

Patch release

  • fix issue on master of affected component
  • cherry-pick fix to release branch
  • Tests re-run and builds done (only for affected component)
  • single PR to update the kuadrant operator with this new version (tests re run)
  • We may then want to release to different channel (beta?) and deploy to re-run our full e2e tests
  • Final step is to publish to our stable channel via our index image

Proposal: `v0.x` versioning

Proposed version scheme: v0.<yW>.<dot>, where yW comes from date +%y%W, i.e. the year & week number, e.g. 2232. The version would be aligned across all components of Kuadrant, and we'd aim for a new release at the end of every sprint. This is a mix of semantic versioning and date-based versioning. The former is dictated by some of our own dependencies (e.g. Operator Hub), while the latter removes the necessity of mapping an actual number to a sprint.

The idea of aligning versions is to keep things simple and easy for us to manage for as long as we can, while shipping a functional packaged Kuadrant.

At the end of a sprint, we'd release all components, all of them using the same version (e.g. v0.2232.0). Each component would define its default dependency on other Kuadrant components to be v0.2232, where any <dot> version would work, leaving us room for further tiny fixes if need be (more on that later).
With such an approach, the release process and the team don't need to account for breaking changes to inter-component dependencies. Today, e.g., the limitador- & authorino-operators both default to latest (with some differences in how they actually do it) for the image they'll use to deploy their respective service onto the cluster. In this new world, they'd default to deploying version v0.2232 of their service instead, with the guarantee that this combination works. The option of overriding this should remain and users could "mix and match" as they see fit, but without guarantees that this actually works.

The idea is to give us the greatest flexibility in changing interfaces of each component without requiring us to think about implications for other versions of other components. Using the example above, if a new image of Limitador breaks something, that breaking change would be imposed on everyone. Until we reach v1, we should be able (if not encouraged) to break things for the benefit of the greater good, where that would be the Kuadrant project as a whole. Once that point has been reached, any v1.m.d release of any component would work with a "Kuadrant v1" deployment. But until then, why tie ourselves to these limits? Especially as we are still exploring the problem space (and have some other reality imposed on us, not necessarily fixed either, i.e. alpha APIs from some of our external dependencies), in a way that dependents can inform API changes in their dependencies.

A dot release might still be required in the case of a particular vulnerability or having to fix an actual bug that slipped through. But also only if some actual user cannot possibly upgrade to the latest version, because of time or breaking changes. Then a fix would be either provided to main and cherry-picked for a dot-release; or go straight to the branch of the tag of the affected version (depending on urgency and/or difficulty). A dot release could also be "hi-jacked" to quickly release a feature to someone. i.e. dot releases still leave every component to deal with their "own fate".

We'd still strive to give fixes priority, so that hopefully people can "simply" upgrade to the release coming up at the end of the sprint.

Content of a release

  • Operators updated on Operator Hub (under the alpha channel?)
  • Images pushed to the container registry (quay.io)
  • Complete changelog for each of the individual components
    • Breaking changes at each component's "public API" level
  • Updated documentation to the website (versioned)

Requirements

  • Updated docs
  • Updated changelog (see Conventional Commits?)
  • CI/CD pipeline passing (i.e. including deploying the tags to our staging cluster)

Downside

We might end up doing "empty" releases, i.e. ones in which nothing actually changed from the previous one. While this might be somewhat confusing at first, there isn't a major drawback if most of the releasing process is automated/lightweight.

Tripwires

This is by no means a release proposal for us to use forever. The idea is to streamline development of the whole Kuadrant stack, without having to suffer from the additional complexity of having multiple versions and needing to clearly define the dependency tree. At the very latest, when we reach v1 (i.e. something we believe to be stable, which might be partially informed by the Gateway API's timeline), we would probably want to cut free from that scheme. But there might be other reasons:

  • Reaching v1
  • One component having a "life on its own"
    • Because of maintenance requirement on existing deployments
    • Having its own agenda, not informed or impacted by Kuadrant as a whole
  • Components not changing for longer periods… though unsure again if that would be such a problem per se
  • More?

If any of the above happens, we should revisit our decision at that time. But there might be other reasons to revise it as the development journey of the different components continues.

Other benefits

This forces us all to think more in terms of the Kuadrant stack. Hopefully, with the whole CI/CD pipeline, this will have us all "learn" more about the implications of the dependencies and interactions of our components, where the "only" requirement is to get things working again by the end of the sprint.

There is a virtuous-circle aspect to it all, where the process will consolidate the product itself as well as the team, making us more "full stack"-minded.

Finally, this also makes it much simpler for potential users to keep up: there is a single version of "Kuadrant" exposed to them, and they don't need to understand its underpinnings unless they need or want to.

Kuadrant Policy Designer Widget Mockup

Design and create a policy designer widget as a clickable mockup that showcases creating a rate limiting policy and an auth policy.

  • Could be 1 or 2 policy designers (auth policy / rate limiting policy)
  • Ignore context of where the designer widget should live
  • Consider things like a default policy, policy templates?

Support Installation of Kuadrant via an Operator

Use Case

As a cluster admin, I would like to install Kuadrant on to my OpenShift cluster so that I can provide my development teams with the tools needed to protect their services and APIs.

Options

Provide an Operator, available via the Operator Hub, that can install the Kuadrant components onto a cluster that already has Istio installed.

Questions:

  • How does the Gateway API get installed on the cluster? (The cluster admin could probably do this, but it needs to be a clear pre-req.) Another option might be to have it installed by the operator that installs Istio.

Demo

  • Show a cluster with Istio pre-installed
  • Show installing the kuadrant operator via OLM
  • Show Kuadrant is configured on the cluster (Authorino is registered as an AuthProvider)
  • Show the Kuadrant CRDs as available

Done

Repo/module naming issue

Some context

There’s been a bit of confusion when referring to this module, which gets confused with the Kuadrant Controller, not only inside the Kuadrant team but also from outside. Quoting a fellow Red Hatter:

hey folks. I'm seeing there's a controller and an operator for kuadrant. Aren't they redundant? Is the operator going to replace the controller?

I'm looking at the types defined by kuadrant controller and limitador and they are very similar

Alternatives we could opt for

Rename it

This would not only help differentiate the two modules, but would also let us choose a more fitting name, one not to be confused with the k8s controller design pattern.

Some examples could be:

  • Kuadrant.d
  • Kuadrant-core
  • Just Core

Merge it

The alternative of merging it into the Kuadrant Operator could help us not only with what version we are releasing, but also with defining what Kuadrant really is. It would also bring some DRYing of the code and a single source for all these similar concepts and CRDs.

Live with it.

Man up, suck it up and go on.

Common CI/CD workflows - investigation/planning

What

  • Define a common set of workflows that each of our components should put in place (look at existing ones from GLBC, Authorino, etc. to see what can be reused).
  • Once the agreed flows are in place, define the set of tasks needed for each component and link them into this overall epic.

Investigate supporting defaults and overrides in AuthPolicy

Use Case

As a gateway administrator, I want to lock down access to any host attaching to this gateway. By default, I want to apply a DENY ALL policy at the gateway level.
When a policy is created targeting an HTTPRoute, I want that policy to then take precedence over my default policy.

  • Validate the existing behaviour supports this use case
  • Move authschema to default
  • Create a Demo showing two services and default policy. Show how access is only allowed when a policy has been defined

Adding Ratelimiting to the same gateway causes multiple requests to Limitador

If you add two APIProducts with authenticated rate limiting, a request to one endpoint will trigger two requests to Limitador, one for each rate limit filter added at the gateway level. The number of requests will grow proportionally to the number of filters added.

Steps

  • create two echo apis
  • label them with the discovery label
  • add two api products
apiVersion: networking.kuadrant.io/v1beta1
kind: APIProduct
metadata:
  name: bookstore
  namespace: default
spec:
  hosts:
    - bookstore.127.0.0.1.nip.io
  APIs:
    - name: bookstore
      namespace: default
  securityScheme:
    - name: MyAPIKey
      apiKeyAuth:
        location: authorization_header
        name: APIKEY
        credential_source:
          labelSelectors:
            secret.kuadrant.io/managed-by: authorino
            api: toystore
  rateLimit:
    authenticated:
      maxValue: 3
      period: 5


apiVersion: networking.kuadrant.io/v1beta1
kind: APIProduct
metadata:
  name: toystore
  namespace: default
spec:
  hosts:
    - toystore.127.0.0.1.nip.io
  APIs:
    - name: toystore
      namespace: default
  securityScheme:
    - name: MyAPIKey
      apiKeyAuth:
        location: authorization_header
        name: APIKEY
        credential_source:
          labelSelectors:
            secret.kuadrant.io/managed-by: authorino
            api: toystore
  rateLimit:
    authenticated:
      maxValue: 1
      period: 5

Make a single request to the bookstore; in the Limitador logs you will see that it is called for both domains:

limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x0), initial_window_size: 1048576, max_frame_size: 16384 }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::proto::connection] Connection; peer=Server
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x0), header_table_size: 4096, enable_push: 0, max_concurrent_streams: 2147483647, initial_window_size: 268435456 }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Settings { flags: (0x1: ACK) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=WindowUpdate { stream_id: StreamId(0), size_increment: 268369921 }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Data { stream_id: StreamId(1), flags: (0x1: END_STREAM) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Settings { flags: (0x1: ACK) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::proto::settings] received settings ACK; applying Settings { flags: (0x0), initial_window_size: 1048576, max_frame_size: 16384 }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=WindowUpdate { stream_id: StreamId(0), size_increment: 983041 }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG limitador_server::envoy_rls::server] Request received: Request { metadata: MetadataMap { headers: {"te": "trailers", "grpc-timeout": "20m", "content-type": "application/grpc", "x-b3-traceid": "2fe576109219495397a1b57aaa15c5ff", "x-b3-spanid": "2c2b33a7d145a8b1", "x-b3-parentspanid": "97a1b57aaa15c5ff", "x-b3-sampled": "0", "x-envoy-internal": "true", "x-forwarded-for": "10.244.0.8", "x-envoy-expected-rq-timeout-ms": "20"} }, message: RateLimitRequest { domain: "bookstore.default", descriptors: [RateLimitDescriptor { entries: [Entry { key: "user_id", value: "alice" }] }], hits_addend: 0 }, extensions: Extensions }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(1), flags: (0x4: END_HEADERS) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(1) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(1), flags: (0x5: END_HEADERS | END_STREAM) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Headers { stream_id: StreamId(3), flags: (0x4: END_HEADERS) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_read] received frame=Data { stream_id: StreamId(3), flags: (0x1: END_STREAM) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG limitador_server::envoy_rls::server] Request received: Request { metadata: MetadataMap { headers: {"te": "trailers", "grpc-timeout": "20m", "content-type": "application/grpc", "x-b3-traceid": "2fe576109219495397a1b57aaa15c5ff", "x-b3-spanid": "b5cf1278e05acd55", "x-b3-parentspanid": "97a1b57aaa15c5ff", "x-b3-sampled": "0", "x-envoy-internal": "true", "x-forwarded-for": "10.244.0.8", "x-envoy-expected-rq-timeout-ms": "20"} }, message: RateLimitRequest { domain: "toystore.default", descriptors: [RateLimitDescriptor { entries: [Entry { key: "user_id", value: "alice" }] }], hits_addend: 0 }, extensions: Extensions }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(3), flags: (0x4: END_HEADERS) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(3) }
limitador-7f7c645979-92xfm limitador [2021-12-23T11:07:53Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(3), flags: (0x5: END_HEADERS | END_STREAM) }

Add option to opt-out of virtual host level ratelimits if a rule matches

In our current design, virtual host-level rate limits are always applied, regardless of whether there is a route-level rate limit or not. There is value in providing an option to opt out of virtual host-level rate limits when there is a route-level rate limit.

RateLimitPolicy configuration will look similar to the following:

rules:
- operation:
    paths: ["*.toystore.com"]
    methods: ["GET"]
  ignore_vh_ratelimits: true # (defaults to false)
  actions:
  - generic_key:
      descriptor_key: get-toy
      descriptor_value: "yes"

Partial installations and consequences for design

One of the founding ideas of Kuadrant is that one wouldn't need to install all of it when one is looking for partial functionality, e.g. if I'm looking for operations-level rate limiting and I'm not interested in auth, I should be able to install and run just that part of Kuadrant; in this case: Kuadrant + Limitador.

To guarantee a good user experience, this way of thinking should probably be reflected in the way we design our CRDs: a user of a partial Kuadrant install should not be confronted with data in custom resources that has no meaning or is not accessible in their partial Kuadrant install. Or, to put it another way: a partial install of Kuadrant should give you a partial interface that exactly reflects your partial installation and nothing more.

This means we need to have a look at our CRDs: e.g. right now information related to Limitador and Authorino is stored in the same Product API CRD.

Let's discuss…

[Operator] Document how to use the operator to do installation

What

Provide a document/guide that walks a user through installing Kuadrant via OLM and the operator. It should get them to the point where they can start to use AuthConfig, RateLimit and RateLimitPolicy. We should cover both k8s and OpenShift.

  • Documentation outlining how the operator works, what it installs, and any configuration options
  • Getting started / quick start for using the operator

Investigate transforming a route to a VirtualService

If we were able to confidently transform an OpenShift Route with its various options into a Virtual Service, we could potentially contribute this transformation as part of the KCP control plane and the global load balancer.

Done

  • map out a set of routes that cover the options provided by routes (TLS options and backend routing options)
  • Look at what would be needed to translate the options in the spec of a Route to a Virtual Service object.
  • Highlight and identify any issues encountered during this translation

Create a test e2e plan

Test scenarios

  • Installation itself
  • RLP happy path (add, mod, delete CRs)
  • AP happy path (add, mod, delete CRs)
  • "invalid" services (e.g. no Istio)
  • invalid CRs (test status)
  • Transient failures?
  • Uninstall

Implementation considerations

  • Tests should "only" use public APIs (i.e. CRs) and test for the expected outcome, e.g. that the service is rate limited.
  • It's fine to have transient failures because of out-of-sync dependencies. Let's see how many we get, why we get them, and re-assess if these become "too many".
  • Defer to kcp for the actual testing implementation (for future-safe testing, when integrating with them).

Investigate transforming a route to a Gateway API HTTPRoute

If we were able to confidently transform an OpenShift Route with its various options into a Gateway API HTTPRoute, we could potentially contribute this transformation as part of the KCP control plane and the global load balancer.

Done

  • map out a set of routes that cover the options provided by routes (TLS options and backend routing options)
  • Look at what would be needed to translate the options in the spec of a Route to a Gateway API HTTPRoute object.
  • Highlight and identify any issues encountered during this translation

Aggregate RateLimit status back to the RLP

This should capture the state of everything needed in order for rate limiting to be considered "ready":

  • Limitador has accepted the rate limiting
  • WASMPlugin is ready and installed
