kuadrant / limitador-operator
License: Apache License 2.0
This is an attempt at streamlining and formalizing the addition of features to Kuadrant and its components: Authorino, Limitador, and possibly more to come. It describes a process, Request For Comments (RFC), that would be followed by contributors to Kuadrant in order to add features to the platform. The process aims at enabling the teams to deliver better-defined software with a better experience for our end users.
As I went through the process of redefining the syntax for Conditions in Limitador, I found it hard to seed people's minds with the problem space as I perceived it. I started by asking questions on the issue itself, which didn't get the traction I had hoped for until the PR was eventually opened.
This process should help the author consider the proposed change in its entirety: the change itself, its pros and cons, its documentation and the error cases. That makes it easier for reviewers to understand the impact of the change being considered.
Furthermore, this keeps a written record, a decision log, of how a feature came to be. It would help those among us who tend to forget about things, but would be of immeasurable value for future contributors wanting either to understand a feature deeply or to build upon certain features to enable a new one.
A contributor would start by following the template for a new Request For Comments (RFC), eventually opening a pull request with the proposed change explained. At that point it automatically becomes a point of discussion for the next technical discussion weekly call.
Anyone is free to add ideas, raise issues, or point out possible missing bits in the proposal on the PR itself before the call. The outcome of the technical discussion call is recorded on the PR as well, for future reference.
Once the author feels the proposal is in good shape and has addressed the comments provided by the team and community, they can label the RFC as FCP, entering the Final Comment Period. From that point on, there is another week left for commenters to express any remaining concerns. After that, the RFC is merged and goes into active status, ready for implementation.
Creating a Kuadrant/rfcs repository, with the README below and a template to start a new RFC from: see the README.md and 0000-template.md files below for more details.
The process proposed here adds overhead to the addition of new features to our stack. It will require more upfront specification work. It may require doing a few proofs of concept during the initial authoring, to enable the author to better understand the problem space.
What we've done until now, investigations, have been less formal, but I'm unsure how much of their value got properly and entirely captured. Formalizing the process, with a clear outcome of an implementable piece of documentation that addresses all aspects of the user's experience, looks like a better result.
The entire idea isn't new. This very proposal is based on prior art from rust-lang and pony-lang. The process isn't perfect, but it has been proven to work over and over again.
Should this live in Kuadrant/rfcs? What about kcp-glbc?
I certainly see this process itself evolving over time. I like to think that this process can itself support its future changes…
README.md
The RFC (Request For Comments) process aims to provide a consistent and well-understood way of adding new features or introducing breaking changes in the Kuadrant stack. It provides a means for all stakeholders and the community at large to give feedback and be confident about the evolution of our solution.
Many, if not most, changes will not require following this process. Bug fixes, refactorings, performance improvements or documentation additions/improvements can be implemented using the traditional PR (Pull Request) model straight to the targeted repositories on GitHub.
Additions or any other changes that impact the end user experience will need to follow this process.
This process is meant for any changes that affect the user's experience in any way: addition of new APIs, changes to existing APIs - whether they are backwards compatible or not - and any other change to behaviour that affects the user of any components of Kuadrant.
The first step in adding a new feature to Kuadrant, or starting a major change, is having an RFC merged into the repository. Once the file has been merged, the RFC is considered active and ready to be worked on.
Copy 0000-template.md and rename it into the rfcs directory. Change the template suffix to something descriptive. Note that this is still a proposal and has no RFC number assigned to it yet.
The work itself is tracked in a "master" issue, with all the individual, manageable implementation tasks tracked there.
The state of that issue is initially "open" and ready for work, which doesn't mean it'd be worked on immediately or by the RFC's author. That work will be planned and integrated as part of the usual release cycle of the Kuadrant stack.
It isn't expected for an RFC to change once it has become active. Minor changes are acceptable, but any major change to an active RFC should be treated as an independent RFC and go through the cycle described here.
EOF
0000-template.md
Feature Name: (e.g. new_feature)
One paragraph explanation of the feature.
Why are we doing this? What use cases does it support? What is the expected outcome?
Explain the proposal as if it were implemented and you were teaching it to a Kuadrant user. That generally means:
This is the technical portion of the RFC. Explain the design in sufficient detail that:
This section should return to the examples given in the previous section and explain more fully how the detailed proposal makes those examples work.
Why should we not do this?
Discuss prior art, both the good and the bad, in relation to this proposal.
A few examples of what this can include are:
This section is intended to encourage you, as an author, to think about the lessons from other attempts, successful or not, and to provide readers of your RFC with a fuller picture.
Note that while precedent set by other projects is some motivation, it does not on its own motivate an RFC.
Think about what the natural extension and evolution of your proposal would be and how it would affect the platform and project as a whole. Try to use this section as a tool to further consider all possible interactions with the project and its components in your proposal. Also consider how this all fits into the roadmap for the project and of the relevant sub-team.
This is also a good place to "dump ideas", if they are out of scope for the RFC you are writing but otherwise related.
Note that having something written down in the future-possibilities section is not a reason to accept the current or a future RFC; such notes should be in the section on motivation or rationale in this or subsequent RFCs. The section merely provides additional information.
EOF
In 3scale SaaS we have been successfully using Limitador together with Redis for a couple of years, to protect all our public endpoints. However:
We would like to update how we manage the limitador application and adopt the recommended limitador setup, via limitador-operator, at a production-ready grade.
Allow configuring the image/tag/pullSecretName via CR. Currently the operator deploys the quay.io/kuadrant/limitador image, with the default settable at the operator level through the RELATED_IMAGE_LIMITADOR environment variable. Only quay.io/kuadrant/limitador can be used because it is hardcoded. To support custom images from private registries (with a pullSecretName reference pointing to a secret holding the private image repo credentials), the image/tag/pullSecretName should be configurable via the CR to override the default values:
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: limitador-sample
spec:
image:
name: brew.registry.redhat.io/rh-osbs/3scale-mas-limitador-rhel8
tag: 1.2.0-2
pullSecretName: brew-pull-secret # this secret holds the private image repo credentials
Which should create something like:
kind: Deployment
apiVersion: apps/v1
metadata:
name: limitador
spec:
...
template:
spec:
imagePullSecrets:
- name: brew-pull-secret
...
containers:
- name: limitador
image: brew.registry.redhat.io/rh-osbs/3scale-mas-limitador-rhel8:1.2.0-2
Instead of requeuing, the controller should watch the owned (via ownerReference) Deployments, so that when the Deployment reports it is available, a new reconciliation loop is triggered. If, for some reason, Limitador becomes unavailable (because it crashes or whatever), its controller will never know, and the status will remain "available" until something changes the spec of the Limitador CR.
Limitador supports several storage backends for the rate limits (in-memory, redis, wasm-compatible, cached redis).
To start with, it would be good to be able to choose between in-memory and redis in the Limitador CR.
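A sketch of how this choice could surface in the CR, reusing the storage stanza that appears in the examples further down this page (the exact field names are illustrative, not a settled API):
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador-sample
spec:
  storage:
    redis:
      configSecretRef:
        name: redisconfig   # Secret holding the Redis connection URL
  # omitting `spec.storage` entirely would select the in-memory backend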
The build-bundle workflow is failing because the latest yq version requires go1.20:
go: downloading golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f
# github.com/mikefarah/yq/v4/pkg/yqlib
Error: /home/runner/go/pkg/mod/github.com/mikefarah/yq/[email protected]/pkg/yqlib/encoder_lua.go:139:29: undefined: strings.CutPrefix
Error: /home/runner/go/pkg/mod/github.com/mikefarah/yq/[email protected]/pkg/yqlib/encoder_lua.go:237:29: undefined: strings.CutPrefix
note: module requires Go 1.20
make: *** [Makefile:131: /home/runner/work/limitador-operator/limitador-operator/bin/yq] Error 1
Error: Process completed with exit code 2.
Instead of always using the latest version, we should pin it to the last version supporting go1.19 (v4.34.2 was the last one working before this issue occurred), e.g. go install github.com/mikefarah/yq/v4@v4.34.2; alternatively, we can upgrade to go1.20.
(See Makefile line 131 at commit 5381563.)
After a redefinition of the authority over the limits, Kuadrant/limitador#74, the limits configuration will live as a local file in the pod. This issue is about the control plane of limitador and how the limits make their way to that local file in the pod.
The limitador operator will reconcile a ConfigMap to be mounted as a local file in all the replica pods of limitador. Where are those limits coming from? Currently the limitador operator reads RateLimit CRs and reconciles them with limitador using the HTTP endpoint. The association of a RateLimit CR with a limitador instance is currently hardcoded in the limitador operator: the RateLimit CRs need to be created in the same namespace as the limitador pod, and the service name and port are hardcoded.
In order to make the limits configuration flexible, with a clear association of which limits apply to which limitador instances, the proposal is to set the limits in the Limitador CRD. For example:
---
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: limitador
spec:
replicas: 1
version: "0.4.0"
limits:
- conditions: ["get-toy == yes"]
max_value: 2
namespace: toystore-app
seconds: 30
variables: []
- conditions:
- "admin == yes"
max_value: 2
namespace: toystore-app
seconds: 30
variables: []
- conditions: ["vhaction == yes"]
max_value: 6
namespace: toystore-app
seconds: 30
variables: []
The limitador operator would be responsible for reconciling the content of spec.limits with the ConfigMap mounted in the limitador pod.
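For illustration, a minimal sketch of the ConfigMap the operator could render from the first limit above (the ConfigMap name and file key are assumptions; a fuller example appears later on this page):
apiVersion: v1
kind: ConfigMap
metadata:
  name: limits-config-limitador   # name is illustrative
data:
  limitador-config.yaml: |
    limits:
    - conditions: ["get-toy == yes"]
      max_value: 2
      namespace: toystore-app
      seconds: 30
      variables: []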
Kuadrant users define their limits in the kuadrant API: RateLimitPolicy.
A Kuadrant installation owns a Limitador deployment with at least one pod running. This limitador deployment is managed via a Limitador CR. The namespace and name of this Limitador CR are known by the kuadrant controller (kuadrant control plane).
Thus, when a user creates a RateLimitPolicy and adds some limits in it, the following happens behind the scenes.
a) The kuadrant-controller reads the RLP and reconciles the limits with the list in the owned Limitador CR, following kuadrant rules. As an example of those rules, the namespace will be set by kuadrant and not exposed in the RLP. When a limit is added/updated/removed in the RLP, that limit is added/updated/removed in the Limitador CR.
b) The limitador operator will reconcile the limits in the Limitador CR with a ConfigMap that gets mounted in the deployment as a local file for the limitador process. The limitador operator gets notified when the Limitador CR changes. When a limit is added/updated/removed in the Limitador CR, that limit is added/updated/removed in the ConfigMap, which effectively changes the content of the local file.
The kubebuilder-tools do not support darwin/arm64 just yet; we need a workaround until this is fixed: kubernetes-sigs/controller-runtime#1657
I have been successfully testing limitador-operator v0.6.0, and I have identified a possibly unintended credentials leak in the deployment container command.
I deployed the following CR, called cluster:
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: cluster
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: limitador
limitador-resource: cluster
topologyKey: kubernetes.io/hostname
weight: 100
- podAffinityTerm:
labelSelector:
matchLabels:
app: limitador
limitador-resource: cluster
topologyKey: topology.kubernetes.io/zone
weight: 99
limits:
- conditions: []
max_value: 400
namespace: kuard
seconds: 1
variables:
- per_hostname_per_second_burst
listener:
grpc:
port: 8081
http:
port: 8080
pdb:
maxUnavailable: 1
replicas: 3
resourceRequirements:
limits:
cpu: 500m
memory: 64Mi
requests:
cpu: 250m
memory: 32Mi
storage:
redis:
configSecretRef:
name: redisconfig
The redis storage is configured via an external Secret with the connection string set under the URL key. I guess it is a Secret and not a ConfigMap because the connection string used to connect to redis might include a user/password:
apiVersion: v1
kind: Secret
metadata:
name: redisconfig
stringData:
URL: redis://127.0.0.1/a # Redis URL of its running instance
type: Opaque
However, instead of mounting the Secret on the deployment and extracting the URL into, say, an env var, the operator takes the URL from the Secret and configures it directly on the container command, exposing its plain value (even though it possibly contains a secret password):
command:
- limitador-server
- /home/limitador/etc/limitador-config.yaml
- redis
- 'redis://redis:6379'
My recommendation would be to extract its value like any standard deployment and inject it via an env var, something similar to:
env:
- name: URL
valueFrom:
secretKeyRef:
name: limits-config-cluster
key: URL
And then you will also need to update how its value is consumed in the container command.
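One way to do that (a sketch, assuming the operator keeps the positional redis argument) is to reference the env var from the command, which Kubernetes expands at container start:
command:
  - limitador-server
  - /home/limitador/etc/limitador-config.yaml
  - redis
  - $(URL)   # expanded by Kubernetes from the env var above, keeping the plain URL out of the pod spec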
Sometimes integration tests can fail due to a resource conflict when updating a resource, as the controller can still be reconciling the resource from the creation event:
[FAILED] Expected success, but got an error:
<*errors.StatusError | 0xc0003570e0>:
Operation cannot be fulfilled on limitadors.limitador.kuadrant.io "a6bde1452-63e2-4061-bf3b-db842720cee8": the object has been modified; please apply your changes to the latest version and try again
{
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {
SelfLink: "",
ResourceVersion: "",
Continue: "",
RemainingItemCount: nil,
},
Status: "Failure",
Message: "Operation cannot be fulfilled on limitadors.limitador.kuadrant.io \"a6bde1452-63e2-4061-bf3b-db842720cee8\": the object has been modified; please apply your changes to the latest version and try again",
Reason: "Conflict",
Details: {
Name: "a6bde1452-63e2-4061-bf3b-db842720cee8",
Group: "limitador.kuadrant.io",
Kind: "limitadors",
UID: "",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 409,
},
}
In [It] at: /home/runner/work/limitador-operator/limitador-operator/controllers/limitador_controller_test.go:333 @ 09/13/23 13:15:21.778
Workflow job where this happened:
When a Limitador object is created, the operator creates a Deployment, where the actual Limitador instance resides, and a Service that exposes it. The port values and names are hardcoded, making it impossible to define specific values.
It should be possible to set which ports and protocols are exposed on the Limitador Service, and also to spawn multiple Services per namespace.
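A sketch of how this could surface in the CR, mirroring the listener stanza used in the examples elsewhere on this page (field names are illustrative):
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: limitador-sample
spec:
  listener:
    grpc:
      port: 8081   # gRPC RLS port exposed by the Service
    http:
      port: 8080   # HTTP port exposed by the Service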
There are a couple of issues with the make commands under newer Go versions (>1.16):
go get won't install the binaries; it needs to be replaced by go install.
go install then fails with:
go: sigs.k8s.io/kustomize/kustomize/[email protected] (in sigs.k8s.io/kustomize/kustomize/[email protected]):
The go.mod file for the module providing named packages contains one or
more exclude directives. It must not contain directives that would cause
it to be interpreted differently than if it were the main module.
Primary key fingerprint: 3B2F 1481 D146 2380 80B3 46BB 0529 96E2 A20B 5C7E
Subkey fingerprint: 8613 DB87 A5BA 825E F3FD 0EBE 2A85 9D08 BF98 86DB
sha256sum: 'standard input': no properly formatted checksum lines found
bash: syntax error near unexpected token `;;'
I have been successfully testing limitador-operator v0.6.0, and I have identified some inconsistencies in the resource names created by the operator.
I deployed the following CR, called cluster:
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: cluster
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: limitador
limitador-resource: cluster
topologyKey: kubernetes.io/hostname
weight: 100
- podAffinityTerm:
labelSelector:
matchLabels:
app: limitador
limitador-resource: cluster
topologyKey: topology.kubernetes.io/zone
weight: 99
limits:
- conditions: []
max_value: 400
namespace: kuard
seconds: 1
variables:
- per_hostname_per_second_burst
listener:
grpc:
port: 8081
http:
port: 8080
pdb:
maxUnavailable: 1
replicas: 3
resourceRequirements:
limits:
cpu: 500m
memory: 64Mi
requests:
cpu: 250m
memory: 32Mi
storage:
redis:
configSecretRef:
name: redisconfig
And I saw that most created resources follow the naming convention of the limitador- prefix plus $CR_NAME, unique per cluster instance thanks to $CR_NAME. In this particular case that is limitador-cluster, so these are 2 of the created resources:
Deployment: limitador-cluster
Service: limitador-cluster
Actually, this same logic is applied to all label selectors, where there are 2 labels:
labelSelector:
matchLabels:
app: limitador
limitador-resource: cluster
However, there are 2 cases in which this naming convention is not followed:
Pods are named cluster (without the limitador- prefix). Since the prefix is not being added, pods whose name is just the CR_NAME (which can be anything) can be misleading.
The limits ConfigMap is named limits-config-cluster (without the limitador- prefix). It would be expected that the limitador- prefix is used for all created resources, including this ConfigMap: limitador-limits-config-cluster.
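A sketch of the expected metadata once the convention is applied consistently (limitador- prefix plus $CR_NAME):
# Pods (via the Deployment's pod template) would carry the prefix:
metadata:
  name: limitador-cluster   # instead of just `cluster`
---
# The limits ConfigMap would carry it too:
metadata:
  name: limitador-limits-config-cluster   # instead of `limits-config-cluster`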
After a few changes on Limitador, it no longer provides the HTTP endpoints to set up the limits, and it only listens for changes on a config file. This config file will be mounted at deployment time and will be provided by a ConfigMap, which is reconciled by the limitador-controller reading from the Limitador CR Spec.limits.
This is more or less what this ConfigMap should look like:
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: limitador
name: envoy
data:
limitador-config.yaml: |
limits:
- conditions: ["get-toy == yes"]
max_value: 2
namespace: toystore-app
seconds: 30
variables: []
- conditions:
- "admin == yes"
max_value: 2
namespace: toystore-app
seconds: 30
variables: []
- conditions: ["vhaction == yes"]
max_value: 6
namespace: toystore-app
seconds: 30
variables: []
We want to improve automation in all repos for the Kuadrant components. We're aiming for:
As part of a preliminary investigation (Kuadrant/kuadrant-operator#21) of the current state of such automation, the following desired workflows and corresponding status for the Limitador Operator repo were identified. Please review the list below.
Code style and linting checks (go fmt, go vet, cargo fmt)
Tests (go test, cargo test)
Workflows do not have to be implemented exactly as in the list. The list is just a driver for the kind of tasks we want to cover. Each component should assess it as makes sense considering the component's specificities. More details in the original epic: Kuadrant/kuadrant-operator#21.
You may also want to use this issue to reorganize how current workflows are implemented, thus helping us make the whole thing consistent across components.
For an example of how Authorino and Authorino Operator intend to organise this for Golang code bases, see respectively Kuadrant/authorino#351 (comment) and Kuadrant/authorino-operator#96 (comment).
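As a concrete illustration, here is a minimal sketch of what one such workflow could look like for this Golang code base (the file name, action versions and steps are assumptions, not the repo's actual setup):
# .github/workflows/ci.yaml — illustrative only
name: CI
on: [push, pull_request]
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: "1.19"
      - run: go vet ./...                 # static checks
      - run: test -z "$(gofmt -l .)"      # fail if any file is not gofmt-ed
      - run: go test ./...                # run the test suite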
Enhance the observability of Limitador resources by adding custom printcolumn annotations to the CRD. This will allow key status and configuration details to be displayed directly in the kubectl get output.
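A minimal sketch of what such columns could look like in the generated CRD (the column names and JSONPaths here are assumptions):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: limitadors.limitador.kuadrant.io
spec:
  versions:
    - name: v1alpha1
      additionalPrinterColumns:
        - name: Ready
          type: string
          jsonPath: .status.conditions[?(@.type=="Ready")].status   # surface the Ready condition
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
With something like this, kubectl get limitador would show the Ready state and age of each resource.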
The current state allows the user to add sidecar containers to the limitador deployment managed by limitador-operator. Two states can result from adding sidecars.
When the sidecar is defined as the first container in the deployment, the operator will update that container's configuration to the values expected for limitador. It does not, however, change the container's name.
This means the user-defined container will not be created, and two limitador containers will be created in the same pod. This causes a conflict on ports yet is not surfaced as an error state.
If the user defines the sidecar as the second container in the list, after the limitador configuration, the sidecar is created as expected. This method works, and the limitador-operator does not override the sidecar configuration.
Sidecar creation is therefore dependent on the ordering of containers in the deployment configuration, as the sketch below illustrates.
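For reference, a sketch of the ordering that currently works, with the user-defined sidecar second in the list (container names and images are illustrative):
spec:
  template:
    spec:
      containers:
        - name: limitador                  # managed by the operator; kept first
          image: quay.io/kuadrant/limitador:latest
        - name: my-sidecar                 # user-defined sidecar, left untouched when second
          image: docker.io/library/busybox:latest
          command: ["sleep", "infinity"]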
If a user tries to add a sidecar to the Authorino deployment in any order, the authorino-operator reverts the changes and removes any user-defined configuration.
For consistency between products, the expected behaviour would be to revert any user-defined configuration changes to the limitador deployment.
In 3scale SaaS we have been successfully using Limitador together with Redis for a couple of years, to protect all our public endpoints. However:
We would like to update how we manage the limitador application and adopt the recommended limitador setup, via limitador-operator, at a production-ready grade.
Example of the pod anti-affinity used in 3scale SaaS production to manage between 3,500 and 5,500 requests/second with 3 limitador pods (selector labels need to coincide with the labels currently managed by limitador-operator):
...
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: limitador
topologyKey: kubernetes.io/hostname
- weight: 99
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: limitador
topologyKey: topology.kubernetes.io/zone
...
That way, we "try" (preferred) to spread the 3 limitador pods across different worker nodes (hostname) in different AWS Availability Zones (zone), achieving good fault-tolerant high availability without forcing it. If for some reason kube-scheduler cannot satisfy this distribution because the current nodes are quite full, it will honour the distribution on a best-effort basis, with no guarantee, while at least guaranteeing that the 3 pods will be scheduled somewhere.
In 3scale SaaS we have been successfully using Limitador together with Redis for a couple of years, to protect all our public endpoints. However:
We would like to update how we manage the limitador application and adopt the recommended limitador setup, via limitador-operator, at a production-ready grade.
Example of PDB used in 3scale SaaS production to manage between 3,500 and 5,500 requests/second with 3 limitador pods (selector labels need to coincide with the labels managed right now by limitador-operator):
kind: PodDisruptionBudget
apiVersion: policy/v1
metadata:
name: limitador
spec:
selector:
matchLabels:
app.kubernetes.io/name: limitador
maxUnavailable: 1
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: limitador-sample
spec:
pdb:
maxUnavailable: 1
    minAvailable: 2 # Note: this field is mutually exclusive with "maxUnavailable"; only one of them can be set at a time (normally maxUnavailable is preferable)
Example of how we externalize the PDB config in the 3scale SaaS Operator CR.
In 3scale SaaS we have been successfully using Limitador together with Redis for a couple of years, to protect all our public endpoints. However:
We would like to update how we manage the limitador application and adopt the recommended limitador setup, via limitador-operator, at a production-ready grade.
Example of resources used currently in 3scale SaaS production to manage between 3,500 and 5,500 requests/second with 3 limitador pods:
resources:
requests:
cpu: 250m
memory: 32Mi
limits:
cpu: 500m
memory: 64Mi
Real resource usage (screenshots omitted):
Unfortunately I only realized after merging: the test passed in the PR and locally, but failed when merged to the main branch.
In 3scale SaaS we have been successfully using Limitador together with Redis for a couple of years, to protect all our public endpoints. However:
We would like to update how we manage the limitador application and adopt the recommended limitador setup, via limitador-operator, at a production-ready grade.
Limitador already exposes Prometheus metrics on its HTTP port (at least in the version 0.4.0 that we use).
Example of the PodMonitor used in 3scale SaaS production to manage between 3,500 and 5,500 requests/second with 3 limitador pods (selector labels need to coincide with the labels currently managed by limitador-operator):
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: limitador
spec:
podMetricsEndpoints:
- interval: 30s
path: /metrics
port: http
scheme: http
selector:
matchLabels:
app.kubernetes.io/name: limitador
Both the PodMonitor and the GrafanaDashboard should be customizable via the CR, but use sane default values when enabled, so you don't need to provide all the config if you don't want to and would rather trust the defaults.
apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
name: limitador-sample
spec:
podMonitor:
    enabled: true # false by default, so no PodMonitor is created
    interval: 30s # defaults to 30s if not defined
    labelSelector: XX ## by default no label selector is defined
    ... ## maybe in the future allow overriding more PodMonitor fields if needed; I don't think more is needed for now
grafanaDashboard:
enabled: true
    labelSelector: XX ## by default no label selector is defined
The initial dashboard would be provided by us (3scale SRE) initially; it can be embedded into the operator as an asset, as done with the 3scale-operator.
Current dashboard screenshots (omitted here) include limitador metrics by limitador_namespace (the app being limited), as well as pod-level cpu/mem/net resource metrics.
Regarding PrometheusRules (Prometheus alerts), my advice is not to embed them into the operator, but to provide in the repo a YAML with an example of possible alerts that can be deployed and tuned by the app administrator if needed.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: limitador
spec:
groups:
- name: limitador.rules
rules:
- alert: LimitadorJobDown
annotations:
message: Prometheus Job {{ $labels.job }} on {{ $labels.namespace }} is DOWN
expr: up{job=~".*limitador.*"} == 0
for: 5m
labels:
severity: critical
- alert: LimitadorPodDown
annotations:
message: Limitador pod {{ $labels.pod }} on {{ $labels.namespace }} is DOWN
expr: limitador_up == 0
for: 5m
labels:
severity: critical
Currently the deployment adds env vars to configure the pod:
containers:
- env:
- name: RUST_LOG
value: info
- name: LIMITS_FILE
value: /home/limitador/etc/limitador-config.yaml
Use the CLI parameters instead:
Limitador Server v1.0.0-dev (28a77d29) debug build
The Kuadrant team - github.com/Kuadrant
Rate Limiting Server
USAGE:
limitador-server [OPTIONS] <LIMITS_FILE> [STORAGE]
ARGS:
<LIMITS_FILE> The limit file to use
OPTIONS:
-b, --rls-ip <ip> The IP to listen on for RLS [default: 0.0.0.0]
-p, --rls-port <port> The port to listen on for RLS [default: 8081]
-B, --http-ip <http_ip> The IP to listen on for HTTP [default: 0.0.0.0]
-P, --http-port <http_port> The port to listen on for HTTP [default: 8080]
-l, --limit-name-in-labels Include the Limit Name in prometheus label
-v Sets the level of verbosity
-h, --help Print help information
-V, --version Print version information
STORAGES:
memory Counters are held in Limitador (ephemeral)
redis Uses Redis to store counters
redis_cached Uses Redis to store counters, with an in-memory cache
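A sketch of what the container spec could look like once the env vars are replaced by CLI parameters, based on the usage above (the file path mirrors the current LIMITS_FILE value; the verbosity flag replaces RUST_LOG):
containers:
  - name: limitador
    command:
      - limitador-server
      - -v                                          # replaces RUST_LOG=info; repeat for more verbosity
      - /home/limitador/etc/limitador-config.yaml   # replaces the LIMITS_FILE env var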
When generating the manifests, a multi-document single manifest.yaml file can be generated (and committed), making it easy to install the Limitador Operator without having to clone the repo. Similarly, by also adding Namespace, Deployment, etc. (i.e. the resources produced by make deploy) to that same or a second manifest.yaml file, the Limitador Operator can be deployed directly from one YAML file hosted remotely.
Usually the only customization involved when deploying is the operator image, which can default either to latest or to the last released version of the operator available from the registry (quay.io/kuadrant/limitador-operator), instead of controller:latest, which is currently hard-coded and only meaningful for devs building the operator locally.
This would be analogous to https://github.com/Kuadrant/authorino-operator/blob/b66abee89a325819442c07af5f36aa05b4eba30d/Makefile#L72-L73 (which generates config/install/manifests.yaml) and https://github.com/Kuadrant/authorino-operator/blob/b66abee89a325819442c07af5f36aa05b4eba30d/Makefile#L161 (which generates config/deploy/manifests.yaml).
In the exemplified case of Authorino Operator it's more complicated, because it even downloads manifests hosted in the main Authorino repo (e.g. for the AuthConfig CRD). For Limitador Operator this is not needed: the repo has everything required to generate the manifests, making it even simpler to implement.
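Installing would then boil down to a single command against the hosted file, e.g. kubectl apply -f https://raw.githubusercontent.com/Kuadrant/limitador-operator/main/config/deploy/manifests.yaml (the URL is illustrative, mirroring the Authorino Operator layout linked above).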