temporalio / helm-charts
Temporal Helm charts
License: MIT License
I saw a user report of Temporal failing to start on minikube running with --vm-driver=none on Ubuntu.
Please test, investigate, and fix if needed.
Please include an actual sample that would help validate and demonstrate Temporal Helm chart operations.
I added support for security context recently to the Cadence Helm chart:
banzaicloud/banzai-charts#1161
It's kind of a requirement these days in restricted environments.
Describe the bug
I am getting an issue when connecting to an external Elasticsearch that expects a username and password as input. Connecting the Temporal server to an external Elasticsearch that does not require a password works fine, but a password-protected one gives an error: the request to Elasticsearch is made without the username and password.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The Temporal server should authenticate to Elasticsearch with the configured username and password.
Screenshots/Terminal output
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/_template/temporal-visibility-template]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/_template/temporal-visibility-template]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/prod-temporal-visibility]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/prod-temporal-visibility]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
{"level":"info","ts":"2021-06-08T13:18:36.909Z","msg":"Updated dynamic config","logging-call-at":"file_based_client.go:262"}
Versions (please complete the following information where relevant):
Additional context
The username and password environment variables requested by the Helm chart are also set correctly in the containers (verified by echoing them).
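For reference, a values fragment along these lines is the kind of configuration involved; the host and credentials below are placeholders, and the exact key names should be checked against the chart's values.yaml:

```yaml
elasticsearch:
  enabled: false                # using an external cluster
  external: true
  host: "es.example.internal"   # placeholder host
  port: "9200"
  version: "v7"
  scheme: "http"
  username: "temporal"          # placeholder credentials
  password: "changeme"          # placeholder credentials
```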
Hello,
When I do a helm install and try to specify a custom annotation using something like this:
--set server.frontend.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal=true
It returns with this error:
Error: unable to build kubernetes objects from release manifest: unable to decode "": resource.metadataOnlyObject.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found t, error found in #10 byte of ...|nternal":true},"labe|..., bigger context ...|beta.kubernetes.io/azure-load-balancer-internal":true},"labels":{"app.kubernetes.io/component":"fron|...
helm.go:84: [debug] unable to decode "": resource.metadataOnlyObject.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found t, error found in #10 byte of ...|nternal":true},"labe|..., bigger context ...|beta.kubernetes.io/azure-load-balancer-internal":true},"labels":{"app.kubernetes.io/component":"fron|...
unable to build kubernetes objects from release manifest
Adding double quotes, escaped or not, results in something funky like this:
service.beta.kubernetes.io/azure-load-balancer-internal: '"true"'
These are all the parameters I am passing. It only complains about the custom annotation I am trying to set.
helm install \
--debug \
--dry-run \
-n temporal \
-f values/values.cassandra.yaml \
--set prometheus.enabled=false \
--set grafana.enabled=false \
--set elasticsearch.enabled=false \
--set kafka.enabled=false \
--set server.replicaCount=1 \
--set server.image.repository=temporalio/server \
--set admintools.image.repository=temporalio/admin-tools \
--set web.image.repository=temporalio/web \
--set server.config.persistence.default.cassandra.hosts=cas-temporal.db.westus.test.azure.com \
--set server.config.persistence.default.cassandra.user=temporalpe \
--set server.config.persistence.default.cassandra.password=temporalpe \
--set server.config.persistence.visibility.cassandra.hosts=cas-temporal.db.westus.test.azure.com \
--set server.config.persistence.visibility.cassandra.user=temporalpe \
--set server.config.persistence.visibility.cassandra.password=temporalpe \
--set web.ingress.enabled=true \
--set server.frontend.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal="true" \
--set web.ingress.hosts={temporal.svc.westus.test.azure.com} \
--set web.ingress.annotations\.kubernetes\.io/ingress\.class="nginx" \
--set web.ingress.annotations\.nginx\.org/mergeable-ingress-type="minion" \
--set server.frontend.service.type=LoadBalancer \
temporal . --timeout 15m
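One workaround that may help here (untested against this chart) is Helm's --set-string flag, which forces the value to stay a string instead of letting Helm coerce true into a YAML boolean:

```shell
# Hypothetical invocation: --set-string keeps "true" as a string, so the
# annotation renders as "true" rather than the boolean true that the
# Kubernetes API rejects. The escaped dots keep the annotation key intact.
helm install temporal . \
  --set-string 'server.frontend.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal=true'
```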
We're working on standing up the temporal service via helm and I noticed this while I was configuring the various yaml files. If a user configures a custom gRPC port for the frontend service, then the hardcoded default of 7933
will be incorrect.
helm-charts/templates/server-configmap.yaml
Line 182 in 2fb4639
It also seems that the localhost address 127.0.0.1 would be incorrect in a deployed environment, assuming the various services (history, matching, frontend, worker) are deployed separately.
Currently parts of our helm chart are production ready and parts are development only. This makes it difficult to communicate clearly about the status of the chart.
To fix this, we would like to shift this repo to house two charts: a production chart ("temporal") containing only the Temporal services, and a development chart ("temporal-development-deps") that looks more like the current chart but consumes the to-be-created Temporal-services-only chart as a dependency alongside the other dependencies.
After doing this we would also like to set up real helm releases for our charts.
Is your feature request related to a problem? Please describe.
There doesn't seem to be a way to configure TLS options for Postgres via the Helm chart. The underlying server supports it, so it just needs to be surfaced in values.
Describe the solution you'd like
Ability to configure Postgres TLS options via Temporal Helm Chart.
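For context, the server's SQL persistence config already has a tls block, so the chart would presumably need to surface something like the following; the key names below come from the server's SQL config and the host/paths are placeholders:

```yaml
server:
  config:
    persistence:
      default:
        driver: "sql"
        sql:
          driver: "postgres"
          host: "postgres.example.internal"  # placeholder host
          port: 5432
          tls:
            enabled: true
            caFile: /certs/ca.pem            # placeholder cert paths
            certFile: /certs/client.pem
            keyFile: /certs/client-key.pem
            enableHostVerification: true
```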
Hi, I'm running Kubernetes 1.19 with RKE, and when I install the chart these deprecation warnings appear:
W1208 21:07:40.172608 495469 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W1208 21:07:40.589121 495469 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W1208 21:07:40.644555 495469 warnings.go:67] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
Kubernetes 1.22 is scheduled for mid-2021.
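One common way to handle this in templates (a sketch, not necessarily how this chart is structured) is to branch on the API versions the cluster actually supports:

```yaml
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
```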
Please expose the TEMPORAL_CLI_ADDRESS environment variable inside the "admin tools" container, so we can simplify usage and not require passing --address to tctl when running from admin tools.
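Concretely, this could look something like the following in the admintools deployment template; the "temporal.fullname" helper is an assumption about the chart's naming conventions:

```yaml
containers:
  - name: admin-tools
    image: temporalio/admin-tools
    env:
      - name: TEMPORAL_CLI_ADDRESS
        # "temporal.fullname" is an assumed naming helper; the point is to
        # target the frontend service on its gRPC port.
        value: "{{ include "temporal.fullname" . }}-frontend:7233"
```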
Provide a configuration option that would allow deploying an instance of MySQL together with Temporal, and configuring Temporal to use this MySQL instance for its persistence.
The Kibana pod fails to come up:
The error logs from running kubectl -n temporal logs pod/temporal-kibana-f95df4f85-cb26q:
{"type":"log","@timestamp":"2020-07-10T16:02:03Z","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"case\" is disabled."}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins-system"],"pid":8,"message":"Setting up [37] plugins: [taskManager,siem,licensing,infra,encryptedSavedObjects,code,usageCollection,metrics,canvas,timelion,features,security,apm_oss,translations,reporting,uiActions,data,navigation,status_page,share,newsfeed,kibana_legacy,management,dev_tools,inspector,expressions,visualizations,embeddable,advancedUiActions,dashboard_embeddable_container,home,spaces,cloud,apm,graph,eui_utils,bfetch]"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","taskManager"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","siem"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","licensing"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","infra"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","encryptedSavedObjects"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["warning","plugins","encryptedSavedObjects","config"],"pid":8,"message":"Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","code"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","usageCollection"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","metrics"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","canvas"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","timelion"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","features"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","security"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["warning","plugins","security","config"],"pid":8,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","apm_oss"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","translations"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","data"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","share"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","home"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","spaces"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","cloud"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","apm"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","graph"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","plugins","bfetch"],"pid":8,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2020-07-10T16:02:33Z","tags":["info","savedobjects-service"],"pid":8,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2020-07-10T16:02:34Z","tags":["error","elasticsearch","data"],"pid":8,"message":"Request error, retrying\nGET http://elasticsearch-master:9200/_xpack => connect ECONNREFUSED 10.43.81.26:9200"}
{"type":"log","@timestamp":"2020-07-10T16:02:34Z","tags":["error","elasticsearch","admin"],"pid":8,"message":"Request error, retrying\nGET http://elasticsearch-master:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 10.43.81.26:9200"}
{"type":"log","@timestamp":"2020-07-10T16:02:34Z","tags":["error","elasticsearch","data"],"pid":8,"message":"Request error, retrying\nHEAD http://elasticsearch-master:9200/.apm-agent-configuration => connect ECONNREFUSED 10.43.81.26:9200"}
{"type":"log","@timestamp":"2020-07-10T16:02:35Z","tags":["warning","elasticsearch","data"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:35Z","tags":["warning","elasticsearch","data"],"pid":8,"message":"No living connections"}
Could not create APM Agent configuration: No Living connections
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["warning","elasticsearch","data"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["warning","elasticsearch","data"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["warning","plugins","licensing"],"pid":8,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:37Z","tags":["error","elasticsearch-service"],"pid":8,"message":"Unable to retrieve version information from Elasticsearch nodes."}
{"type":"log","@timestamp":"2020-07-10T16:02:39Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:39Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:42Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:42Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:44Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:44Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:47Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"Unable to revive connection: http://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2020-07-10T16:02:47Z","tags":["warning","elasticsearch","admin"],"pid":8,"message":"No living connections"}
{"type":"log","@timestamp":"2020-07-10T16:02:49Z","tags":["error","elasticsearch-service"],"pid":8,"message":"This version of Kibana (v7.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v6.8.8 @ 10.42.0.196:9200 (10.42.0.196), v6.8.8 @ 10.42.0.198:9200 (10.42.0.198), v6.8.8 @ 10.42.0.192:9200 (10.42.0.192)"}
Most notably:
This version of Kibana (v7.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v6.8.8 @ 10.42.0.196:9200 (10.42.0.196), v6.8.8 @ 10.42.0.198:9200 (10.42.0.198), v6.8.8 @ 10.42.0.192:9200 (10.42.0.192)
(Using default helm chart)
Reported by a customer:
https://temporalio.slack.com/archives/CTRCR8RBP/p1588095830424300
I'm installing the current helm chart into a kube cluster with all defaults and have run into the following problem:
{"level":"fatal","ts":"2020-04-28T17:00:33.140Z","msg":"Creating visibility producer failed","service":"history","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)","logging-call-at":"service.go:380","stacktrace":"github.com/temporalio/temporal/common/log/loggerimpl.(*loggerImpl).Fatal\n\t/temporal/common/log/loggerimpl/logger.go:140\ngithub.com/temporalio/temporal/service/history.NewService.func1\n\t/temporal/service/history/service.go:380\ngithub.com/temporalio/temporal/common/resource.New\n\t/temporal/common/resource/resourceImpl.go:211\ngithub.com/temporalio/temporal/service/history.NewService\n\t/temporal/service/history/service.go:393\ngithub.com/temporalio/temporal/cmd/server/temporal.(*server).startService\n\t/temporal/cmd/server/temporal/server.go:234\ngithub.com/temporalio/temporal/cmd/server/temporal.(*server).Start\n\t/temporal/cmd/server/temporal/server.go:79\ngithub.com/temporalio/temporal/cmd/server/temporal.startHandler\n\t/temporal/cmd/server/temporal/temporal.go:87\ngithub.com/temporalio/temporal/cmd/server/temporal.BuildCLI.func1\n\t/temporal/cmd/server/temporal/temporal.go:207\ngithub.com/urfave/cli.HandleAction\n\t/go/pkg/mod/github.com/urfave/[email protected]/app.go:492\ngithub.com/urfave/cli.Command.Run\n\t/go/pkg/mod/github.com/urfave/[email protected]/command.go:210\ngithub.com/urfave/cli.(*App).Run\n\t/go/pkg/mod/github.com/urfave/[email protected]/app.go:255\nmain.main\n\t/temporal/cmd/server/main.go:34\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
AFAICT, the Kafka pod is online; am I missing something in the docs?
(Thank you, Joseph!)
It turns out our helm configs rely on a specific helm deployment name (temporaltest).
Please remove this dependency, so the release does not have to be named temporaltest.
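The usual fix is to derive names from the release rather than hardcoding them, for example (a sketch; the frontend service name suffix is assumed):

```yaml
# Instead of a hardcoded "temporaltest-frontend:7233", derive the
# hostname from the release name at template time:
value: "{{ .Release.Name }}-frontend:7233"
```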
config_template.yaml has secrets in it and should be stored as a Kubernetes Secret, so that any Kubernetes or etcd encryption can be applied and access by pods/users can be limited.
As the title says, the schema update job rendering is broken: if a user is not using Cassandra, the update job will attempt to start the Temporal server, since the container receives no args.
helm-charts/templates/server-job.yaml
Lines 168 to 171 in ca48bfe
versus:
helm-charts/templates/server-job.yaml
Line 86 in ca48bfe
When ingress is enabled, it gives a conversion exception on the annotations and tls keys.
Temporal provides a file-based implementation to drive the dynamic config experience for the server. We need better support, or at least best-practice documentation, on how to configure this using the Helm charts.
I'm installing this chart with my own Cassandra, but I'd still love it to automatically create keyspaces and do all the initialization as if the Cassandra were built in. The need to build and manually run temporal-cassandra-tool disrupts my automation workflow.
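For reference, the manual steps the chart would need to automate look roughly like this; the endpoint, replication factor, and schema path are placeholders, and the flags should be checked against the temporal-cassandra-tool docs:

```shell
# Create the keyspace and apply the versioned schema against an external
# Cassandra cluster (CASSANDRA_HOST is a placeholder).
temporal-cassandra-tool --ep "$CASSANDRA_HOST" create -k temporal --rf 3
temporal-cassandra-tool --ep "$CASSANDRA_HOST" -k temporal setup-schema -v 0.0
temporal-cassandra-tool --ep "$CASSANDRA_HOST" -k temporal update-schema \
  -d ./schema/cassandra/temporal/versioned
```

The same sequence would be repeated for the temporal_visibility keyspace.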
This repo is missing the license file.
Describe the bug
Pods cannot pull Docker images.
To Reproduce
Steps to reproduce the behavior:
helm install \
  --set server.replicaCount=1 \
  --set cassandra.config.cluster_size=1 \
  --set prometheus.enabled=false \
  --set grafana.enabled=false \
  --set elasticsearch.enabled=false \
  --set kafka.enabled=false \
  temporaltest . --timeout 15m
Expected behavior
To see all pods up and Running
Screenshots/Terminal output
kubectl get pods
NAME READY STATUS RESTARTS AGE
temporaltest-admintools-7d58dc8455-9zzfp 0/1 ContainerCreating 0 37s
temporaltest-cassandra-0 0/1 ErrImagePull 0 37s
temporaltest-frontend-57d9458c7c-ntwvs 0/1 Init:ImagePullBackOff 0 37s
temporaltest-history-79c944d586-m6nwj 0/1 Init:ImagePullBackOff 0 37s
temporaltest-matching-687d4bdd6c-8qbvx 0/1 Init:0/4 0 37s
temporaltest-schema-setup-rp5w4 0/2 Init:ErrImagePull 0 37s
temporaltest-web-6d47bff77d-dbl66 0/1 ImagePullBackOff 0 37s
temporaltest-worker-7fd7db64fc-gxg6s 0/1 Init:ImagePullBackOff 0 37s
kubectl describe pod temporaltest-cassandra-0
Events:
Type Reason Age From Message
Normal Scheduled 54s default-scheduler Successfully assigned default/temporaltest-cassandra-0 to kind-worker2
Warning Failed 26s kubelet Failed to pull image "cassandra:3.11.3": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/cassandra:3.11.3": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/cassandra/manifests/sha256:ce85468c5badfa2e0a04ae6825eee9421b42d9b12d1a781c0dd154f70d1ca288: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 26s kubelet Error: ErrImagePull
Normal BackOff 25s kubelet Back-off pulling image "cassandra:3.11.3"
Warning Failed 25s kubelet Error: ImagePullBackOff
Normal Pulling 11s (x2 over 53s) kubelet Pulling image "cassandra:3.11.3"
Though I can pull the Cassandra image using the same Docker account as above:
docker pull docker.io/library/cassandra:3.11.3
3.11.3: Pulling from library/cassandra
6ae821421a7d: Pull complete
a0fef69a7a19: Pull complete
6849fd6936d9: Pull complete
832b4e4feae8: Pull complete
12e36f0fa0d9: Pull complete
625655f45ec7: Pull complete
c4392f7e9b96: Pull complete
4f6f85e6e245: Pull complete
e60258d103eb: Pull complete
30a7210918ab: Pull complete
Digest: sha256:ce85468c5badfa2e0a04ae6825eee9421b42d9b12d1a781c0dd154f70d1ca288
Status: Downloaded newer image for cassandra:3.11.3
docker.io/library/cassandra:3.11.3
Versions (please complete the following information where relevant):
Additional context
Describe the bug
Temporal Server start failed.
To Reproduce
The configures are provided in Additional context.
Expected behavior
Temporal Server successfully start.
Screenshots/Terminal output
Unable to start server. Error: unable to initialize system namespace: unable to register system namespace: CreateNamespace operation failed. Failed to commit transaction. Error: Error 1062: Duplicate entry '54321-2�hxr@��c��Y�j�' for key 'PRIMARY'
Versions (please complete the following information where relevant):
Additional context
First, I initialized the database using temporal-sql-tool version v1.9.2.
The Database Version is TiDB v4.0.x.
Several yaml configs are rewritten.
values/values.elasticsearch.yaml
elasticsearch:
enabled: false
external: true
host: "xxx"
port: "xxx"
version: "v6"
scheme: "http"
logLevel: "info"
values/values.mysql.yaml
server:
config:
persistence:
default:
driver: "sql"
sql:
driver: "mysql"
host: xxx
port: 3306
database: temporal
user: root
password: xxx
maxConns: 20
maxConnLifetime: "1h"
visibility:
driver: "sql"
sql:
driver: "mysql"
host: xxx
port: 3306
database: temporal_visibility
user: root
password: xxx
maxConns: 20
maxConnLifetime: "1h"
cassandra:
enabled: false
mysql:
enabled: true
postgresql:
enabled: false
schema:
setup:
enabled: false
update:
enabled: false
install cmd:
helm install -f values/values.elasticsearch.yaml -f values/values.mysql.yaml temporaltest \
--set prometheus.enabled=false \
--set grafana.enabled=false . --timeout 900s
Bug
Running the helm template command with helm v2 can result in whitespace being removed, invalidating the helm chart.
The output of range templates in server-service.yaml
and server-deployment.yaml
results in:
---apiVersion: v1
kind: Service
instead of
---
apiVersion: v1
kind: Service
See this issue with Cadence template for more detail: helm/helm#7149
To Reproduce
Expected behavior
Helm should not strip whitespace.
Versions (please complete the following information where relevant):
Additional context
I think this could be fixed by changing:
{{- range $service := (list "frontend" "history" "matching" "worker") }}
{{- $serviceValues := index $.Values.server $service -}}
to
{{- range $service := (list "frontend" "history" "matching" "worker") }}
{{- $serviceValues := index $.Values.server $service }}
which will not remove whitespace.
I'm not sure if this will have any unintended impact.
Is your feature request related to a problem? Please describe.
PostgreSQL 9.6 has an EOL date of November 11, 2021. Officially, the Temporal platform seems to support only PostgreSQL v9.6.
https://docs.temporal.io/docs/server/versions-and-dependencies/#persistence
https://www.postgresql.org/support/versioning/
Describe the solution you'd like
Official verbiage that more recent versions have been tested and are deemed validated as a supported database.
Additional context
When running the system via docker-compose (e.g. https://github.com/temporalio/temporal/blob/master/docker/docker-compose.yml), the system starts up with the default namespace already existing. However, this is not the case when deploying the system via the Helm chart.
Please add the default namespace when deploying Temporal via the Helm chart.
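In the meantime, the namespace can be registered by hand from the admintools pod, roughly as follows; the retention value is an example, and the flag shape should be checked against the tctl version in use:

```shell
# Register the "default" namespace with a 3-day retention (example value).
tctl --namespace default namespace register --retention 3
```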
PS Thanks to Remy for reporting this!
Describe the bug
In v1.10.5 the path of the ES index template is schema/elasticsearch/v7/visibility/index_template.json and not schema/elasticsearch/visibility/index_template_v7.json; therefore temporal-es-index-setup does not set up ES correctly.
Moreover, the default index name temporal_visibility_v1_dev in the values gives a wrong hint, because the index template wouldn't match the index: the index pattern has the following values:
"index_patterns": [
"temporal-visibility-*"
],
To Reproduce
Steps to reproduce the behavior:
Just perform a clean deployment from the current master (eac55bf)
Expected behavior
Create the right index template
Screenshots/Terminal output
Warning: Couldn't read data from file
Warning: "schema/elasticsearch/visibility/index_template_v7.json", this makes
Warning: an empty POST.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 163 100 163 0 0 4657 0 --:--:-- --:--:-- --:--:-- 4657
{"error":{"root_cause":[{"type":"parse_exception","reason":"request body is required"}],"type":"parse_exception","reason":"request body is required"},"status":400} % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 85 100 85 0 0 310 0 --:--:-- --:--:-- --:--:-- 310
{"acknowledged":true,"shards_acknowledged":true,"index":"temporal_visibility_v1_dev"}
In web-ingress.yaml, the tls and rules keys are at the same indentation level as spec, which makes the install fail.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Ingress): unknown field "rules" in io.k8s.api.networking.v1beta1.Ingress, ValidationError(Ingress): unknown field "tls" in io.k8s.api.networking.v1beta1.Ingress]
Kindly fix the indentation.
This is a feature request from customer, i.e. adding support for Kubernetes sidecar (SQL proxy)
Seems like sidecar pattern is best according to docs (https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine).
It seems like supporting generic sidecarContainers for temporal would be useful in the helm-charts anyways like prometheus does for server.sidecarContainers (https://github.com/helm/charts/tree/master/stable/prometheus#configuration).
Then you could extend it like in this comment: https://stackoverflow.com/a/62910122
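A generic passthrough along the lines of the prometheus chart's server.sidecarContainers could look like this in values (a sketch; the template side would render the list with toYaml, and the proxy instance string is a placeholder):

```yaml
server:
  sidecarContainers:
    - name: cloudsql-proxy   # example sidecar
      image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
      command:
        - /cloud_sql_proxy
        # Placeholder Cloud SQL instance connection name:
        - -instances=my-project:us-central1:my-instance=tcp:5432
```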
From slack:
Hi, we tested Cadence in an OpenShift environment; we used the Banzai Cloud Helm chart for Cadence (only supported with Kubernetes). In fact it doesn't work on OpenShift... but a small change in the Dockerfile makes it work under OpenShift. Could you add this to your Dockerfile (FROM ubercadence/server:0.11.0 RUN chmod 775 /etc/cadence/config)? This chmod is not a problem with Kubernetes and it will work on OpenShift. It's a win-win change, I think.
I have been trying to deploy Temporal.io with MySQL store support. I see the chart is heavily inspired by banzaicloud's Cadence chart, but it seems MySQL support has been explicitly removed from it, while Temporal itself supports MySQL.
Is there a specific reason?
I have made changes to this chart to support it for my use for now. If the Temporal team/community is interested, I can create a PR.
Please publish Temporal Helm chart to a helm chart repository (such as Helm Hub), so it is easier to install it.
Brought to our attention by Kyle, thank you!
Describe the bug
Error Details: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup workflow-frontend on 10.0.0.10:53: no such host"
Upon launching a fresh deployment in a non-default namespace, it appears my admintools service lacks a valid TEMPORAL_CLI_ADDRESS env var. The one that is set does not match the services launched. I don't know the full effects of the issue, but I know that when I exec into the container, I need to manually set the env var to a valid hostname.
To Reproduce
Steps to reproduce the behavior:
Use the temporal chart as a dependency.
helm install workflow -n workflow ./ --create-namespace
Exec into the admintools pod: kubectl exec -it -n workflow services/workflow-temporal-admintools /bin/bash
Run tctl namespace list and observe the no such host error.
Expected behavior
A list of declared namespaces.
Screenshots/Terminal output
% kubectl exec -it -nworkflow services/workflow-temporal-admintools /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.0# tctl namespace list
Error: Error when list namespaces info
Error Details: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup workflow-frontend on 10.0.0.10:53: no such host"
('export TEMPORAL_CLI_SHOW_STACKS=1' to see stack traces)
# export TEMPORAL_CLI_ADDRESS=workflow-temporal-frontend:7233
# tctl namespace list
Name: temporal-system
...
...
Versions (please complete the following information where relevant):
Additional context
The web deployment follows a different pattern:
https://github.com/iamjohnnym/helm-charts/blob/master/templates/web-deployment.yaml#L38
DB: Aurora MySQL 5.7.12, Serverless
Temporal: v1.3.0
Deployment fails when connecting to Aurora MySQL due to version <5.7.20:
Error:
{"level":"info","ts":"2020-11-15T13:47:53.291Z","msg":"Starting server for services","value":"[worker]","logging-call-at":"server.go:109"}
Unable to start server: sql schema version compatibility check failed: unable to create SQL connection: Error 1193: Unknown system variable 'transaction_isolation'.
From MySQL 8.0 Release Notes:
The tx_isolation and tx_read_only system variables have been removed. Use transaction_isolation and transaction_read_only instead.
From MySQL 5.7 Release Notes: https://dev.mysql.com/doc/refman/5.7/en/added-deprecated-removed.html#optvars-deprecated
tx_isolation: Default transaction isolation level. Deprecated as of MySQL 5.7.20.
Found similar issue addressed by Cadence here: banzaicloud/banzai-charts@8fbf828
Add a configuration option for deploying and configuring elastic search as part of the chart.
Add a configuration option for deploying and configuring Grafana as part of the chart.
Currently we use a successful TCP connection as a proxy for health in Kubernetes. Instead, we can use grpc_health_probe to check health over gRPC.
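For the frontend, that would look roughly like the following probe; the binary path, port, and health service name are assumptions to verify against the server version in use:

```yaml
readinessProbe:
  exec:
    command:
      - /bin/grpc_health_probe            # assumed binary location in the image
      - -addr=localhost:7233              # frontend gRPC port
      - -service=temporal.api.workflowservice.v1.WorkflowService
  initialDelaySeconds: 10
  periodSeconds: 10
```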
Describe the bug
Installing the Helm chart, I got an error in many of the services' (frontend, history, matching, etc.) init containers:
waiting for default keyspace to become ready
Apparently, the keyspace becomes ready with the Job: https://github.com/temporalio/helm-charts/blob/master/templates/server-job.yaml
The job has the following annotations:
annotations:
  {{- if .Values.cassandra.enabled }}
  "helm.sh/hook": post-install
  {{- else }}
  "helm.sh/hook": pre-install
  {{- end }}
It seems that the post-install hook doesn't execute until after the pods are ready, which they never are because they're waiting for this job to run.
The job already has init containers that are waiting for cassandra to come up, so I'm not sure the install hooks are necessary.
To Reproduce
Steps to reproduce the behavior:
server:
  enabled: true
  replicaCount: 1
  config:
    persistence:
      default:
        driver: "cassandra"
        cassandra:
          hosts: ["temporal-cassandra.temporal.svc.cluster.local"]
          # port: 9042
          keyspace: "temporal"
          user: "user"
          password: "password"
          existingSecret: ""
          replicationFactor: 1
          consistency:
            default:
              consistency: "local_quorum"
              serialConsistency: "local_serial"
      visibility:
        driver: "cassandra"
        cassandra:
          hosts: ["temporal-cassandra.temporal.svc.cluster.local"]
          keyspace: "temporal_visibility"
          user: "user"
          password: "password"
          existingSecret: ""
          replicationFactor: 1
          consistency:
            default:
              consistency: "local_quorum"
              serialConsistency: "local_serial"
  frontend:
    replicaCount: 1
  history:
    replicaCount: 1
  matching:
    replicaCount: 1
  worker:
    replicaCount: 1
admintools:
  enabled: true
web:
  enabled: true
  replicaCount: 1
schema:
  setup:
    enabled: true
    backoffLimit: 100
  update:
    enabled: true
    backoffLimit: 100
elasticsearch:
  enabled: false
prometheus:
  enabled: false
grafana:
  enabled: false
cassandra:
  enabled: true
  persistence:
    enabled: false
  config:
    cluster_size: 3
    ports:
      cql: 9042
    num_tokens: 4
    max_heap_size: 512M
    heap_new_size: 128M
    seed_size: 0
    env:
      CASSANDRA_PASSWORD: password
Expected behavior
The init containers should complete once the keyspace is ready, and the service pods should become Ready.
Screenshots/Terminal output
The log of the check-cassandra-temporal-schema init container of the frontend service during the deployment yields:
waiting for default keyspace to become ready
Versions (please complete the following information where relevant):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-eks-c5067d", GitCommit:"c5067dd1eb324e934de1f5bd4c593b3cddc19d88", GitTreeState:"clean", BuildDate:"2021-03-05T23:39:01Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
0.9.3
As with the persistence passwords, the Elasticsearch password should be stored in a secret.
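A sketch of what that could look like (the Secret name and key are hypothetical, and the password value is an example only):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: temporal-elasticsearch-secret   # hypothetical name
type: Opaque
stringData:
  password: "changeme"                  # example value only
```

The server deployment would then reference the key via env valueFrom/secretKeyRef, the same pattern already used for persistence store passwords.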
Describe the bug
When installed with the default web.image.tag of 1.10.1, the temporal-web pods fail to start (ImagePullBackOff).
It appears that the 1.10.1 tag does not yet exist for the temporalio/web image (tags).
To Reproduce
Steps to reproduce the behavior:
The temporal-web pods remain in the ImagePullBackOff status.
Expected behavior
All pods deployed as a result of the chart installation eventually reach a Running status.
Screenshots/Terminal output
$ kubectl get pods -n temporal
NAME READY STATUS RESTARTS AGE
temporal-admintools-8b848bc98-wsvzv 1/1 Running 0 4m38s
temporal-cassandra-0 1/1 Running 0 4m38s
temporal-cassandra-1 1/1 Running 0 2m51s
temporal-cassandra-2 0/1 Running 0 65s
temporal-frontend-cbf6c8767-n97qw 1/1 Running 3 4m38s
temporal-history-6cd9cb7676-4jfjd 1/1 Running 3 4m38s
temporal-matching-658b7464cf-wjd84 1/1 Running 3 4m38s
temporal-web-569975fff5-k7cxr 0/1 ImagePullBackOff 0 4m38s
temporal-worker-5c9bc769ff-d9zzq 1/1 Running 3 4m38s
$ kubectl describe pod temporal-web-569975fff5-k7cxr -n temporal
Name: temporal-web-569975fff5-k7cxr
Namespace: temporal
Priority: 0
Node: <elided>
Start Time: Fri, 09 Jul 2021 14:48:03 -0500
Labels: app.kubernetes.io/component=web
app.kubernetes.io/instance=temporal
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=temporal
app.kubernetes.io/part-of=temporal
app.kubernetes.io/version=1.10.1
helm.sh/chart=temporal-0.10.1
pod-template-hash=569975fff5
Annotations: <none>
Status: Pending
IP: <elided>
IPs:
IP: <elided>
Controlled By: ReplicaSet/temporal-web-569975fff5
Containers:
temporal-web:
Container ID:
Image: temporalio/web:1.10.1
Image ID:
Port: 8088/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment:
TEMPORAL_GRPC_ENDPOINT: temporal-frontend.temporal.svc:7233
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6hf98 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-6hf98:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6hf98
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned temporal/temporal-web-569975fff5-k7cxr to <elided>
Normal Pulling 9m53s (x4 over 11m) kubelet Pulling image "temporalio/web:1.10.1"
Warning Failed 9m53s (x4 over 11m) kubelet Failed to pull image "temporalio/web:1.10.1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/temporalio/web:1.10.1": failed to resolve reference "docker.io/temporalio/web:1.10.1": docker.io/temporalio/web:1.10.1: not found
Warning Failed 9m53s (x4 over 11m) kubelet Error: ErrImagePull
Normal BackOff 9m27s (x7 over 11m) kubelet Back-off pulling image "temporalio/web:1.10.1"
Warning Failed 82s (x42 over 11m) kubelet Error: ImagePullBackOff
Additional context
Until temporalio/web:1.10.1 is available, users can set web.image.tag to 1.10.0 at install/upgrade time.
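One way to apply that workaround is a values override (a sketch; the key path follows the chart's web.image.tag value mentioned above):

```yaml
# Pin the web image until temporalio/web:1.10.1 is published.
web:
  image:
    tag: "1.10.0"
```

The same effect can be achieved on the command line with --set web.image.tag=1.10.0.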
Allow a configuration option for having the Temporal instance deployed by Helm to use an instance of MySQL that already exists, outside of the deployment.
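A values sketch of what such an option might look like (the keys mirror the chart's cassandra persistence block; the host, credentials, and Secret name are placeholders, not real chart values):

```yaml
cassandra:
  enabled: false
mysql:
  enabled: false   # don't deploy a bundled MySQL either
server:
  config:
    persistence:
      default:
        driver: "sql"
        sql:
          driver: "mysql"
          host: "mysql.example.internal"       # pre-existing instance
          port: 3306
          database: "temporal"
          user: "temporal"
          existingSecret: "temporal-mysql-password"
```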
Describe the bug
I was trying to disable Cassandra and use the chart with PostgreSQL. It looks like this cannot be done because of the server-job template.
To Reproduce
Steps to reproduce the behavior:
helm install \
--set server.replicaCount=1 \
--set cassandra.enabled=false \
--set elasticsearch.enabled=false \
--set kafka.enabled=false \
--set postgresql.enabled=true \
temporaltest . --timeout 15m
Error: execution error at (temporal/templates/server-job.yaml:92:24): Please specify cassandra port for default store
Expected behavior
Temporal starts without cassandra but with postgresql.
The first time I installed this I ended up having longer names for pods/svcs than indicated in the docs:
$ k get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 12m
elasticsearch-master-1 1/1 Running 0 12m
elasticsearch-master-2 1/1 Running 0 12m
temporlicious-cassandra-0 1/1 Running 0 12m
temporlicious-cassandra-1 1/1 Running 0 10m
temporlicious-cassandra-2 1/1 Running 0 7m43s
temporlicious-grafana-8684f55d85-cspgp 1/1 Running 0 12m
temporlicious-kafka-0 1/1 Running 5 12m
temporlicious-kube-state-metrics-689fdb76cc-6cv7n 1/1 Running 0 12m
temporlicious-prometheus-alertmanager-69fb7f4f6d-twwcw 2/2 Running 0 12m
temporlicious-prometheus-pushgateway-6dd8fdbbbc-mk64k 1/1 Running 0 12m
temporlicious-prometheus-server-564fbc54d9-4w6xq 2/2 Running 0 12m
temporlicious-temporal-admintools-5df8cdbd55-rt69b 1/1 Running 0 12m
temporlicious-temporal-frontend-5687b9f84c-rjdls 1/1 Running 0 12m
temporlicious-temporal-history-66b88465c6-9gst5 1/1 Running 4 12m
temporlicious-temporal-matching-7745757866-cfhms 1/1 Running 0 12m
temporlicious-temporal-web-85cf5cfdcc-77rr9 1/1 Running 0 12m
temporlicious-temporal-worker-57965bc4dc-bnwtx 1/1 Running 4 12m
temporlicious-zookeeper-0 1/1 Running 0 12m
This also had the side effect of tctl in the admintools pod not having the correct address for the frontend (it was looking for temporlicious-frontend rather than temporlicious-temporal-frontend).
I thought this was because I installed into the default namespace, and when I tried installing into a temporal namespace the names were as expected (without the -temporal- label in the middle), but when I tried installing in default again I couldn't reproduce the issue.
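If the chart follows the common Helm fullname convention, a values override along these lines should shorten the generated names (assumption: the chart exposes fullnameOverride, as most charts generated from the standard scaffold do):

```yaml
# Drop the "<release>-temporal" prefix from generated resource names.
fullnameOverride: "temporal"
```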
Hi, would it be possible for you to add temporal-web to the chart?
Necessary changes that should enable archival at the cluster level: #63
There is still something missing in the PR
Repro:
helm install temporaltest . --timeout 900s
kubectl exec -it services/temporaltest-admintools -- bash -c "tctl --ns nstest n register -has enabled -vas enabled"
kubectl exec -it services/temporaltest-admintools -- bash -c "tctl --ns nstest namespace describe"
Expected:
Archivals are enabled
Actual:
Archivals are disabled
I just did a fresh install using your helm chart
org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-6.8.8.jar:6.8.8]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-6.8.8.jar:6.8.8]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:564) [elasticsearch-6.8.8.jar:6.8.8]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.8.jar:6.8.8]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
[2021-01-18T22:44:42,691][WARN ][o.e.d.z.ZenDiscovery ] [elasticsearch-master-0] not enough master nodes discovered during pinging (found [[Candidate{node={elasticsearch-master-0}{fT0h0QSLRviF288vMzreMQ}{JIIqoPB_SD2J6Ugj3Cqu6w}{10.10.2.16}{10.10.2.16:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
Any suggestions? Thank you.
If the server config is missing the log section, the server sets the output stream to stderr.
Users need to explicitly configure the logger by providing the following section in the config:
log:
  stdout: true
  level: info
Relevant code which sets the logger.
I need to access Temporal from outside the cluster: the web UI via an ingress, and the RPC frontend service.
We have a MetalLB with external-dns setup, but for Temporal some customizations are missing:
The current README doesn't make it super-clear how to install temporal without ElasticSearch/extended Visibility features.
Here is a thread that describes this setup:
https://community.temporal.io/t/running-visibility-on-my-sql-in-k8s-setup/369?u=markmark
Please update the README to explicitly call out this scenario.
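A minimal sketch of that scenario, assuming the chart's top-level elasticsearch toggle and that visibility then runs on the standard store (see the linked thread for the SQL variant):

```yaml
elasticsearch:
  enabled: false
server:
  config:
    persistence:
      visibility:
        driver: "cassandra"   # or "sql", per the linked thread
```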
Add a configuration option for deploying and configuring Prometheus as part of the chart
Hopefully Temporal ported over the Cassandra + CQL updates to allow Cassandra-over-TLS support. Currently we maintain our own fork of the old Cadence Helm charts to add support for a ConfigMap-mounted TLS CA cert and a TLS server config that loads said CA cert and connects to Cassandra over TLS. Not sure if I have time to port them over and test them. Quick dump of some changes; these are kinda hacky but they work:
--- a/_infra/charts/cadence/templates/server-deployment.yaml
+++ b/_infra/charts/cadence/templates/server-deployment.yaml
@@ -102,12 +102,22 @@ spec:
         - name: config
           mountPath: /etc/cadence/config/config_template.yaml
           subPath: config_template.yaml
+        {{- if $.Values.tls.enabled }}
+        - name: certs
+          mountPath: /tlsfiles/caCert.pem
+          subPath: caCert.pem
+        {{- end}}
       resources:
         {{- toYaml (default $.Values.server.resources $serviceValues.resources) | nindent 12 }}
   volumes:
     - name: config
       configMap:
         name: {{ include "cadence.fullname" $ }}
+    {{- if $.Values.tls.enabled }}
+    - name: certs
+      configMap:
+        name: {{ include "cadence.fullname" $ }}-tlsfiles
+    {{- end}}
   {{- with (default $.Values.server.nodeSelector $serviceValues.nodeSelector) }}
   nodeSelector:
     {{- toYaml . | nindent 8 }}
diff --git a/_infra/charts/cadence/templates/tls-configmap.yaml b/_infra/charts/cadence/templates/tls-configmap.yaml
new file mode 100644
index 0000000..bf8f49b
--- /dev/null
+++ b/_infra/charts/cadence/templates/tls-configmap.yaml
@@ -0,0 +1,16 @@
+{{- if .Values.tls.enabled }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ include "cadence.fullname" . }}-tlsfiles
+  labels:
+    app.kubernetes.io/name: {{ include "cadence.name" . }}
+    helm.sh/chart: {{ include "cadence.chart" . }}
+    app.kubernetes.io/managed-by: {{ .Release.Service }}
+    app.kubernetes.io/instance: {{ .Release.Name }}
+    app.kubernetes.io/version: {{ .Chart.AppVersion | replace "+" "_" }}
+    app.kubernetes.io/part-of: {{ .Chart.Name }}
+data:
+  caCert.pem: |
+{{ .Files.Get (printf "%s" .Values.tls.caCert) | indent 4 }}
+{{- end }}
\ No newline at end of file
Example Usage:
diff --git a/_infra/charts/cadence/values-staging.yaml b/_infra/charts/cadence/values-staging.yaml
index 26b78f5..d132bb3 100644
--- a/_infra/charts/cadence/values-staging.yaml
+++ b/_infra/charts/cadence/values-staging.yaml
@@ -19,6 +19,9 @@ server:
         keyspace: cadence001
         user: "cadence001"
         existingSecret: "cadence001-default-store"
+        tls:
+          enabled: true
+          caFile: "/tlsfiles/caCert.pem"
       visibility:
         driver: "cassandra"
         cassandra:
@@ -27,6 +30,9 @@ server:
         keyspace: cadence001_visibility
         user: "cadence001_visibility"
         existingSecret: "cadence001-visibility-store"
+        tls:
+          enabled: true
+          caFile: "/tlsfiles/caCert.pem"
 schema:
   setup:
     enabled: false
@@ -36,3 +42,7 @@ cassandra:
   enabled: false
 mysql:
   enabled: false
+
+tls:
+  enabled: true
+  caCert: "staging-cert.pem"
Describe the bug
If an external Elasticsearch is used, you cannot disable user authentication, and because of that the init containers will fail during the ES checks (e.g. with the AWS ES service).
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Add the possibility to continue without a username and password.
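A values sketch of what optional credentials could look like (the keys are hypothetical, not current chart values; the point is that empty credentials should make the chart skip basic auth entirely):

```yaml
elasticsearch:
  enabled: false          # don't deploy the bundled ES
  external: true
  host: "es.example.internal"
  port: 9200
  # Hypothetical: leaving these empty should skip basic auth in the
  # init-container checks and in the server config.
  username: ""
  password: ""
```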