openshift / telemeter


Prometheus push federation

License: Apache License 2.0


telemeter's Introduction

Telemeter

Telemeter is a set of components used for OpenShift remote health monitoring. It allows OpenShift clusters to push telemetry data to Red Hat as Prometheus metrics.

Telemeter Architecture

telemeter-server needs to receive and send metrics across multiple security boundaries, and thus needs to perform several authentication, authorization and data integrity checks. It (currently) has two endpoints via which it receives metrics and forwards them to an upstream service as a Prometheus remote write request.

/upload endpoint (receives metrics in []*client_model.MetricFamily format from telemeter-client, currently used by CMO)

Telemeter implements a Prometheus federation push client and server to allow isolated Prometheus instances that cannot be scraped from a central Prometheus to instead perform authorized push federation to a central location.

The telemeter-client is deployed via the OpenShift Cluster Monitoring Operator and performs the following set of actions via a forwarder.Worker every 4 minutes and 30 seconds (by default).

  1. On initialization, telemeter-client sends a POST request to the /authorize endpoint of telemeter-server with its configured token (set via --to-token/--to-token-file) as an auth header and the cluster ID as an id request query parameter (set via --id). It exchanges the token for a JWT token and also receives a set of labels to include. Each client is uniquely identified by a cluster ID, and all federated metrics are labelled with that ID. For more details, see the /authorize section below.
  2. It caches this token and the labels in a tokenStore and returns an HTTP roundtripper. The roundtripper checks the validity of the cached token and refreshes it if needed before attaching it to any request it sends to telemeter-server.
  3. telemeter-client sends a GET request to the /federate endpoint of the in-cluster Prometheus instance and scrapes all metrics (authenticating via --from-ca-file + --from-token/--from-token-file). It reads the metrics from the response body and parses them into a []*client_model.MetricFamily value. You can pass --match arguments to restrict which series are federated.
  4. telemeter-client performs some transformations on the collected metrics to anonymize them, rename them, and add the labels provided by the roundtripper tokenStore and CLI args.
  5. telemeter-client then encodes the metrics (of type []*client_model.MetricFamily) into a POST request body and sends it to the /upload endpoint of telemeter-server, thereby "pushing" metrics (a simplified end-to-end sketch of this flow follows this list).
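The following is a minimal, stdlib-only sketch of that client loop. It assumes simplified request and response shapes (the real implementation parses []*client_model.MetricFamily and lives in the forwarder package); the endpoint URLs, the example match rule, and the plain-text JWT response are illustrative assumptions, not the actual wire format.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// exchangeToken posts the configured token and cluster ID to the
// telemeter-server /authorize endpoint and returns the signed JWT.
// Returning the body as a plain-text JWT is a simplification; the real
// response is JSON and also carries labels to attach to every metric.
func exchangeToken(server, clusterID, token string) (string, error) {
	req, err := http.NewRequest(http.MethodPost, server+"/authorize?id="+url.QueryEscape(clusterID), nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("authorize failed: %s", resp.Status)
	}
	return string(body), nil
}

// forwardOnce scrapes /federate from the in-cluster Prometheus and pushes the
// scraped payload to the telemeter-server /upload endpoint. The real client
// parses, anonymizes, and relabels the metric families in between.
func forwardOnce(prometheusURL, serverURL, jwt string, matches []string) error {
	q := url.Values{}
	for _, m := range matches {
		q.Add("match[]", m)
	}
	resp, err := http.Get(prometheusURL + "/federate?" + q.Encode())
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	payload, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost, serverURL+"/upload", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+jwt)
	push, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer push.Body.Close()
	if push.StatusCode/100 != 2 {
		return fmt.Errorf("upload failed: %s", push.Status)
	}
	return nil
}

func main() {
	jwt, err := exchangeToken("https://telemeter-server.example.com", "example-cluster-id", "example-token")
	if err != nil {
		panic(err)
	}
	// forwarder.Worker runs this on a 4m30s interval by default.
	for range time.Tick(4*time.Minute + 30*time.Second) {
		if err := forwardOnce("http://localhost:9090", "https://telemeter-server.example.com", jwt, []string{`{_id!=""}`}); err != nil {
			fmt.Println("forward failed:", err)
		}
	}
}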

Upon receiving a request at the /upload endpoint, telemeter-server does the following:

  1. It authorizes the request by inspecting the JWT attached in the auth header, via authorize.NewAuthorizeClientHandler, which uses the jwt.clientAuthorizer struct (an implementation of the authorize.ClientAuthorizer interface) to uniquely identify the telemeter-client.
  2. If the client is successfully identified, it places an authorize.Client into the request context, from which the cluster ID is later extracted via the server.ClusterID middleware.
  3. It then checks whether the cluster the request came from is under the configured request rate limit.
  4. If the request is within the rate limit, telemeter-server validates and transforms the metrics encoded in the request body: it checks the request body size, applies whitelist label matcher rules and elides labels (configured via --whitelist and --elide-label), and enforces the cluster ID label. It also overwrites all the timestamps that came with the metric families and records the drift, if any.
  5. The server then converts the received metric families to []prompb.TimeSeries (see the conversion sketch after this list). During conversion it again drops any incoming timestamps and replaces them with the current timestamp. It then marshals the result into a Prometheus remote write request and forwards it to the Observatorium API using an oauth2.Client (configured via OIDC flags), which obtains a token from SSO and attaches the correct auth header.
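A rough sketch of that conversion step, covering only gauge, counter, and untyped samples (histograms and summaries need per-bucket handling); the function name and structure are illustrative, not the server's actual code.

package convert

import (
	"time"

	clientmodel "github.com/prometheus/client_model/go"
	"github.com/prometheus/prometheus/prompb"
)

// toTimeSeries converts decoded metric families into remote write series.
// Every sample is stamped with the current time, regardless of the timestamp
// the client sent.
func toTimeSeries(families []*clientmodel.MetricFamily) []prompb.TimeSeries {
	now := time.Now().UnixMilli()
	var series []prompb.TimeSeries
	for _, mf := range families {
		for _, m := range mf.Metric {
			labels := []prompb.Label{{Name: "__name__", Value: mf.GetName()}}
			for _, lp := range m.Label {
				labels = append(labels, prompb.Label{Name: lp.GetName(), Value: lp.GetValue()})
			}
			var value float64
			switch {
			case m.Gauge != nil:
				value = m.Gauge.GetValue()
			case m.Counter != nil:
				value = m.Counter.GetValue()
			case m.Untyped != nil:
				value = m.Untyped.GetValue()
			default:
				continue // histograms/summaries omitted in this sketch
			}
			series = append(series, prompb.TimeSeries{
				Labels:  labels,
				Samples: []prompb.Sample{{Value: value, Timestamp: now}},
			})
		}
	}
	return series
}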

/authorize (for telemeter-client)

telemeter-server implements an authorization endpoint for telemeter-client, which does the following:

  1. telemeter-server uses jwt.NewAuthorizeClusterHandler, which accepts POST requests carrying an auth header token and an "id" query parameter.
  2. This handler uses tollbooth.NewAuthorizer, which implements the authorize.ClusterAuthorizer interface, to authorize that particular cluster. It uses authorize.AgainstEndpoint to send the cluster ID and token as a POST request to the authorization server (configured via --authorize); the authorization server returns a 200 status code if the cluster is identified correctly (a stdlib-only sketch of this check follows the list).
  3. tollbooth.AuthorizeCluster returns a subject, which is used as the client identifier in a generated signed JWT that is returned to the telemeter-client along with any labels.
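A stdlib-only sketch of that upstream check; the request shape (cluster ID as a form field) is an assumption for illustration, and issuing the signed JWT is left out.

package sketch

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// authorizeCluster posts the cluster ID and token to the upstream
// authorization server (the --authorize endpoint). A 200 response means the
// cluster is known; the real handler then issues a signed JWT for the client.
func authorizeCluster(authorizeURL, clusterID, token string) error {
	form := url.Values{"id": {clusterID}}
	req, err := http.NewRequest(http.MethodPost, authorizeURL, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("cluster %q not authorized: %s", clusterID, resp.Status)
	}
	return nil
}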

/metrics/v1/receive endpoint (receives metrics in prompb.WriteRequest format from any client)

telemeter-server also supports receiving remote write requests directly from in-cluster Prometheus (or any Prometheus with the appropriate auth header). In this case, telemeter-client is no longer needed.

Any client sending a remote write request will need to attach a composite token as an auth header to the request, so that telemeter-server can identify which cluster the request belongs to. You can generate the token as follows:

CLUSTER_ID="$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}')" && \
AUTH="$(oc get secret pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' | jq '.auths."cloud.openshift.com"'.auth)" && \
echo -n "{\"authorization_token\":$AUTH,\"cluster_id\":\"$CLUSTER_ID\"}" | base64 -w 0

The client is also responsible for ensuring that all metrics sent carry the _id (cluster ID) label (see the sketch below). Sending metric metadata is not supported.
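A sketch of what such a client might do, building the composite token in Go and sending a minimal snappy-compressed remote write request. The endpoint URL and label values are placeholders, and the Content-Type/Content-Encoding headers follow the usual Prometheus remote write conventions rather than documented telemeter requirements.

package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"net/http"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// compositeToken builds the base64-encoded JSON token expected by
// /metrics/v1/receive.
func compositeToken(clusterID, authorizationToken string) (string, error) {
	raw, err := json.Marshal(map[string]string{
		"cluster_id":          clusterID,
		"authorization_token": authorizationToken,
	})
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(raw), nil
}

func main() {
	token, err := compositeToken("example-cluster-id", "example-pull-secret-token")
	if err != nil {
		panic(err)
	}

	// A single series; note the mandatory _id (cluster ID) label.
	wr := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "up"},
				{Name: "_id", Value: "example-cluster-id"},
			},
			Samples: []prompb.Sample{{Value: 1, Timestamp: time.Now().UnixMilli()}},
		}},
	}
	data, err := proto.Marshal(wr)
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://telemeter-server.example.com/metrics/v1/receive",
		bytes.NewReader(snappy.Encode(nil, data)))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("Content-Type", "application/x-protobuf")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}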

Upon receiving a request at this endpoint, telemeter-server does the following:

  1. telemeter-server parses the bearer token (decoding the base64 JSON with its "cluster_id" and "authorization_token" fields) via authorize.NewHandler.
  2. It then sends this as a POST request to the authorization server (configured via --authorize) using authorize.AgainstEndpoint; the authorization server returns a 200 status code if the cluster is identified correctly.
  3. telemeter-server then checks the request body size and whether all metrics in the remote write request carry the cluster ID label (_id by default), as in the sketch after this list. It also drops metrics that do not match the whitelist label matchers and elides labels (configured via --whitelist and --elide-label).
  4. It then forwards the request to the Observatorium API using an oauth2.Client (configured via OIDC flags), which obtains a token from SSO and attaches the correct auth header.
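A sketch of the cluster ID check from step 3; the function name and error message are illustrative.

package sketch

import (
	"fmt"

	"github.com/prometheus/prometheus/prompb"
)

// validateClusterID rejects a remote write request unless every series
// carries the cluster ID label (_id by default).
func validateClusterID(wr *prompb.WriteRequest, clusterIDLabel string) error {
	for _, ts := range wr.Timeseries {
		found := false
		for _, l := range ts.Labels {
			if l.Name == clusterIDLabel {
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("series is missing required %q label", clusterIDLabel)
		}
	}
	return nil
}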

This is planned to be adopted by CMO.

note: Telemeter is alpha and may change significantly

Get started

To see this in action, run

make test-integration

The command launches a two-instance telemeter-server cluster and a single telemeter-client to talk to that server, along with a Prometheus instance running on http://localhost:9090 that shows the federated metrics. The client scrapes metrics from the local Prometheus and sends them to the telemeter-server, which forwards them to Thanos Receive, where they can be queried via a Thanos Querier.

To build binaries, run

make build

To execute the unit test suite, run

make test-unit

Adding new metrics to send via telemeter

Docs on why and how to send these metrics are available here.

Testing recording rule changes

Run

make test-rules

telemeter's People

Contributors

aditya-konarde, arilivigni, brancz, crawford, douglascamata, itdove, jan--f, jfchevrette, joaobravecoding, kahowell, kakkoyun, lilic, maorfr, marioferh, matej-g, metalmatze, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, paulfantom, philipgough, rporres, s-urbaniak, samuelstuchly, saswatamcode, simonpasquier, slashpai, smarterclayton, squat, thibaultmg


telemeter's Issues

Telemeter server: could not join any of [telemeter-server]

Possibly related to #68 ?

2018/10/29 13:20:22 error: Could not join any of [telemeter-server]: 1 error occurred:
* Failed to resolve telemeter-server: lookup telemeter-server on 172.35.22.33:53: no such host

There is a telemeter-server service in the namespace. Also if I rsh into the telemeter-server pod, I can resolve the name.

sh-4.2$ host -v telemeter-server
Trying "telemeter-server.telemeter-production.svc.cluster.local"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11630
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;telemeter-server.telemeter-production.svc.cluster.local. IN A

;; ANSWER SECTION:
telemeter-server.telemeter-production.svc.cluster.local. 30 IN A 10.128.10.43
telemeter-server.telemeter-production.svc.cluster.local. 30 IN A 10.128.12.111
telemeter-server.telemeter-production.svc.cluster.local. 30 IN A 10.129.10.105

Received 121 bytes from 172.35.22.33#53 in 0 ms
Trying "telemeter-server.telemeter-production.svc.cluster.local"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23352
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;telemeter-server.telemeter-production.svc.cluster.local. IN AAAA

;; AUTHORITY SECTION:
cluster.local.          60      IN      SOA     ns.dns.cluster.local. hostmaster.cluster.local. 1540818000 28800 7200 604800 60

Received 127 bytes from 172.35.22.33#53 in 0 ms
Trying "telemeter-server.telemeter-production.svc.cluster.local"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10232
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;telemeter-server.telemeter-production.svc.cluster.local. IN MX

;; AUTHORITY SECTION:
cluster.local.          60      IN      SOA     ns.dns.cluster.local. hostmaster.cluster.local. 1540818000 28800 7200 604800 60

Received 127 bytes from 172.35.22.33#53 in 0 ms

Build error

  • I am getting the error below while building the project:

priyank@priyank-HP-Pavilion-Laptop-15-cc1xx:~/go/src/telemeter$ make
find: ‘./benchmark’: No such file or directory
go build ./cmd/telemeter-client
cmd/telemeter-client/main.go:21:2: cannot find package "github.com/openshift/telemeter/pkg/forwarder" in any of:
/home/priyank/go/src/telemeter/vendor/github.com/openshift/telemeter/pkg/forwarder (vendor tree)
/usr/local/go/src/github.com/openshift/telemeter/pkg/forwarder (from $GOROOT)
/home/priyank/go/src/github.com/openshift/telemeter/pkg/forwarder (from $GOPATH)

cmd/telemeter-client/main.go:22:2: cannot find package "github.com/openshift/telemeter/pkg/http" in any of:
/home/priyank/go/src/telemeter/vendor/github.com/openshift/telemeter/pkg/http (vendor tree)
/usr/local/go/src/github.com/openshift/telemeter/pkg/http (from $GOROOT)
/home/priyank/go/src/github.com/openshift/telemeter/pkg/http (from $GOPATH)
cmd/telemeter-client/main.go:23:2: cannot find package "github.com/openshift/telemeter/pkg/metricfamily" in any of:
/home/priyank/go/src/telemeter/vendor/github.com/openshift/telemeter/pkg/metricfamily (vendor tree)
/usr/local/go/src/github.com/openshift/telemeter/pkg/metricfamily (from $GOROOT)
/home/priyank/go/src/github.com/openshift/telemeter/pkg/metricfamily (from $GOPATH)
Makefile:18: recipe for target 'build' failed
make: *** [build] Error 1

  • I tried to resolve this by getting this project as vendor dependency using glide, but the latest release available is 3.11.0:

priyank@priyank-HP-Pavilion-Laptop-15-cc1xx:~/go/src/telemeter$ glide get github.com/openshift/telemeter
[WARN] The name listed in the config file (github.com/openshift/telemeter) does not match the current location (telemeter)
[INFO] Preparing to install 1 package.
[INFO] Attempting to get package github.com/openshift/telemeter
[INFO] --> Gathering release information for github.com/openshift/telemeter
[INFO] The package github.com/openshift/telemeter appears to have Semantic Version releases (http://semver.org).
[INFO] The latest release is v3.11.0. You are currently not using a release. Would you like
[INFO] to use this release? Yes (Y) or No (N)
Y

  • I continued with 3.11 release but still getting error for metricfamily:

cmd/telemeter-client/main.go:23:2: cannot find package "github.com/openshift/telemeter/pkg/metricfamily" in any of:
/home/priyank/go/src/telemeter/vendor/github.com/openshift/telemeter/pkg/metricfamily (vendor tree)
/usr/local/go/src/github.com/openshift/telemeter/pkg/metricfamily (from $GOROOT)
/home/priyank/go/src/github.com/openshift/telemeter/pkg/metricfamily (from $GOPATH)

Am I doing anything wrong, or is the build broken?

Telemeter server manifest has bogus image definition

The image is defined under the wrong scope:

- apiVersion: apps/v1beta2
image: ${IMAGE}:${IMAGE_TAG}
kind: StatefulSet

.. and is also defined under spec.template.spec.containers[] here, without template variables.

image: quay.io/openshift/origin-telemeter:v4.0

My knowledge of jsonnet is very minimal, but it seems the bogus image definition comes from here

if object.kind == 'StatefulSet' then { image: '${IMAGE}:${IMAGE_TAG}' }

And I believe we need the following to output the template variables rather than the hardcoded image:tag:

container.new('telemeter-server', $._config.imageRepos.telemeterServer + ':' + $._config.versions.telemeterServer) +

This currently prevents the deployment of telemeter-server.

Reader Limit

In telemeter-client there is a limit of 200 * 1024 bytes for the Prometheus reader. Is this just a recommended value, or would you rather see it as a hard limit?
The reason I ask is that I'm scraping some container_* and kube_* metrics, and those easily reach the 200 KB limit when the telemeter-client scrapes Prometheus.

Thanks Markus.

Log metadata about which endpoint rejected unauthorized requests

During an incident, we lost time figuring out whether it was AMS or the Observatorium API that was not authorizing us (in the end it was the API).

We only got:

level=warn caller=forward.go:148 ts=2021-05-20T15:01:06.354479232Z request=telemeter-server-3/P6iMXi9VyS-27827131 msg="response status code is 401 Unauthorized"

Collect Cluster Status Version History

We should pull back ClusterVersionStatus.history so we can see or correlate how many upgrades a user has performed, and what the success rate, version, etc. is for those upgrades.

double amount of metrics when using the /federate endpoint of infogw

This is the amount of metrics in the infogw cluster:
[screenshot]

Here's the same query at my prometheus that /federates from infogw
[screenshot]

In the first half, I had the scrape_interval at 60s, which resulted in almost exactly double the number of datapoints.
The second half had 270s, with those spikes again.

Maybe #145 did not have the desired effect?

Indeed, I'm getting the metrics twice:

[screenshot]

Any ideas, @s-urbaniak?

Here's the config

global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
  external_labels:
    monitor: prometheus
    replica: $(HOSTNAME)
rule_files:
- /etc/prometheus/*.rules
scrape_configs:
- job_name: telemeter
  honor_labels: true
  params:
    match[]:
    - '{_id!=""}'
  scrape_interval: 1m
  scrape_timeout: 59s
  metrics_path: /federate
  scheme: https
  static_configs:
  - targets:
    - infogw-data.api.openshift.com
  bearer_token: <secret>

Telemeter server statefulset manifest attempting to create PV claim with invalid name

create Claim -telemeter-server-0 for Pod telemeter-server-0 in StatefulSet telemeter-server failed error: PersistentVolumeClaim "-telemeter-server-0" is invalid: [metadata.name: Invalid value: "-telemeter-server-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'), spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

Possibly related to the empty item here?

volumeClaimTemplates:
- {}

RoleBindings listed before Roles in template

https://github.com/openshift/telemeter/blob/91e1dd6d343b0ca89440bd0776b4719baa532ad4/manifests/server/list.yaml
https://github.com/openshift/telemeter/blob/91e1dd6d343b0ca89440bd0776b4719baa532ad4/manifests/prometheus/list.yaml

Because the RoleBindings are listed before the Roles, our CI fails to apply the rolebinding objects on the first run because the roles are missing.

A workaround is to run the job a second time.

Ideally the Role objects would be listed before the RoleBinding objects in the templates.

--elide-label drops the entity completely

Hello,
When I try to use --elide-label

cmd.Flags().StringArrayVar(&opt.ElideLabels, "elide-label", opt.ElideLabels, "A list of labels to be elided from incoming metrics.")

I expect to see the metric without the elided label, but instead the whole metric is dropped.

For example:

Metric in Prometheus:

some_name{usefull_label="data", label_to_be_elided="not_usefull_data"}

Telemeter config:

...
- --elide-label='label_to_be_elided'
...

Expected result:

some_name{usefull_label="data"}

Actual result:

The metric no longer exists in Prometheus:

no data

Update: it was my mistake. The metrics are there.

Missing prometheus operator

The manifests here create a Prometheus CR, but there is no prometheus-operator in the namespace that can pick it up.

Is there a requirement that prometheus-operator should be deployed prior to this?

Should it be deployed from OLM?

Can we / should we add an OLM subscription manifest in this repo to deploy prometheus-operator?

Metrics are too long to transmit.

I have set up my clusters to use Telemeter Client to federate metrics to a single OCP cluster with Telemeter Server. The Telemeter Client continues to fail and reports the following error: "error: unable to forward results: the incoming sample data is too long". I have reduced the scope to a single metric {__name__="kube_pod_container_resource_requests_cpu_cores"} in the Telemeter Client configuration; however, it still reports this error. I can get some metrics to work, such as {__name__="up"} and {__name__="cluster_version"}, with the same setup.

Telemeter-Client Cluster: v3.11.200
Telemeter-Server Cluster: 4.3.13

Remove volumeClaimTemplates from StatefulSet manifest

Any attempt to oc apply an existing StatefulSet will fail with the following message if the new template contains volumeClaimTemplates:

The StatefulSet "telemeter-server" is invalid: spec: Forbidden: updates
to statefulset spec for fields other than 'replicas', 'template', and
'updateStrategy' are forbidden.

We believe the underlying problem to be a bug in k8s. In this case, the
generated StatefulSet template contains volumeClaimTemplates = []. When the
StatefulSet is created from scratch, that value is removed from the
representation as it's an empty array. However, upon running oc apply again
with the same template the validation fails as it believes we're trying to
change the volumeClaimTemplates field.

Would it be possible for you to tweak the jsonnet generation so this value is
removed?

Image creation requires access to restricted base images

I tried to create a docker image from the repository but found that this requires access to registry.ci.openshift.org:

[rcampos@rh-laptop telemeter]$ make image
find: ‘./benchmark’: No such file or directory
docker build -t quay.io/openshift/telemeter:2c9c76e6 .
Sending build context to Docker daemon  56.57MB
Step 1/9 : FROM registry.ci.openshift.org/ocp/builder:rhel-8-golang-1.17-openshift-4.10                                                           
unauthorized: authentication required
make: *** [Makefile:76: .hack-operator-image] Error 1

Would there be a way for me to get access to these images, or are there alternative public base images I could use in the Dockerfile?

Prometheus should store data to a persistent storage backend

Following our conversation last week, I'm opening this issue to discuss adding persistent storage to Prometheus for telemeter.

I see two options:

  • Add a spec.storage.volumeClaimTemplate on the Prometheus object and use EC2 backed PVs to store data.
  • Deploy Thanos and use that as a storage backend

I think we could also do both, where EC2 holds short-term data and Thanos/S3 holds mid- to long-term data.

We need to keep in mind that EC2 PV resizing is not possible at the moment on the target platform, so if we're going to use it we'll need to think through sizing the volumes adequately for the amount of data we expect to store.

Consider sending some human-meaningful cluster name or domain?

Motivation

A user running the installer according to try.openshift.com instructions typed in a cluster "name", say cben-1-test. It affects the cluster domain names, e.g. console-openshift-console.apps.cben-1-test.sdev.devshift.net.
However, the telemetry data AFAICT is only identified by the long hex ID. If one creates, say, 5 clusters, they can only appear in https://cloud.redhat.com/openshift/clusters/ named by these hex IDs, and to see meaningful names the user has to manually edit the "display name" of each cluster there, which requires finding the IDs (in metadata.json written by the installer, or somewhere inside the cluster itself).

Proposal

  • If some metric included the chosen name / domain, https://cloud.redhat.com/openshift/clusters/ could offer a better experience for self-installed clusters.

    • Full domain would also allow cloud.redhat.com to offer a link to the cluster's console (currently not supported for self-installed clusters, only for clusters installed from the site).
  • There might be other ways to improve this from installer side (installer explicitly "phoning home" instead of relying on telemetry for first registration; installer printing out deep link into https://cloud.redhat.com/ pointing to that cluster).

Privacy Considerations

This proposes making the collected data less anonymous, so it should be weighed cautiously.
Note that Red Hat already knows who created the cluster (the auth token used in telemetry is account-specific); this is just about the name / domain the user chose.

Move telemeter server metrics whitelist to a ConfigMap

whitelist: e19fbmFtZV9fPSJ1cCJ9CntfX25hbWVfXz0iY2x1c3Rlcl92ZXJzaW9uIn0Ke19fbmFtZV9fPSJjbHVzdGVyX29wZXJhdG9yX3VwIn0Ke19fbmFtZV9fPSJjbHVzdGVyX29wZXJhdG9yX2NvbmRpdGlvbnMifQp7X19uYW1lX189ImNsdXN0ZXJfdmVyc2lvbl9wYXlsb2FkIn0Ke19fbmFtZV9fPSJjbHVzdGVyX3ZlcnNpb25fcGF5bG9hZF9lcnJvcnMifQp7X19uYW1lX189Im1hY2hpbmVfY3B1X2NvcmVzIn0Ke19fbmFtZV9fPSJtYWNoaW5lX21lbW9yeV9ieXRlcyJ9CntfX25hbWVfXz0iZXRjZF9vYmplY3RfY291bnRzIn0Ke19fbmFtZV9fPSJhbGVydHMiLGFsZXJ0c3RhdGU9ImZpcmluZyJ9

Secrets are ignored in upstream manifests and are instead fetched from a secrets store and applied during CI/CD. Because of this, the whitelist key in the above secret will never be applied to the running telemeter instances when changes are merged to this repo.

The established pattern is that non-secret / volatile data should reside in a ConfigMap which is not ignored by our CI which would allow changes to be merged without the intervention of the SRE team to update/apply a secret.

Would it be possible to move the whitelist to a ConfigMap object?

K8S support

Is anyone running this in Kubernetes? We would like to federate IoT metrics from the edge into the cloud, and this looks like a perfect match.

Is there anything special other than running the docker container and pointing it to the local Prometheus server and remote telemeter server?

Telemeter server logs: [$(NAME)] node joined $(NAME)

On telemeter-server startup the following is output to the console.

Unsure where the $(NAME) is coming from, but it'd probably be useful to see the real data here:

2018/10/29 13:20:29 Storing metrics on disk at /var/lib/telemeter
2018/10/29 13:20:29 [$(NAME)] node joined $(NAME)
2018/10/29 13:20:29 Starting telemeter-server $(NAME) on 0.0.0.0:8443 (internal=0.0.0.0:8081, cluster=0.0.0.0:8082)

Make issues

Hi,

I have forked the repo, tried to run make and got several issues:

  • It tries to run find inside a "benchmark" directory that does not exist
  • It should install "jsonnet" but does not (at least according to the comment: # We need jsonnet on CI; here we default to the user's installed jsonnet binary; if nothing is installed, then install go-jsonnet.)

Make output:

find: ‘./benchmark’: No such file or directory
go build ./cmd/telemeter-client
go build ./cmd/telemeter-server
go build ./cmd/authorization-server
go build ./cmd/telemeter-benchmark
cd jsonnet && jb install
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/coreos/prometheus-operator/archive/8d44e0990230144177f97cf62ae4f43b1c4e3168.tar.gz 200
rm -rf manifests
mkdir -p manifests/{benchmark,client,server,prometheus}
/home/frolland/git/go/bin/jsonnet jsonnet/benchmark.jsonnet -J jsonnet/vendor -m manifests/benchmark
bash: /home/frolland/git/go/bin/jsonnet: No such file or directory
make: *** [Makefile:107: manifests] Error 127
