timescale / prometheus-postgresql-adapter

335 stars · 19 watchers · 66 forks · 144 KB

Use PostgreSQL as a remote storage database for Prometheus

License: Apache License 2.0

Makefile 2.43% Go 96.03% Dockerfile 1.06% Shell 0.48%


prometheus-postgresql-adapter's Issues

remote read is not showing metrics

Hi,

I have configured remote write for Prometheus, and I can see remote writes are happening; the data shows up in Postgres (db: prometheus, table: metrics_values).

I have configured another Prometheus for remote read, which I want to add as a data source in Grafana, but I can't see the metrics flowing into that Prometheus.

My setup is like this:

prometheus_server-01 --> has all the targets configured and writes to the remote storage, which is PostgreSQL (postgresql_server-01)

prometheus_server-02 --> should read the data from PostgreSQL (postgresql_server-01)

In Grafana I want to create a Prometheus dashboard backed by prometheus_server-02 (the one configured for read).

I am not quite sure whether this is the normal approach for remote_read, or whether I am completely misunderstanding the concept.

Can someone please help me with this?

Thanks in advance.
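
For reference, a minimal remote_read block on prometheus_server-02 that points at the adapter (rather than at PostgreSQL directly) could look like the sketch below; the adapter host name and port 9201 are assumptions based on the defaults shown elsewhere on this page.

remote_read:
  - url: "http://<adapter-host>:9201/read"
    read_recent: true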

Prometheus unable to connect to adapter

I am currently running Prometheus on a Kubernetes cluster (Kubernetes Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+icp-ee", GitCommit:"9cb64de4ca4d039c35f4a29721aa5cf787648a15", GitTreeState:"clean", BuildDate:"2018-04-27T06:32:18Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}).

I created a kubernetes service so that the containers for pg_prometheus, prometheus_postgresql_adapter, and prometheus can communicate with each other. The .yaml file I used to create this is:

apiVersion: v1
kind: Service
metadata:
  name: pg-prometheus-service
  namespace: "data-collection"
  labels:
    collect: data
spec:
  selector:
    collect: data
  ports:
  - name: pg-prometheus
    protocol: TCP
    port: 5432
    targetPort: 5432
  - name: postgresql-adapter
    protocol: TCP
    port: 9201
    targetPort: 9201

I created the pg_prometheus deployment using the following .yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg-prometheus
  namespace: data-collection
  labels:
    collect: data
spec:
  selector:
    matchLabels:
      collect: data
  replicas: 1
  template:
    metadata:
      labels:
        collect: data
    spec:
      containers:
      - name: pg-prometheus
        image: timescale/pg_prometheus:master
        ports:
        - containerPort: 5432
        args: ["postgres", "-csynchronous_commit=off"]

and I created the adapter deployment using the following .yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-postgresql-adapter
  namespace: data-collection
  labels:
    collect: data
spec:
  selector:
    matchLabels:
      collect: data
  replicas: 1
  template:
    metadata:
      labels:
        collect: data
    spec:
      containers:
      - name: prometheus-postgresql-adapter
        image: timescale/prometheus-postgresql-adapter:master
        ports:
        - containerPort: 9201
        args: ["-pg.host=pg-prometheus-service", "-pg.prometheus-log-samples"]`

When connecting to pg-prometheus I see that the database schema has been created:

postgres=# \dt
          List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 public | metrics_copy   | table | postgres
 public | metrics_labels | table | postgres
 public | metrics_values | table | postgres
(3 rows)

Prometheus and the node exporter have been running on the cluster as expected; I have just been trying to add the remote_write feature. The problem I am having is when I load Prometheus (which is in the kube-system namespace). I edit the ConfigMap for my Prometheus deployment so that in the prometheus.yml file I have:

prometheus.yml:
global:
  scrape_interval: 1m
  evaluation_interval: 1m

rule_files:
  - /etc/alert-rules/*.rules
  - /etc/alert-rules/*.yml

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
        - localhost:9090

remote_write:
  - url: "http://pg-prometheus-service.data-collection:9201/write"
...
(much more in the configuration file)

I then restart the prometheus deployment and get the following error:

level=info ts=2019-01-11T01:31:42.871609412Z caller=main.go:394 msg="Loading configuration file" filename=/etc/config/prometheus.yml
level=error ts=2019-01-11T01:31:42.874975125Z caller=main.go:356 msg="Error loading config" err="couldn't load configuration (--config.file=/etc/config/prometheus.yml): url for remote_write is empty"

I have tried many different options for the url parameter with no success (getting rid of http://, not including the service namespace, using the IP address rather than the service name, not including /write...).

On my kubernetes cluster I am also running a kube-dns service to handle domain name resolutions.

Any advice on what might be going wrong would be greatly appreciated! Thanks!
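
As a general debugging aid (not specific to this report), reachability of the adapter through the Service can be checked from inside the cluster with a throwaway pod; this is only a sketch, with the Service DNS name taken from the configuration above and the adapter's /metrics telemetry path taken from the adapter logs shown elsewhere on this page:

kubectl run -n data-collection adapter-check --rm -it --restart=Never --image=busybox -- wget -qO- http://pg-prometheus-service.data-collection:9201/metrics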

launch without docker

Hi,

I'm using Prometheus 2.2.0 to generate some metrics and I need long-term storage.
Is there a way to use this adapter with PostgreSQL without Docker?

Thank you
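
For what it's worth, other reports on this page use the pre-built binaries directly; a sketch of such an invocation, with flag names taken from the flag list further down and the host, database, and credentials as placeholders:

./prometheus-postgresql-adapter -pg.host=localhost -pg.database=metrics -pg.user=postgres -pg.password=secret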

Service restart fails

Hello,

While trying to use this otherwise great tool, I have a problem making it restart on a virtual machine.

The problem is simple: the adapter wants to recreate the whole database schema, which is a problem IMHO.
As the log says: level=error ts=2019-01-16T11:27:24.991560544Z caller=log.go:33 err="pq: relation « metrics_labels » already exists".

I am not fluent enough in Go to propose a PR for this, but I think the adapter should only create the tables/views/sequences if they do not already exist.
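
A sketch of the kind of guard being requested, with an illustrative column list rather than the adapter's exact DDL:

-- illustrative only: create objects only when they are missing
CREATE TABLE IF NOT EXISTS metrics_labels (
    id          SERIAL PRIMARY KEY,
    metric_name TEXT NOT NULL,
    labels      JSONB NOT NULL
);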

Best regards

Error Sending Samples to Remote Storage

Hi,

I am facing issues while running pg-adapter.
We are using Prometheus (2.8.1) along with PostgreSQL (10.5) as remote storage.

It was running fine, but when I redeployed the services I started to get this error:

msg="Error sending samples to remote storage" err="pq: tuple concurrently updated" storage=PostgreSQL num_samples=3

Can anyone help with how to resolve this?

Regards,
Keshav Sharma

Handle queries for empty labels

PromQL treats empty label values as matching all time series that do not have the given label. From the PromQL docs: "Label matchers that match empty label values also select all time series that do not have the specific label set at all."

The adapter currently doesn't handle this case, as it only queries for time series that have the label set to the empty string.
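
A sketch of the difference, assuming the normalized pg_prometheus schema where labels are stored as jsonb in metrics_labels (the label name "job" is just an example):

-- current behaviour: only matches series where the label is the empty string
SELECT id FROM metrics_labels WHERE labels->>'job' = '';

-- PromQL semantics for {job=""}: also match series where the label is absent
SELECT id FROM metrics_labels WHERE labels->>'job' = '' OR NOT labels ? 'job';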

Sawtooth memory usage in PostgreSQL - kernel OOM killer

Hi, I've been using timescaledb 0.9.1 for a few days in conjunction with pg_prometheus and the prometheus-postgresql-adapter.

It's working fine - thank you for the software. I just have one concern regarding the memory usage of PostgreSQL 10.3 itself:

[screenshot: sawtooth memory usage graph]

The server is an AWS t2.medium, so has 3.75GB total RAM. prometheus-postgresql-adapter and the postgres_exporter are the only PostgreSQL clients.

 1444 ?        Ss     0:01 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
 1446 ?        Ss     0:00  \_ postgres: logger process
15457 ?        Ss     0:01  \_ postgres: checkpointer process
15458 ?        Ss     0:00  \_ postgres: writer process
15459 ?        Ss     0:00  \_ postgres: wal writer process
15460 ?        Ss     0:00  \_ postgres: autovacuum launcher process
15461 ?        Ss     0:01  \_ postgres: stats collector process
15462 ?        Ss     0:00  \_ postgres: bgworker: logical replication launcher
15467 ?        Ss     0:02  \_ postgres: postgres postgres [local] idle
15469 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15470 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15471 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15472 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15474 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15478 ?        Ss     0:32  \_ postgres: postgres metrics [local] idle
15479 ?        Ss     0:31  \_ postgres: postgres metrics [local] idle
15480 ?        Ss     0:31  \_ postgres: postgres metrics [local] idle
15482 ?        Ss     0:31  \_ postgres: postgres metrics [local] idle
15483 ?        Ss     0:31  \_ postgres: postgres metrics [local] idle
 1749 ?        Ssl   12:15 /usr/bin/prometheus-postgresql-adapter -pg.host=/var/run/postgresql -pg.database=metrics
15463 ?        Ssl    0:03 /opt/monitoring/prometheus-postgresql-exporter/postgres_exporter

PostgreSQL was installed as per TimescaleDB's guide of using PGDG - I have not touched postgresql.conf so shared_buffers is still the default 128MB.

However, something is causing PostgreSQL to consume more and more memory until the kernel kills it...

[May10 22:51] postmaster invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
[  +0.005899] postmaster cpuset=/ mems_allowed=0
[  +0.004581] CPU: 1 PID: 13631 Comm: postmaster Kdump: loaded Not tainted 3.10.0-862.el7.x86_64 #1
[  +0.006778] Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
[  +0.004775] Call Trace:
[  +0.003090]  [<ffffffffa0d0d768>] dump_stack+0x19/0x1b
[  +0.004531]  [<ffffffffa0d090ea>] dump_header+0x90/0x229
[  +0.004263]  [<ffffffffa08d7c1b>] ? cred_has_capability+0x6b/0x120
[  +0.004667]  [<ffffffffa0797904>] oom_kill_process+0x254/0x3d0

The PostgreSQL logs show a little confusion as expected as it restarts...

2018-05-10 22:51:11.866 UTC [13635] FATAL:  the database system is in recovery mode
2018-05-10 22:51:11.867 UTC [13635] LOG:  could not send data to client: Broken pipe
2018-05-10 22:51:11.892 UTC [1444] LOG:  all server processes terminated; reinitializing
2018-05-10 22:51:11.951 UTC [13637] LOG:  database system was interrupted; last known up at 2018-05-10 22:50:53 UTC
2018-05-10 22:51:11.986 UTC [13637] LOG:  database system was not properly shut down; automatic recovery in progress
2018-05-10 22:51:11.993 UTC [13637] LOG:  redo starts at 2/13444650
2018-05-10 22:51:12.071 UTC [13637] LOG:  invalid record length at 2/147A8520: wanted 24, got 0
2018-05-10 22:51:12.071 UTC [13637] LOG:  redo done at 2/147A84F8
2018-05-10 22:51:12.071 UTC [13637] LOG:  last completed transaction was at log time 2018-05-10 22:51:11.55548+00
2018-05-10 22:51:12.160 UTC [1444] LOG:  database system is ready to accept connections

The machine is 90-95% idle with plenty of CPU credits, and there are literally only 6 machines sending node_exporter stats to the adapter every 15 seconds - super low load!

Help! :)

failed to connect postgresql

Hi All,
I am using the pre-built binaries to push Prometheus data to PostgreSQL.
Issue: err="pq: Ident authentication failed for user "postgres""
PostgreSQL itself is running normally, and the "postgres" user itself is working.

I don't know how to solve the problem.
Any help would be appreciated.

Regards
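
Ident failures are decided by pg_hba.conf rather than by the adapter; a sketch of a host entry that switches the connection to password (md5) authentication, with the address range as a placeholder (reload PostgreSQL after editing):

host    all    postgres    127.0.0.1/32    md5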

docker instructions

Hi,
I'm using this adapter for a personal project and followed the Docker instructions exactly. On the final step I get this error:
"docker: Error response from daemon: Cannot link to a non running container: /prometheus_postgresql_adapter AS /pedantic_montalcini/prometheus_postgresql_adapter"
I don't know if I have to change the path to prometheus.yml; if that's the solution, I'd like to know how I can determine that path.
Thanks

labels id sequence exhausted

I've noticed some errors in my logs:
msg="Error sending samples to remote storage" err="pq: integer out of range"
So I went in to investigate.

The current value of the metrics_labels_id_seq sequence is too big to be used for label ids, as the id column is only an int4. I have no idea how it got that high (2,150,699,906), as I never had more than about 350,000 labels.

I've cleaned up now and restarted the sequence, and that fixes the issue for now.
I'll keep an eye on it to see if I have a rogue label somewhere, but I thought I'd open an issue for it here anyway, as it might be something others have noticed too.
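
A sketch of the inspection and reset described above; the sequence and table names come from the issue, and the restart value is illustrative:

-- how far has the sequence advanced, and what is the largest id actually in use?
SELECT last_value FROM metrics_labels_id_seq;
SELECT max(id) FROM metrics_labels;

-- after cleaning up, restart the sequence just above the highest remaining id
ALTER SEQUENCE metrics_labels_id_seq RESTART WITH 350001;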

For completeness, I'm using Prometheus version 2.4.3, and Postgres 9.6 with:

                                       List of installed extensions
     Name      | Version |   Schema   |                            Description                            
---------------+---------+------------+-------------------------------------------------------------------
 pg_cron       | 1.1     | public     | Job scheduler for PostgreSQL
 pg_prometheus | 0.2.1   | public     | Prometheus metrics for PostgreSQL
 plpgsql       | 1.0     | pg_catalog | PL/pgSQL procedural language
 timescaledb   | 0.12.1  | public     | Enables scalable inserts and complex queries for time-series data

pg_prometheus.so: undefined symbol: PG_GETARG_JSONB

./prometheus-postgresql-adapter -log.level=debug -leader-election.pg-advisory-lock-id=1 -leader-election.pg-advisory-lock.prometheus-timeout=6s -pg.password=root -pg.use-timescaledb=false
level=info ts=2018-11-27T09:04:28.2253517Z caller=log.go:25 config="&{remoteTimeout:30000000000 listenAddr::9201 telemetryPath:/metrics pgPrometheusConfig:{host:localhost port:5432 user:postgres password:root database:postgres schema: sslMode:disable table:metrics copyTable: maxOpenConns:50 maxIdleConns:10 pgPrometheusNormalize:true pgPrometheusLogSamples:false pgPrometheusChunkInterval:43200000000000 useTimescaleDb:false dbConnectRetries:0 readOnly:false} logLevel:debug haGroupLockId:1 restElection:false prometheusTimeout:6000000000}"
level=info ts=2018-11-27T09:04:28.2305317Z caller=log.go:25 msg="host=localhost port=5432 user=postgres dbname=postgres password='root' sslmode=disable connect_timeout=10"
level=error ts=2018-11-27T09:04:28.2873623Z caller=log.go:33 err="pq: could not load library "/usr/lib/postgresql/11/lib/pg_prometheus.so": /usr/lib/postgresql/11/lib/pg_prometheus.so: undefined symbol: PG_GETARG_JSONB"

Can someone help me with this?

Improve the build instructions

Could the build-from-source instructions be improved? I have tried the below but have not managed to build it (I am no Go guru):

# git clone https://github.com/timescale/prometheus-postgresql-adapter.git /tmp/pg_adapter/src
# export GOPATH=/tmp/pg_adapter
# cd /tmp/pg_adapter/src
# dep ensure
root project import: dep does not currently support using GOPATH/src as the project root

Update: I have managed to build it, but using godep get.
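
For reference, the dep error above comes from using GOPATH/src itself as the project root; a sketch of a layout dep accepts (paths are illustrative):

export GOPATH=$HOME/go
git clone https://github.com/timescale/prometheus-postgresql-adapter.git $GOPATH/src/github.com/timescale/prometheus-postgresql-adapter
cd $GOPATH/src/github.com/timescale/prometheus-postgresql-adapter
dep ensure
make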

connecting prometheus adapter to azure timescale

I'm trying to set up an Azure Database for PostgreSQL instance, hoping to use their replication and PITR features, and send metrics to it from a few different Prometheus instances. Previously, I had a single Timescale pod running and consuming metrics correctly, but the pg_prometheus extension doesn't seem to be available on Azure:

level=info ts=2019-05-01T21:14:38.588458149Z caller=log.go:25 msg="host=example-timescale-test.postgres.database.azure.com port=5432 user='username' dbname=staging password='password' sslmode=require connect_timeout=10"
level=error ts=2019-05-01T21:14:39.24436137Z caller=log.go:33 err="pq:  extension \"pg_prometheus\" is not supported by Azure Database for PostgreSQL"

This adapter seems to require that extension, even when inserting normalized metrics. Is that the case, or am I missing a flag? If so, is there a known/documented way of attaching Prometheus and Azure's Timescale?

pq: copyin statement has already been closed

mondb=# \dt
          List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 public | metrics_copy   | table | postgres
 public | metrics_labels | table | postgres
 public | metrics_values | table | postgres
(3 rows)

mondb=#

./prometheus

level=info ts=2018-07-17T10:15:35.123777285Z caller=main.go:222 msg="Starting Prometheus" version="(version=2.3.2, branch=HEAD, revision=71af5e29e815795e9dd14742ee7725682fa14b7b)"
level=info ts=2018-07-17T10:15:35.123836745Z caller=main.go:223 build_context="(go=go1.10.3, user=root@5258e0bd9cc1, date=20180712-14:02:52)"
level=info ts=2018-07-17T10:15:35.1238555Z caller=main.go:224 host_details="(Linux 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 pxc-app01-db01 (none))"
level=info ts=2018-07-17T10:15:35.123871209Z caller=main.go:225 fd_limits="(soft=1024, hard=4096)"
level=info ts=2018-07-17T10:15:35.124507787Z caller=web.go:415 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-07-17T10:15:35.124479211Z caller=main.go:533 msg="Starting TSDB ..."
level=info ts=2018-07-17T10:15:35.138722073Z caller=main.go:543 msg="TSDB started"
level=info ts=2018-07-17T10:15:35.138764754Z caller=main.go:603 msg="Loading configuration file" filename=prometheus.yml
level=info ts=2018-07-17T10:15:35.140000617Z caller=main.go:629 msg="Completed loading of configuration file" filename=prometheus.yml
level=info ts=2018-07-17T10:15:35.140022743Z caller=main.go:502 msg="Server is ready to receive web requests."

$ prometheus-postgresql-adapter -pg.database "mondb" -pg.prometheus-log-samples
ts=2018-07-17T10:15:05.011421283Z caller=log.go:124 level=info msg="host=localhost port=5432 user=postgres dbname=mondb password='' sslmode=disable connect_timeout=10"
ts=2018-07-17T10:15:05.020600907Z caller=log.go:124 level=info msg="Starting up..."
ts=2018-07-17T10:15:05.020640117Z caller=log.go:124 level=info msg=Listening addr=:9201
up{instance="node_exporter:9100",job="prometheus"} 0 1531822544461
scrape_duration_seconds{instance="node_exporter:9100",job="prometheus"} 0.002416797 1531822544461
scrape_samples_scraped{instance="node_exporter:9100",job="prometheus"} 0 1531822544461
scrape_samples_post_metric_relabeling{instance="node_exporter:9100",job="prometheus"} 0 1531822544461
ts=2018-07-17T10:15:45.150765541Z caller=log.go:124 level=error storage=PostgreSQL msg="Error on Close when writing samples" err="pq: copyin statement has already been closed"
ts=2018-07-17T10:15:45.151189004Z caller=log.go:124 level=warn msg="Error sending samples to remote storage" err="pq: copyin statement has already been closed" storage=PostgreSQL num_samples=4

err="pq: password authentication failed for user \"postgres\""

Following the "Getting started with Prometheus and TimescaleDB" guide, when I start the prometheus_postgresql_adapter container I get an error.
level=info ts=2019-04-18T06:46:39.906315481Z caller=log.go:25 config="&{remoteTimeout:30000000000 listenAddr::9201 telemetryPath:/metrics pgPrometheusConfig:{host:pg_prometheus port:5432 user:postgres password: database:postgres schema: sslMode:disable table:metrics copyTable: maxOpenConns:50 maxIdleConns:10 pgPrometheusNormalize:true pgPrometheusLogSamples:true pgPrometheusChunkInterval:43200000000000 useTimescaleDb:true dbConnectRetries:0 readOnly:false} logLevel:debug haGroupLockId:0 restElection:false prometheusTimeout:-1}"
level=info ts=2019-04-18T06:46:39.906416995Z caller=log.go:25 msg="host=pg_prometheus port=5432 user=postgres dbname=postgres password='' sslmode=disable connect_timeout=10"
level=error ts=2019-04-18T06:46:39.910818007Z caller=log.go:33 err="pq: password authentication failed for user "postgres""
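
The logged config above shows an empty password (password:). For comparison, a sketch of the adapter container started with an explicit password, using the docker run form and flag names that appear elsewhere on this page (the password value is a placeholder):

docker run --name prometheus_postgresql_adapter --link pg_prometheus -p 9201:9201 timescale/prometheus-postgresql-adapter:latest -pg.host=pg_prometheus -pg.password=<same password as the pg_prometheus container> -pg.prometheus-log-samples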

Prometheus crashes in latest prom/prometheus, but fine in prom/prometheus:v2.2.1

With the latest tag from https://hub.docker.com/r/prom/prometheus/tags/ (published 6 days ago as of this date), I believe I am hitting a regression.

I followed the exact steps to integrate the Postgres DB and its metrics exporter, along with Prometheus and the Postgres storage adapter.

On the Postgres DB side I was seeing a bunch of the following error logs:

LOG: unexpected EOF on client connection with an open transaction

And on the Prometheus side I saw

panic: runtime error: invalid memory address or nil pointer dereference

and it crashed.

Problem with 10 million metrics from Prometheus every 5 minutes

Hello guys,
I am impressed with this product and am trying to run it in an environment that generates 10-11 million metrics every 5 minutes. The problem is that the database uses up all available disk I/O; after 1 hour of running I get an average load of 150 on a 32-core CPU.
I noticed that not all metrics are ingested from Prometheus into Timescale due to strange errors on the Timescale adapter. Here are a couple of them:

level=debug ts=2018-11-06T16:57:33.620386875Z caller=log.go:21 msg="Wrote samples" count=3649 duration=64.632063268
level=error ts=2018-11-06T16:57:33.761927518Z caller=log.go:33 msg="Error executing COPY statement" stmt="node_cpu_frequency_min_hertz{cluster=\"sf-prod-2\",colo=\"sf\",colo_type=\"prod\",container_name=\"node-exporter\",cpu=\"cpu3\",host_name=\"hss-sf-prod-2-8\",instance=\"199.241.123.9:9100\",job=\"consul-services\",prometheus=\"long_retention\"} 1200000000 1541523310332" err="pq: invalid input syntax for prometheus sample: Unexpected number of input items assigned: 1\n"
level=warn ts=2018-11-06T16:57:33.815570729Z caller=log.go:29 msg="Error sending samples to remote storage" err="pq: invalid input syntax for prometheus sample: Unexpected number of input items assigned: 1\n" storage=PostgreSQL num_samples=1355
level=warn ts=2018-11-06T16:57:33.866651939Z caller=log.go:29 msg="Error sending samples to remote storage" err="pq: invalid input syntax for prometheus sample: Unexpected number of input items assigned: 1\n" storage=PostgreSQL num_samples=1305

Please help me understand what I did wrong or what I need to do to make it work properly.
Can you advise what settings I need to use, and whether it can really handle that number of metrics?

Here is the config that I used for TimescaleDB:

listen_addresses = '*'
max_connections = 1100			# (change requires restart)
shared_buffers = 48GB			# min 128kB
maintenance_work_mem = 2GB		# min 1MB
dynamic_shared_memory_type = posix	# the default is the first option
shared_preload_libraries = 'timescaledb'		# (change requires restart)
effective_io_concurrency = 200		# 1-1000; 0 disables prefetching
max_worker_processes = 32		# (change requires restart)
max_parallel_workers_per_gather = 16		# taken from max_parallel_workers
max_parallel_workers = 32		# maximum number of max_worker_processes that
synchronous_commit = off		# synchronization level;
wal_buffers = 16MB			# min 32kB, -1 sets based on shared_buffers
max_wal_size = 4GB
min_wal_size = 2GB
checkpoint_completion_target = 0.7	# checkpoint target duration, 0.0 - 1.0
random_page_cost = 1.1			# same scale as above
effective_cache_size = 144GB
default_statistics_target = 100	# range 1-10000
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.utf8'			# locale for system error message
lc_monetary = 'en_US.utf8'			# locale for monetary formatting
lc_numeric = 'en_US.utf8'			# locale for number formatting
lc_time = 'en_US.utf8'				# locale for time formatting
default_text_search_config = 'pg_catalog.english'

Unclear how to use adapter for read load balance

We employ a replicated Postgres solution and would like to balance read load across replicas.
It would also be nice if this kept working when a replica is promoted to leader.

err="pq: cannot set transaction read-write mode during recovery"

Adapter cannot connect to PostgreSQL

Hi All,

I have created a Docker image for PostgreSQL (along with TimescaleDB) using CentOS as the base image,
and one image for the adapter.
Below is the output when I run this PostgreSQL image:

< 2018-08-23 06:15:40.412 UTC >LOG: listening on IPv4 address "0.0.0.0", port 5432
< 2018-08-23 06:15:40.412 UTC >LOG: listening on IPv6 address "::", port 5432
< 2018-08-23 06:15:40.415 UTC >LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
< 2018-08-23 06:15:40.416 UTC >LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
< 2018-08-23 06:15:40.425 UTC >LOG: redirecting log output to logging collector process
< 2018-08-23 06:15:40.425 UTC >HINT: Future log output will appear in directory "pg_log".
< 2018-08-23 06:15:40.428 UTC >LOG: database system was shut down at 2018-08-23 05:52:23 UTC
< 2018-08-23 06:15:40.432 UTC >LOG: database system is ready to accept connections

Also, when I log in to the container, I am able to access the DB:

sh-4.2$ psql -U postgres -h 127.0.0.1
psql (10.5)
Type "help" for help.
postgres=#
postgres=#

Third, I checked the status of port 5432 using the netstat command:
sh-4.2$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp6 0 0 :::5432 :::* LISTEN

But when I try to run the adapter image, I get the error below:

level=info ts=2018-08-23T06:15:31.270935952Z caller=log.go:25 msg="host=127.0.0.1 port=5432 user=postgres dbname=postgres password='' sslmode=disable connect_timeout=10"
level=error ts=2018-08-23T06:15:31.271753696Z caller=log.go:33 err="dial tcp 127.0.0.1:5432: connect: connection refused"

Can anyone tell me what the issue is here?

Note: I created a Pod in Kubernetes with two containers, one with the PostgreSQL image and another with the adapter image, so the adapter should connect via localhost:5432.
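
One detail visible in the timestamps above: the adapter tries to connect at 06:15:31, while PostgreSQL only reports ready at 06:15:40. Not necessarily the whole story, but the adapter has a startup retry flag (listed further down this page); a sketch, with an illustrative retry count:

/prometheus-postgresql-adapter -pg.host=127.0.0.1 -pg.db-connect-retries=10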

Any help would be appreciated.

Regards,
Keshav Sharma

Automatically build Docker images

We should automatically build Docker images, either in Travis (potentially pushing directly to DockerHub), or using automated builds in DockerHub.

prometheus_postgresql_adapter doesn't stop writing in HA

I am running 2 identical Prometheus containers, each talking to its own prometheus.yml file.

adapter1 is running timescale/prometheus-postgresql-adapter:latest on a port (:xxxx) with lockid=1 and locktimeout=25s
adapter2 is running timescale/prometheus-postgresql-adapter:latest on a port (:yyyy)

prometheus1 is configured to read/write to adapter1
prometheus2 is configured to read/write to adapter2

Both Prometheus instances have a scrape interval of 20s, which is why I kept the lock timeout for the adapters at 25s.

To maintain HA, both adapters write to the DB based on the lock acquired.

Let's say at a given point adapter1 has the lock and is actively writing to the DB, while adapter2 is paused as it cannot become the leader.

The problem I am facing is: when I turn off prometheus1, I expect adapter2 to pick up the lock and start writing. But to my surprise, adapter1 continues to write (as per my docker logs command).

Is there anything I am missing?

NOTE: My Prometheus instances (20s scrape interval) are federated to scrape a specific job from a master Prometheus (which has a scrape interval of 1m). That hardly matters, though, since I am going to scrape duplicates into my instances anyway. The question is why adapter2 is not able to acquire the lock and still shows the message below.

level=debug ts=2019-05-02T15:45:03.960924634Z caller=log.go:21 msg="Scheduled election is paused. Instance can't become a leader until scheduled election is resumed (Prometheus comes up again)"

pg.schema didn't take effect

v0.4.1
command:
./prometheus-postgresql-adapter -pg.host=localhost -pg.user=prometheus -pg.password=secret -pg.database=postgres -pg.schema=prometheus -pg.prometheus-log-samples -log.level=debug

Data is always inserted into postgres.public.
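
A sketch of how to verify which schema the tables actually landed in (a plain catalog query, nothing adapter-specific):

SELECT schemaname, tablename FROM pg_tables WHERE tablename LIKE 'metrics%';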

Unable to build adapter

I was able to build this adapter previously on a CentOS system with a little hoop-jumping.

My environment uses Ubuntu for our standard services, and I've been having issues building the adapter for non-Docker use. The issue, whether using dep or go get, is with the libraries in github.com/go-stack/stack.

(barebones install)

~$ go get github.com/timescale/prometheus-postgresql-adapter
go build github.com/go-stack/stack: no buildable Go source files in /home/user/go/src/github.com/go-stack/stack
# github.com/lib/pq
go/src/github.com/lib/pq/notify.go:787: undefined: time.Until
# github.com/prometheus/prometheus/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime
go/src/github.com/prometheus/prometheus/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/mux.go:149: r.Context undefined (type *http.Request has no field or method Context)

Same thing occurs when using dep and the Makefile

user@labmon1:~/go/src$ cd prometheus-postgresql-adapter/
user@labmon1:~/go/src/prometheus-postgresql-adapter$ ls
Dockerfile  Gopkg.lock  Gopkg.toml  LICENSE  main.go  Makefile  postgresql  README.md  sample-docker-prometheus.yml
user@labmon1:~/go/src/prometheus-postgresql-adapter$ $GOBIN/dep ensure
user@labmon1:~/go/src/prometheus-postgresql-adapter$ make
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -a -installsuffix cgo --ldflags '-w' -o prometheus-postgresql-adapter main.go
vendor/github.com/go-kit/kit/log/value.go:6:2: no buildable Go source files in /home/user/go/src/prometheus-postgresql-adapter/vendor/github.com/go-stack/stack
Makefile:30: recipe for target 'prometheus-postgresql-adapter' failed
make: *** [prometheus-postgresql-adapter] Error 1

Please advise,
Thank you

retrieving metrics data from postgres via a client

I am using Kubernetes, and I'm able to use the adapter just fine to write to Postgres. In Prometheus, all metrics are retrievable. I exposed my database service via a LoadBalancer type and connected to it using a client from my laptop. The connection is successful, and I can see the schema with all tables/views. However, all my "select" queries are coming back with empty results.

I also tried connecting from within the database pod (with username postgres and an empty password); the connection is fine, yet no data is returned by my select statement.

I am writing a simple select statement such as "select * form metrics fetch first 3 rows only".

Also, I have set logging to true, yet there is no log output for the metric messages being written.

containers:
  - name: pg-adapter
    image: timescale/prometheus-postgresql-adapter:master
    args:
      - "-pg-host=pgprometheus"
      - "-pg-prometheus-log-samples=true"

Any suggestion or advice?
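
A sketch of a quick sanity check against the normalized tables named earlier on this page, to confirm whether any samples are stored at all (column layout may differ between pg_prometheus versions):

SELECT count(*) FROM metrics_values;
SELECT * FROM metrics_values ORDER BY time DESC LIMIT 3;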

Too many connection error on Postgres

After connecting the adapter, the Prometheus pods are not stable and restart too many times. I am also not able to connect to the Postgres database using a client; I keep getting a "too many connections" error.

adapter image: timescale/prometheus-postgresql-adapter:0.4.1
pg-postgres image: timescale/pg_prometheus:0.2.1
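
The size of the adapter's connection pool is controlled by flags that appear in the flag list further down this page; a sketch of lowering them (values are illustrative, and PostgreSQL's own max_connections setting also matters here):

prometheus-postgresql-adapter -pg.host=pg-prometheus -pg.max-open-conns=20 -pg.max-idle-conns=5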

the adapter does not work

The Docker adapter does not work even after providing -pg.password. Any hints?

level=info ts=2019-03-01T19:59:42.737889692Z caller=log.go:25 config="&{remoteTimeout:30000000000 listenAddr::9201 telemetryPath:/metrics pgPrometheusConfig:{host:pg_prometheus port:5432 user:postgres password:PASSWORD database:postgres schema: sslMode:disable table:metrics copyTable: maxOpenConns:50 maxIdleConns:10 pgPrometheusNormalize:true pgPrometheusLogSamples:true pgPrometheusChunkInterval:43200000000000 useTimescaleDb:true dbConnectRetries:0 readOnly:false} logLevel:debug haGroupLockId:0 restElection:false prometheusTimeout:-1}"
level=info ts=2019-03-01T19:59:42.738027611Z caller=log.go:25 msg="host=pg_prometheus port=5432 user=postgres dbname=postgres password='PASSWORD' sslmode=disable connect_timeout=10"
level=error ts=2019-03-01T19:59:42.790922702Z caller=log.go:33 err="pq: password authentication failed for user \"postgres\""

Docker image doesn't support leader-election flag

Hello,

Docker image doesn't support leader-election flag.

docker-compose.yml

version: '2.1'
services:
 prometheus_postgresql_adapter_1:
   image: timescale/prometheus-postgresql-adapter:master
   ports:
     - "9201:9201"
   command: "-pg.host=x.x.x.x -pg.prometheus-log-samples -leader-election.pg-advisory-lock-id=1 -leader-election.pg-advisory-lock.prometheus-timeout=6s"

 prometheus_postgresql_adapter_2:
   image: timescale/prometheus-postgresql-adapter:master
   ports:
     - "9202:9202"
   command: "-pg.host=x.x.x.x -pg.prometheus-log-samples -leader-election.pg-advisory-lock-id=1 -leader-election.pg-advisory-lock.prometheus-timeout=6s"

Logs:

...
prometheus_postgresql_adapter_1_1  | flag provided but not defined: -leader-election.pg-advisory-lock-id
...
prometheus_postgresql_adapter_2_1  | flag provided but not defined: -leader-election.pg-advisory-lock-id
...

Regards.

postgres-adapter authentication failed for 'postgres' db

I am setting this up in Kubernetes and getting the error below in the adapter logs:

level=info ts=2019-05-10T09:28:20.158622915Z caller=log.go:25 config="&{remoteTimeout:30000000000 listenAddr::9201 telemetryPath:/metrics pgPrometheusConfig:{host:pg-prometheus port:5432 user:postgres password: database:postgres schema: sslMode:disable table:metrics copyTable: maxOpenConns:50 maxIdleConns:10 pgPrometheusNormalize:true pgPrometheusLogSamples:true pgPrometheusChunkInterval:43200000000000 useTimescaleDb:true dbConnectRetries:0 readOnly:false} logLevel:debug haGroupLockId:0 restElection:false prometheusTimeout:-1}"
level=info ts=2019-05-10T09:28:20.15875991Z caller=log.go:25 msg="host=pg-prometheus port=5432 user=postgres dbname=postgres password='' sslmode=disable connect_timeout=10"
level=error ts=2019-05-10T09:28:20.162595001Z caller=log.go:33 err="pq: password authentication failed for user "postgres""

I have also tried adding the line below to my pg_hba.conf, but no luck:
host postgres postgres 0.0.0.0/0 trust

Surprisingly, if I create another separate DB with some credentials while spinning up pg_prometheus and pass those credentials to the postgres-adapter, it connects seamlessly. But that is not much help, because apparently we cannot use any DB but 'postgres', due to the required initial setup already being done in it.

Please help me get through this.

-pg.use-timescaledb doesn't seem to work

Hi,

When I try to use the adapter without TimescaleDB, it demands it anyway.

postgres@somehost:/data/prometheus-postgresql-adapter-0.2$ ./prometheus-postgresql-adapter -pg.database prometheusdb -pg.user prometheus -pg.use-timescaledb false
level=info ts=2018-06-27T08:43:58.53972208Z caller=client.go:80 msg="host=localhost port=5432 user=prometheus dbname=prometheusdb password='' sslmode=disable connect_timeout=10"
level=info ts=2018-06-27T08:43:58.546982528Z caller=client.go:125 storage=PostgreSQL msg="Could not enable TimescaleDB extension" err="pq: could not open extension control file \"/usr/share/postgresql/10/extension/timescaledb.control\": No such file or directory"
level=error ts=2018-06-27T08:43:58.547422322Z caller=client.go:99 err="pq: current transaction is aborted, commands ignored until end of transaction block"

postgres@somehost:/data/prometheus-postgresql-adapter-0.2$ ./prometheus-postgresql-adapter -pg.database prometheusdb -pg.user prometheus
level=info ts=2018-06-27T09:02:44.996237366Z caller=client.go:80 msg="host=localhost port=5432 user=prometheus dbname=prometheusdb password='' sslmode=disable connect_timeout=10"
level=info ts=2018-06-27T09:02:45.001604523Z caller=client.go:125 storage=PostgreSQL msg="Could not enable TimescaleDB extension" err="pq: could not open extension control file \"/usr/share/postgresql/10/extension/timescaledb.control\": No such file or directory"
level=error ts=2018-06-27T09:02:45.002073409Z caller=client.go:99 err="pq: current transaction is aborted, commands ignored until end of transaction block" 

It seems the package needs to be installed anyway?
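
Worth noting, as a property of Go's standard flag package rather than anything stated in this issue: boolean flags are only accepted in the -flag=false form, so a space-separated "false" as in the first command above is not applied. A sketch:

./prometheus-postgresql-adapter -pg.database prometheusdb -pg.user prometheus -pg.use-timescaledb=false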

Flags mapped to environment variables

Feature Request

Hey all, great work on the project. I authored a Kubernetes Helm chart for this repo (helm/charts#7927), and was wondering if it was possible to have each of the CLI flags mapped to an environment variable.

Describe the solution you'd like

Basically, each flag should have an equivalent environment variable, something like this (a sketch of one possible implementation follows the list):

-adapter.send-timeout            => ADAPTER_SEND_TIMEOUT
-log.level                       => LOG_LEVEL
-pg.copy-table                   => PG_COPY_TABLE
-pg.database                     => PG_DATABASE
-pg.db-connect-retries           => PG_DB_CONNECT_RETRIES
-pg.host                         => PG_HOST
-pg.max-idle-conns               => PG_MAX_IDLE_CONNS
-pg.max-open-conns               => PG_MAX_OPEN_CONNS
-pg.password                     => PG_PASSWORD
-pg.port                         => PG_PORT
-pg.prometheus-chunk-interval    => PG_PROMETHEUS_CHUNK_INTERVAL
-pg.prometheus-log-samples       => PG_PROMETHEUS_LOG_SAMPLES
-pg.prometheus-normalized-schema => PG_PROMETHEUS_NORMALIZED_SCHEMA
-pg.schema                       => PG_SCHEMA
-pg.ssl-mode                     => PG_SSL_MODE
-pg.table                        => PG_TABLE
-pg.use-timescaledb              => PG_USE_TIMESCALEDB
-pg.user                         => PG_USER
-read.only                       => READ_ONLY
-web.listen-address              => WEB_LISTEN_ADDRESS
-web.telemetry-path              => WEB_TELEMETRY_PATH
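
A minimal sketch of the requested behaviour in Go (not the adapter's actual code): read a default from the environment and let the CLI flag override it.

package main

import (
	"flag"
	"fmt"
	"os"
)

// envOrDefault returns the environment variable's value if it is set, otherwise def.
func envOrDefault(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

func main() {
	// The flag default comes from PG_HOST, but -pg.host on the command line still wins.
	pgHost := flag.String("pg.host", envOrDefault("PG_HOST", "localhost"), "PostgreSQL host")
	flag.Parse()
	fmt.Println("connecting to", *pgHost)
}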

Is your feature request related to a problem? Please describe.

Not a "problem" per se, but the change would make the chart more Kubernetes native. For example, a potential PG_PASSWORD environment variable could be read from a Kubernetes secret as opposed to being passed directly from (and thus visible) from a command line argument.

Describe alternatives you've considered

If you all are fine with only having the CLI flags, that would be alright, just wondering if the environment variable approach is possible since it's (probably) a quick fix.

Teachability, Documentation, Adoption, Migration Strategy

This could be easily documented in a README.md snippet. As long as the mapping between the CLI flags and environment variables is consistent, it would be no problem.

Q: Can you run this without Docker?

Is there a way to build and run this software without a Docker image? If this is documented somewhere I'd be happy to submit a PR for the README. Thanks!

-danny

Duplicate key value constraint problems

Hi,

On one of our clusters we're fighting integrity constraint errors:

level=warn ts=2018-11-13T16:00:00.390643264Z caller=main.go:274 msg="Error sending samples to remote storage" err="pq: duplicate key value violates unique constraint "metrics_labels_metric_name_labels_key"" storage=PostgreSQL num_samples=100

This causes all our prometheus-postgresql-adapter pods to fail. Is there anything we can do to solve this? We are using the latest master image.

Losing data after prometheus restart

I seem to lose most historical data whenever I restart my Prometheus server.

I'm running both Prometheus and the Postgres adapter as Docker containers, so I assume it has something to do with locally cached data vs. when Prometheus actually uses remote read through the Postgres adapter.

Has this behaviour been observed before, and are there more docs/insights on how to set up Prometheus with the Postgres adapter (other than just the write/read URLs)?

I'm not sure where to start debugging this, as I don't know whether this is a Prometheus problem, a Postgres adapter problem, or even a TimescaleDB problem.

HTTP 400 : Bad wiretype

Hello,

I was trying to setup Prometheus with TimescaleDB.

  • Prometheus 2.0
  • PostgreSQL 10.1
  • TimescaleDB 0.8.0
  • PostgreSQL Prometheus plugin master
  • Prometheus PostgreSQL Adapter (docker) latest

But when I set the Prometheus configuration as given in the documentation, I get this error:

level=warn ts=2018-01-09T13:40:01.162965791Z caller=queue_manager.go:485 component=remote msg="Error sending samples to remote storage" count=100 err="server returned HTTP status 400 Bad Request: proto: bad wiretype for field remote.Query.StartTimestampMs: got wiretype 2, want 0"

Is the Prometheus version too recent?

Adapter Failed To Connect via Password

Hi All,

I am trying to push Prometheus data to PostgreSQL via the adapter.

Issue: The adapter fails with a password authentication error.
Reason: I am using a Kubernetes secret to hold the password in base64 form. When the password is fetched via YAML and set up as an env variable in k8s, the Postgres adapter is not able to decode it, because it seems the adapter keeps the password in single quotes and hence the base64 is treated as a normal string.
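
For reference, a sketch of one common pattern (not taken from this issue): Kubernetes decodes the secret before injecting it, so the value can be exposed as an env variable and expanded into the adapter's args; the secret and variable names here are hypothetical.

env:
  - name: PG_PASSWORD
    valueFrom:
      secretKeyRef:
        name: adapter-pg-secret   # hypothetical secret name
        key: password
args: ["-pg.host=pg-prometheus", "-pg.password=$(PG_PASSWORD)"]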

Any help would be appreciated.

Regards,
Keshav Sharma

error: lookup pg_prometheus on <ip:port>: server misbehaving

I have this docker-compose file to start prometheus with this adapter:

version: "3.2"
volumes:
    prometheus_data: {}
services:

  # postgresql for prometheus
  pg_prometheus:
    image: timescale/pg_prometheus:master
    ports:
      - 5432:5432
    command:
      - 'postgres -csynchronous_commit=off'

  # storage adapter
  prometheus_postgresql_adapter:
    image: timescale/prometheus-postgresql-adapter:master
    ports:
      - 9201:9201
    links:
      - pg_prometheus
    depends_on:
      - pg_prometheus
    command:
      - '-pg-host=pg_prometheus'
      - '-pg-prometheus-log-samples'

  # prometheus 
  prometheus:
    image: prom/prometheus
    volumes:
      - type: bind
        source: ./configs/
        target: /etc/prometheus/
      - type: bind
        source: ./data
        target: /etc/prometheus/data
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/etc/prometheus/data'
      - '--storage.tsdb.retention=31d'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.enable-lifecycle'
    ports:
      - 9090:9090
    depends_on:
      - pg_prometheus
      - prometheus_postgresql_adapter
    links:
      - prometheus_postgresql_adapter

In my prometheus.yml I added urls for reading and writing:

remote_write:
  - url: "http://prometheus_postgresql_adapter:9201/write"
remote_read:
  - url: "http://prometheus_postgresql_adapter:9201/read"

After I launch it I have the following log with errors:

Starting daedra_pg_prometheus_1 ... done
Recreating daedra_prometheus_postgresql_adapter_1 ... done
Recreating daedra_prometheus_1 ... done
Attaching to daedra_pg_prometheus_1, daedra_prometheus_postgresql_adapter_1, daedra_prometheus_1
pg_prometheus_1                  | /usr/local/bin/docker-entrypoint.sh: line 145: exec: postgres -csynchronous_commit=off: not found
prometheus_postgresql_adapter_1  | host=pg_prometheus port=5432 user=postgres dbname=postgres password='' sslmode=disable connect_timeout=10
daedra_pg_prometheus_1 exited with code 127
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2090674Z caller=main.go:225 msg="Starting Prometheus" version="(version=2.1.0, branch=HEAD, revision=85f23d82a045d103ea7f3c89a91fba4a93e6367a)"
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2118463Z caller=main.go:226 build_context="(go=go1.9.2, user=root@6e784304d3ff, date=20180119-12:01:23)"
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2122849Z caller=main.go:227 host_details="(Linux 4.9.60-linuxkit-aufs #1 SMP Mon Nov 6 16:00:12 UTC 2017 x86_64 5e3d213d1b02 (none))"
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2127513Z caller=main.go:228 fd_limits="(soft=1048576, hard=1048576)"
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2159414Z caller=main.go:499 msg="Starting TSDB ..."
prometheus_1                     | level=info ts=2018-02-21T07:13:06.2159657Z caller=web.go:383 component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus_postgresql_adapter_1  | level=error ts=2018-02-21T07:13:07.5344368Z caller=client.go:99 err="dial tcp: lookup pg_prometheus on 127.0.0.11:53: server misbehaving"
prometheus_1                     | level=info ts=2018-02-21T07:13:07.969697Z caller=main.go:509 msg="TSDB started"
prometheus_1                     | level=info ts=2018-02-21T07:13:07.9697553Z caller=main.go:585 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus_1                     | level=info ts=2018-02-21T07:13:07.9818122Z caller=main.go:486 msg="Server is ready to receive web requests."
prometheus_1                     | level=info ts=2018-02-21T07:13:07.9818271Z caller=manager.go:59 component="scrape manager" msg="Starting scrape manager..."
daedra_prometheus_postgresql_adapter_1 exited with code 1
prometheus_1                     | level=error ts=2018-02-21T07:13:10.1086058Z caller=wal.go:709 component=tsdb msg="operation failed" err="sync WAL directory: sync /etc/prometheus/data/wal: invalid argument"
prometheus_1                     | level=warn ts=2018-02-21T07:13:11.1532438Z caller=queue_manager.go:485 component=remote msg="Error sending samples to remote storage" count=100 err="Post http://prometheus_postgresql_adapter:9201/write: dial tcp: lookup prometheus_postgresql_adapter on 127.0.0.11:53: server misbehaving"
prometheus_1                     | level=warn ts=2018-02-21T07:13:14.2745967Z caller=queue_manager.go:485 component=remote msg="Error sending samples to remote storage" count=100 err="Post http://prometheus_postgresql_adapter:9201/write: dial tcp: lookup prometheus_postgresql_adapter on 127.0.0.11:53: server misbehaving"
prometheus_1                     | level=warn ts=2018-02-21T07:13:17.4257119Z caller=queue_manager.go:485 component=remote msg="Error sending samples to remote storage" count=100 err="Post http://prometheus_postgresql_adapter:9201/write: dial tcp: lookup prometheus_postgresql_adapter on 127.0.0.11:53: server misbehaving"

Last line continues indefinitely.
What am I doing wrong?
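
One thing visible in the log above: pg_prometheus exits with "exec: postgres -csynchronous_commit=off: not found", i.e. the whole string is treated as a single executable name. In compose, a list-form command takes one item per argument; a sketch of that form, mirroring the args style used in the Kubernetes example earlier on this page:

  pg_prometheus:
    image: timescale/pg_prometheus:master
    command: ["postgres", "-csynchronous_commit=off"]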

make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found

> [root@vultr pg_prometheus]# make
> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -DINCLUDE_PACKAGE_SUPPORT=0 -MMD -I. -I./ -I/usr/pgsql-11/include/server -I/usr/pgsql-11/include/internal  -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o src/prom.o src/prom.c
> src/prom.c: In function ‘prom_construct’:
> src/prom.c:407:2: warning: implicit declaration of function ‘PG_GETARG_JSONB’ [-Wimplicit-function-declaration]
>   Jsonb    *jb = PG_GETARG_JSONB(3);
>   ^
> src/prom.c:407:17: warning: initialization makes pointer from integer without a cast [enabled by default]
>   Jsonb    *jb = PG_GETARG_JSONB(3);
>                  ^
> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -DINCLUDE_PACKAGE_SUPPORT=0 -MMD -I. -I./ -I/usr/pgsql-11/include/server -I/usr/pgsql-11/include/internal  -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o src/parse.o src/parse.c
> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -DINCLUDE_PACKAGE_SUPPORT=0 -MMD -I. -I./ -I/usr/pgsql-11/include/server -I/usr/pgsql-11/include/internal  -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o src/utils.o src/utils.c
> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -DINCLUDE_PACKAGE_SUPPORT=0 -MMD -shared -o pg_prometheus.so src/prom.o src/parse.o src/utils.o -L/usr/pgsql-11/lib  -Wl,--as-needed -L/usr/lib64/llvm5.0/lib  -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-11/lib',--enable-new-dtags
> /opt/rh/llvm-toolset-7/root/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2  -I. -I./ -I/usr/pgsql-11/include/server -I/usr/pgsql-11/include/internal  -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include -flto=thin -emit-llvm -c -o src/prom.bc src/prom.c
> make: /opt/rh/llvm-toolset-7/root/usr/bin/clang: Command not found
> make: *** [src/prom.bc] Error 127
> [root@vultr pg_prometheus]# grep clang ./* -R
> ./src/.dir-locals.el:          ;;(flycheck-clang-include-path . ("/usr/local/pgsql/include"
> [root@vultr pg_prometheus]# pwd
> /root/go/src/github.com/timescale/pg_prometheus
> [root@vultr pg_prometheus]#

Build fails

I could not build the adapter on a new Linux box due to compilation errors.

Also, the build instructions miss some critical steps. I had to debug make not finding the go libraries that glide downloaded because they were not in $HOME/go/src.

Retention

How do you drop chunks to reduce storage?
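
Assuming the tables are TimescaleDB hypertables created by pg_prometheus, as elsewhere on this page, old data is dropped with TimescaleDB's drop_chunks; a sketch (the exact signature varies between TimescaleDB versions, and the retention interval is illustrative):

SELECT drop_chunks(interval '30 days', 'metrics_values');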

Postgresql-adapter Dial tcp i/o timeout error

When running:

$ docker run -it --name prometheus_postgresql_adapter --link pg_prometheus -p 9201:9201 timescale/prometheus-postgresql-adapter:latest -pg.password=xxxxxx_ -pg.host=pg_prometheus -pg.prometheus-log-samples

I get the following error:

level=info ts=2019-04-30T02:46:57.114577849Z caller=log.go:25 config="&{remoteTimeout:30000000000 listenAddr::9201 telemetryPath:/metrics pgPrometheusConfig:{host:pg_prometheus port:5432 user:postgres password:xxxxxx_ database:postgres schema: sslMode:disable table:metrics copyTable: maxOpenConns:50 maxIdleConns:10 pgPrometheusNormalize:true pgPrometheusLogSamples:true pgPrometheusChunkInterval:43200000000000 useTimescaleDb:true dbConnectRetries:0 readOnly:false} logLevel:debug haGroupLockId:0 restElection:false prometheusTimeout:-1}"
level=info ts=2019-04-30T02:46:57.114774342Z caller=log.go:25 msg="host=pg_prometheus port=5432 user=postgres dbname=postgres password='xxxxxx_' sslmode=disable connect_timeout=10"
level=error ts=2019-04-30T02:47:07.114948565Z caller=log.go:33 err="dial tcp: i/o timeout"

I followed the instructions exactly. pg_prometheus and prom/prometheus run fine. The adapter encounters the dial tcp i/o timeout.

adapter cannot connect to postgresql with default credentials

Hi,

I have set up PostgreSQL 10 on my Ubuntu box and it is running. I also changed pg_hba.conf to allow connections to the host without a password by changing the METHOD from peer to trust. Still, when I run prometheus-postgresql-adapter, I get the following error:

root@ubuntu:~/work/src/prometheus-postgresql-adapter# ./prometheus-postgresql-adapter
level=info ts=2018-04-27T08:24:04.537165229Z caller=client.go:80 msg="host=localhost port=5432 user=postgres dbname=postgres password='' sslmode=disable connect_timeout=10"
level=error ts=2018-04-27T08:24:04.603877981Z caller=client.go:99 err="pq: password authentication failed for user "postgres""

But a direct connection to PostgreSQL is working, as tested below:
root@ubuntu:~/work/src/prometheus-postgresql-adapter# sudo psql -U postgres
psql (10.3 (Ubuntu 10.3-1.pgdg16.04+1))
Type "help" for help.

postgres=# \q

Binary for Windows, or a means to compile

I would like to install Prometheus and TimescaleDB on Windows Server 2012 (or to test on Windows 10), natively, not with Docker. But I didn't find binaries for Windows, and the Makefile is targeted at Docker and Linux.
So how do I compile or port it to Windows? (To be precise: I'm a newbie in Go.)
Thanx
F.Bevia
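
Go cross-compiles natively, so one option is to build a Windows binary from a Linux or macOS machine; a sketch adapting the build line from the Makefile output quoted in another issue on this page:

GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o prometheus-postgresql-adapter.exe main.go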

How to start the adapter on a slave PostgreSQL 10.5 server?

Hi, I have a master-slave configuration, and I am trying to run the adapter (read only) on the slave server for load balancing.
When run with these parameters:
/opt/pg_adapter/prometheus-postgresql-adapter-0.3-linux-amd64 -read.only -pg.database=prometheus -pg.password=mypass -pg.use-timescaledb=true -pg.user=myuser
the adapter logs:
level=error ts=2018-09-11T10:03:37.967475402Z caller=log.go:33 err="pq: cannot set transaction read-write mode during recovery"
Is this configuration supported? If so, where did I go wrong?
