alethio / eth2stats-client

Command line stats collector for Eth2Stats Ethereum 2 Network Monitor

Home Page: https://eth2stats.io

License: MIT License

Languages: Dockerfile 0.86%, Makefile 0.57%, Go 96.32%, Shell 2.25%

eth2stats-client's Introduction

Ethereum 2.0 Network Stats and Monitoring - CLI Client

This is an initial POC release of the eth2stats network monitoring suite.

It supports Prysm, Lighthouse, Teku, Nimbus, and v1 of the standardized API (Lodestar). Once the standard lands, the client will be refactored to support only that.

Supported clients and protocols:

Client           | Supported protocols | Supported features
Prysm            | gRPC                | Version, head, sync stats, memory, attestation count
Lighthouse (v1)  | HTTP                | Version, head, sync stats, memory
Teku             | HTTP                | Version, head, sync stats, memory
Lodestar (v1)    | HTTP                | Version, head, sync stats, memory, attestation count
Nimbus           | HTTP                | Version, head, sync stats, memory
Trinity          | —                   | —

Current live deployments:

Getting Started

The following section uses Docker. If you want to build from source, see "Building from source" below.

The most important flag to change is --eth2stats.node-name, which defines the name your node appears under on eth2stats.

Joining a Testnet

The first thing you should do is get a beacon node running and connected to your Eth2 network of choice.

The dashboard for the given testnet has an "Add your node" button. The client-specific information it shows is not always accurate, however; see the per-client options below.

docker run -d --name eth2stats --restart always --network="host" \
      -v ~/eth2stats/data:/data \
      alethio/eth2stats-client:latest \
      run --v \
      --eth2stats.node-name="YourPrysmNode" \
      --data.folder="/data" \
      --eth2stats.addr="grpc.sapphire.eth2stats.io:443" --eth2stats.tls=true \
      --beacon.type="changeme" --beacon.addr="changeme" --beacon.metrics-addr="changeme" # insert client-specific options here

Client options

Client version    | --beacon.type     | --beacon.addr          | --beacon.metrics-addr
Lighthouse v0.3.x | v1 (standard API) | http://localhost:5052  | http://127.0.0.1:5054/metrics (changed)
Lighthouse v0.2.x | lighthouse        | http://localhost:5052  | http://127.0.0.1:5052/metrics
Lodestar          | v1 (standard API) | http://localhost:9596  | http://127.0.0.1:8008/metrics
Nimbus            | nimbus            | http://localhost:9190  | http://127.0.0.1:8008/metrics
Prysm             | prysm             | localhost:4000 (gRPC!) | http://127.0.0.1:8080/metrics
Teku              | teku              | http://localhost:5051  | http://127.0.0.1:8008/metrics
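Before starting eth2stats-client, it can help to verify that the address you plan to pass to --beacon.addr is actually reachable. A minimal sketch, assuming a client that serves the standard v1 HTTP API on port 5052 (adjust host and port for your client); Prysm exposes gRPC instead, so there a plain TCP check is more appropriate:

# should return the client's version string if the standard HTTP API is reachable
curl -s http://localhost:5052/eth/v1/node/version

# for Prysm (gRPC), just check that the port accepts connections
nc -zv localhost 4000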

The metrics address is only required if you want to see your beacon node client's memory usage on eth2stats.

Securing your gRPC connection to the Beacon Chain

If your beacon node uses a TLS connection for its gRPC endpoint, you need to provide a valid certificate to eth2stats-client via the --beacon.tls-cert flag:

docker run -d --name eth2stats --restart always --network="host" \
      -v ~/eth2stats/data:/data \
      ... # omitted for brevity
      --beacon.type="prysm" --beacon.addr="localhost:4000" --beacon.tls-cert "/data/cert.pem"

Have a look at Prysm's documentation to learn how to start their Beacon Chain with TLS enabled and how to generate and use self-signed certificates.
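If you just want to experiment locally, a self-signed certificate can be generated with openssl; a minimal sketch (paths and subject are placeholders, the key is left unencrypted via -nodes, and the same certificate must also be configured on the Prysm side as described in their docs):

# generate a self-signed certificate/key pair valid for localhost (illustrative values only)
openssl req -x509 -newkey rsa:4096 -nodes \
    -keyout ~/eth2stats/data/key.pem \
    -out ~/eth2stats/data/cert.pem \
    -days 365 -subj "/CN=localhost"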

Metrics

If you want to see your beacon node client's memory usage as well, make sure you have metrics enabled and add this CLI argument, pointing at the right host, e.g. --beacon.metrics-addr="http://127.0.0.1:8080/metrics".

Default metrics endpoints of supported clients:

  • Lighthouse: 127.0.0.1:5054/metrics (using --metrics --metrics-address=127.0.0.1 --metrics-port=5054)
  • Teku: 127.0.0.1:8008/metrics (using --metrics-enabled=true in Teku options)
  • Prysm: 127.0.0.1:8080/metrics, monitoring enabled by default.
  • Nimbus: 127.0.0.1:8008/metrics (using --metrics --metrics-port=8008)
  • Lodestar: 127.0.0.1:8008/metrics (configure with "metrics": { "enabled": true, "serverPort": 8008} in config JSON)

The process_resident_memory_bytes gauge is extracted from the Prometheus metrics endpoint.
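To check that your client's metrics endpoint actually exposes this gauge before pointing eth2stats-client at it, you can query it directly (Prysm's default address shown; substitute your client's endpoint from the list above):

curl -s http://127.0.0.1:8080/metrics | grep process_resident_memory_bytes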

Building from source

Prerequisites

  • a working Go environment (tested with Go 1.14)
    • requires Go modules (Go >= 1.11)

Step-by-step

Clone the repo

git clone https://github.com/Alethio/eth2stats-client.git
cd eth2stats-client

Build the executable

We are using Go modules, so dependencies are downloaded automatically:

make build
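For reference, the build target essentially runs a go build with the version string baked in via -ldflags; roughly the following, where the version value shown is just the one quoted in the "Build fails" issue further down this page:

go build -ldflags "-X main.buildVersion=v0.0.3-872141e"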

Run

eth2stats-client is started with the run subcommand and the per-client flags described above.

Example for Lighthouse:

./eth2stats-client run \
                   --eth2stats.node-name="YourNode" \
                   --eth2stats.addr="grpc.example.eth2stats.io:443" --eth2stats.tls=true \
                   --beacon.type="lighthouse" --beacon.addr="http://localhost:5052"

Note that since Prysm uses gRPC, its addr flag does not start with http://, unlike the other clients; it would be e.g. --beacon.addr="localhost:4000".
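Put together, a Prysm run would look like this (same flags as the Lighthouse example above; the node name and eth2stats address are illustrative):

./eth2stats-client run \
                   --eth2stats.node-name="YourNode" \
                   --eth2stats.addr="grpc.example.eth2stats.io:443" --eth2stats.tls=true \
                   --beacon.type="prysm" --beacon.addr="localhost:4000"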

For the other clients, the invocation is the same as for Lighthouse; just replace the client name.

Valid client names are prysm, lighthouse, teku, nimbus, and lodestar, plus v1 for the standard API, which all clients are planning to adopt.

eth2stats-client's People

Contributors

kwix, lacasian, linki, mpetrunic, protolambda, raphpa, tzapu, zediir


eth2stats-client's Issues

System Slot feature

With the recent Medalla clock chaos, it became apparent how important a node's system time is. And since this is no secret to the outside world, why not share it on eth2stats to raise awareness and alert users?

Possible design:

  • Client sends current timestamp (timestamp is put in the message after calling api methods etc.)
  • Include system slot in the /clients response data. 0 = not available
  • Server subtracts genesis time, then divides by the slot time, to calculate the "system slot" (float64); see the sketch after this list
  • Dashboard adds some visual marker on an entry if the system time is different than the eth2stats server time (do not compare to browser time, that adds even more latency). Alert users when nodes are having a bad time (no pun intended)!
    • Show the color based on difference with expected time. d = abs(server_time - client_time), d < 0.4: green, 0.4<=d<1.0: yellow, 1.0<=d<3.0: orange, d>=3.0: red.
  • Dashboard shows a warning at the top if the median (bucketed by second) time of all clients differs too much from the server, as a sanity check. The server itself may have an inaccurate time.
  • Dashboard shows a warning at the top if the server time itself is inaccurate compared to browser (user) time.
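A back-of-the-envelope version of that system-slot calculation, with illustrative values only (the genesis timestamp is the Medalla value appearing in logs elsewhere on this page, and 12-second slots are assumed):

# sketch: system slot = (client_time - genesis_time) / seconds_per_slot
genesis_time=1596546008      # Medalla genesis, for illustration
seconds_per_slot=12
client_time=$(date +%s)
awk -v t="$client_time" -v g="$genesis_time" -v s="$seconds_per_slot" \
    'BEGIN { printf "system slot: %.3f\n", (t - g) / s }'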

Questions:

  • Account for client-server latency (maybe track server-client latency somehow and adjust times with it). It's nice when all times can be compared more accurately.
  • How to deploy the feature on eth2stats.io main site, and which dashboard changes to make, to keep it compatible and not shake the current setup too much.

Related cleanup:

  • clients are all sending genesis time over and over again (???)
  • server needs to be configured with genesis time, timestamp option would be nice (more common in client api formats)
  • no need to track genesis time per client (head root already signifies the difference in beacon state, genesis time is included in it)

I'm happy to implement all this if others like the idea, and agree on above design. Feedback welcome.

Fatal upstream request timeout

time="2020-02-13T08:57:53Z" level=error msg="[prysm] rpc error: code = Unavailable desc = UNAVAILABLE:upstream request timeout"
time="2020-02-13T08:57:53Z" level=fatal msg="[telemetry] rpc error: code = Unavailable desc = UNAVAILABLE:upstream request timeout"

Docker image was alethio/eth2stats-client@sha256:a0b388460afcaf3a336fb47ef188eede0a5b92760a72fd0b1ec3e77025055f31

Nimbus - could not find `process_resident_memory_bytes` in metrics

Running Nimbus client 0.6.6
Running eth2stats-client version v0.0.16+d729a1d

Description of the warn message:
eth2stats-client is looking for process_resident_memory_bytes

The closest stats that Nimbus reports are:
nim_gc_mem_bytes 4100096.0
nim_gc_mem_occupied_bytes 2733144.0
sqlite3_memory_used_bytes 2614336.0

--Version flag not working

When entering eth2stats-client --version, an error message appears.
Just asking for the version should not depend on any config file.
Also, the setup for the eth2 client currently goes entirely via parameters, with no config file.


Favorited node resets to unfavorited after X minutes

I love the site and have been using it over the last few days to track my node's participation. However, after leaving the tab and coming back to it (maybe after 30 or 60 minutes), I find that my node ("alex.eth"), which had been favorited, is no longer favorited.


Nodes are not visible in Global Map screen

The node name was not visible on the Global Map screen; however, I am able to see the node name in the list.
The nodes are hosted on Google Cloud Platform in Singapore.

Attached is a snapshot of the node name in the list.

Pyrmont testnet

Although the Medalla testnet is still alive, it has been deprecated and is no longer maintained. Are there any plans to switch to the latest testnet, Pyrmont?

Support for Lighthouse v0.3.0+

Looks like Lighthouse changed their HTTP API
sigp/lighthouse#1434

I think that it now breaks the eth2stats-client

WARN Error processing HTTP API request method: GET, path: /beacon/head, status: 405 Method Not Allowed, elapsed: 271.068µs

Received message larger than max (8400136 vs. 4194304)

Sometimes the client crashes and on restart it keeps crashing with the following error message:

time="2020-02-10T15:42:17Z" level=error msg="[prysm] rpc error: code = ResourceExhausted desc = grpc: received message larger than max (8400136 vs. 4194304)"
time="2020-02-10T15:42:17Z" level=fatal msg="[telemetry] rpc error: code = ResourceExhausted desc = grpc: received message larger than max (8400136 vs. 4194304)"

This wasn't an issue before and I cannot seem to reproduce it consistently. I'm running the client as part of a docker-compose stack, together with the Prysm beacon node:

version: '2'

services:
  node:
    image: gcr.io/prysmaticlabs/prysm/beacon-chain:latest
    restart: always
    stdin_open: true
    tty: true
    command: --datadir=/data --p2p-host-ip=94.103.153.169 --min-sync-peers=7 --p2p-max-peers=100 --deposit-contract=0x4689a3C63CE249355C8a573B5974db21D2d1b8Ef
    ports:
     - 4000:4000
     - 13000:13000
    volumes:
     - '/opt/prysm/beacon:/data'
    labels:
     - 'com.centurylinklabs.watchtower.enable=true'
     
  ethstats:
    image: alethio/eth2stats-client:latest
    restart: always
    command: run --v --eth2stats.node-name="morten.eth" --data.folder="/data" --eth2stats.addr="grpc.sapphire.eth2stats.io:443" --eth2stats.tls=true --beacon.type="prysm" --beacon.addr="node:4000" --beacon.metrics-addr="http://node:8080/metrics"
    volumes:
     - '/opt/prysm/stats:/data'
    labels:
      - 'com.centurylinklabs.watchtower.enable=true'
    depends_on:
     - node
    links:
     - node:node

  watchtower:
    image: containrrr/watchtower
    restart: always
    command: --label-enable --cleanup --include-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

eth2stats adds a new gRPC connection to the Prysm node every 12 seconds until the OS runs out of file handles

Hi,

I added my Prysm node (alpha17) to eth2stats this morning. Later in the afternoon it dropped off the stats board, so I looked into the tmux session running my node and noticed it is throwing errors because it can't open any file handles anymore. I also saw that it keeps logging "New gRPC connection to beacon node" with a new port every 12 seconds. I simply rebooted the device and everything was back to normal. I investigated what these ports are about and saw that all these connections are ESTABLISHED when I look at lsof. After stopping the container it immediately stops adding new connections. It must be because of the pre-genesis condition. I will turn the container back on after genesis, otherwise it will crash the node again.


Crashes on startup if beacon node not yet ready.

It seems during the start-up, if the beacon node is not yet ready to serve traffic, eth2stats will crash.

This causes a problem when running both of them in a kubernetes pod together. The pod starts up, starting a beacon node, and eth2stats. The beacon node takes a little time to be able to start serving traffic, but eth2stats attempts immediately, fails, and crashes. This causes all the containers to be restarted, and we get into an infinite loop of failure.

Would it make sense to have some retry/back-off mechanism at startup to be more forgiving of the beacon node not yet being ready?
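Until such a mechanism exists in the client itself, one crude external workaround is to wrap the start-up in a retry loop; a minimal sketch, reusing the illustrative flags from the README above (adjust to your setup):

# restart eth2stats-client whenever it exits with an error,
# e.g. because the beacon node is not ready to serve traffic yet
until ./eth2stats-client run \
        --eth2stats.node-name="YourNode" \
        --eth2stats.addr="grpc.example.eth2stats.io:443" --eth2stats.tls=true \
        --beacon.type="prysm" --beacon.addr="localhost:4000"; do
    echo "eth2stats-client exited; retrying in 10s..." >&2
    sleep 10
done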

fatal error: invalid node URL: "beacon:4000"

Not sure if I'm missing something, but when trying to run the eth2stats client inside a docker-compose setup alongside a working beacon (reachable at beacon:4000), I end up with the following error on startup:

prysm_eth2stats | time="2020-08-17T09:02:05Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
prysm_eth2stats | time="2020-08-17T09:02:05Z" level=fatal msg="[core] invalid node URL: \"beacon:4000\""

Compose block:

  stats:
    image: alethio/eth2stats-client:latest
    restart: always
    container_name: prysm_eth2stats
    hostname: prysm_eth2stats
    environment:
      - TZ=${TZ}
    volumes:
      - "./volumes/eth2stats/data:/data:rw"
    command:
      - run
      - --eth2stats.node-name="mgcrea-prysm-01"
      - --data.folder="/data"
      - --eth2stats.addr="grpc.medalla.eth2stats.io:443"
      - --eth2stats.tls=true
      - --beacon.type="prysm"
      - --beacon.addr="beacon:4000"
      - --beacon.metrics-addr="http://beacon:8080/metrics"
    depends_on:
      - beacon

Lighthouse not updating

My eth2stats for Lighthouse doesn't seem to be updating.

time="2020-07-17T19:04:56Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
time="2020-07-17T19:04:56Z" level=info msg="[core] setting up eth2stats server connection"
time="2020-07-17T19:04:56Z" level=info msg="[core] getting beacon client version"
time="2020-07-17T19:04:56Z" level=info msg="[core] got beacon client version" version=Lighthouse/v0.1.2-unstable/x86_64-linux
time="2020-07-17T19:04:56Z" level=info msg="[core] getting beacon client genesis time"
time="2020-07-17T19:04:56Z" level=info msg="[core] beacon client genesis time" genesisTime=1593433805
time="2020-07-17T19:04:56Z" level=info msg="[core] awaiting connection to eth2stats server"
time="2020-07-17T19:05:03Z" level=info msg="[core] getting chain head for initial feed"
time="2020-07-17T19:05:03Z" level=info msg="[core] got chain head" headSlot=131574
time="2020-07-17T19:05:03Z" level=info msg="[core] successfully connected to eth2stats server"
time="2020-07-17T19:05:03Z" level=info msg="[core] setting up chain heads subscription"
time="2020-07-17T19:05:03Z" level=info msg="[polling] polling for new heads"

That's all it does; no other logs at all. But my beacon node is running fine.

If I stop the Docker container and restart it, it updates the headSlot to the latest headSlot, then sits there again.

I appear to be on the latest commit d1c921565596bbaf0218e24f96cf7e351685eaa5

Secure gRPC?

Is there any way to get eth2stats to connect to secure gRPC? I just got prysm set up to use a certificate, but then realized eth2stats won't work with it. :(

on teku, pre genesis, an error is thrown

Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] getting beacon client version"
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] got beacon client version" version=teku/v0.11.3-dev-44d9e02a/linux-aarch_64/-ubuntu-
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=info msg="[core] getting beacon client genesis time"
Mai 26 10:21:19 ethnode-7451d4c7 eth2stats-client[21290]: time="2020-05-26T10:21:19+02:00" level=error msg="[main] setting up: strconv.ParseInt: parsing "": invalid syntax"

It should be able to deal with this and retry until everything is OK.

Cannot connect to beacon node

running eth2stats gives the following error:

FATA[0000] [core] invalid node URL: localhost:5052

I run eth2stats with the following parameter:

./eth2stats-client run
--eth2stats.node-name="ethnode-c65e37164"
--eth2stats.addr="grc.summer.eth2stats.io:443"
--eth2stats.tls=true
--beacon.type="lighthouse"
--beacon.addr="localhost:5052"
--beacon.metrics-addr="http://localhost:8080/metrics"

lighthouse is up and running fine.

A quick look at the listening ports with sudo lsof -i -P -n | grep LISTEN
gives the following output:

lighthous 3962 ubuntu 30u IPv4 728231 0t0 TCP *:9000 (LISTEN)
lighthous 3962 ubuntu 35u IPv4 728237 0t0 TCP 127.0.0.1:5052 (LISTEN)

Do you have any ideas what it could be?

Configurable token.dat path

Would it be possible to provide a flag for the token.dat path?
This would help us add the eth2stats client to our deployments.

Build fails

Trying to build the repo on a NANOPC T4. This is the output:

root@ethnode-9538ecf1:~/eth2stats-client# make build
go build -ldflags "-X main.buildVersion="v0.0.3-872141e""
main.go:7:2: cannot find package "github.com/alethio/eth2stats-client/commands" in any of:
/usr/lib/go-1.10/src/github.com/alethio/eth2stats-client/commands (from $GOROOT)
/root/eth2stats-client/src/github.com/alethio/eth2stats-client/commands (from $GOPATH)
Makefile:4: recipe for target 'build' failed
make: *** [build] Error 1

Not really familiar with Go. Do you have any hints?

Take into account peer state for standard API

Lighthouse nodes currently have their peer count overstated on eth2stats.io because eth2stats-client ignores the peer states when reading /eth/v1/node/peers. I think it should use the state field to work out which peers are connected.

E.g. eth2-stats reports this number:

$ curl -s "http://localhost:5052/eth/v1/node/peers" | jq '.data | length'
244

when it should report:

$ curl -s "http://localhost:5052/eth/v1/node/peers" | jq '.data | map(select(.state == "connected")) | length'
50

404 error code with Medalla

Hi, running the Eth2stats client as a DAppNode package and the Prysm beacon chain on the Medalla testnet with the following config parameters:

BEACON_ADDR prysm-medalla-beacon-chain.dappnode:4000
BEACON_METRICS http://prysm-medalla-beacon-chain.dappnode:8080/metrics
BEACON_TYPE prysm

Returns the following error:

time="2020-08-07T08:55:20Z" level=info msg="Could not load config file. Falling back to args. Error: Config File "config" Not Found in "[/]"" module=main
time="2020-08-07T08:55:20Z" level=debug msg="[main] Debug mode"
time="2020-08-07T08:55:20Z" level=info msg="[prysm] setting up beacon client connection"
time="2020-08-07T08:55:20Z" level=info msg="[core] setting up eth2stats server connection"
time="2020-08-07T08:55:20Z" level=debug msg="[core] looking for existing token"
time="2020-08-07T08:55:20Z" level=warning msg="[core] token file not found; will register as new client"
time="2020-08-07T08:55:20Z" level=info msg="[core] getting beacon client version"
time="2020-08-07T08:55:20Z" level=info msg="[core] got beacon client version" version="Prysm/v1.0.0-alpha.19/0d118df0343bf0e268e9fb4f2d5eb60156519c11. Built at: 2020-08-05 14:27:07+00:00"
time="2020-08-07T08:55:20Z" level=info msg="[core] getting beacon client genesis time"
time="2020-08-07T08:55:20Z" level=info msg="[core] got beacon client genesis time" genesisTime=1596546008
time="2020-08-07T08:55:20Z" level=info msg="[core] awaiting connection to eth2stats server"
time="2020-08-07T08:55:21Z" level=error msg="[main] setting up: eth2stats: failed to connect: rpc error: code = Unimplemented desc = Not Found: HTTP status code 404; transport: received the unexpected content-type "text/plain; charset=utf-8""
time="2020-08-07T08:55:33Z" level=info msg="[main] retrying..."

Eth2Stats client using 1.5GB memory

While investigating an OOM killing, I noticed eth2stats-client using 1.5GB of memory, which is a lot more than I've ever seen it use previously. I'm using the latest v1 node type with Lighthouse v0.3.0:

~/eth2stats-client/eth2stats-client run \
        --eth2stats.node-name="$node_name" \
        --data.folder ~/.eth2stats/data \
        --eth2stats.addr="grpc.medalla.eth2stats.io:443" --eth2stats.tls=true \
        --beacon.type="v1" \
        --beacon.addr="http://localhost:5052" \
        --beacon.metrics-addr="http://localhost:5054/metrics"

My only hunch about what it could be is the attestation pool. Lighthouse hoards a lot of attestations in its pool during periods without finality, e.g. I currently have 329822 attestations, which consume 188MB as a JSON response from /eth/v1/beacon/pool/attestations. Still, this is an order of magnitude lower than the amount of memory eth2stats is consuming.

When I restarted the client its memory usage quickly jumped back up to around 1.3GB:


Version: eth2stats-client version v0.0.14+1455e8d

Error trying to run on Archlinux aarch64

Getting error standard_init_linux.go:211: exec user process caused "exec format error"

[pi@archlinux ~]$ docker run --restart always --network="host" --name eth2stats-client -v ~/.eth2stats/data:/data alethio/eth2stats-client:latest run --eth2stats.node-name="archlinuxarm" --data.folder="/data" --eth2stats.addr="grpc.topaz.eth2stats.io:443" --eth2stats.tls=true --beacon.type="prysm" --beacon.addr="localhost:4000" --beacon.metrics-addr="http://localhost:8080/metrics"
standard_init_linux.go:211: exec user process caused "exec format error"
[pi@archlinux ~]$ uname -a
Linux archlinux 5.4.38-1-ARCH #1 SMP PREEMPT Wed May 6 11:05:57 MDT 2020 aarch64 GNU/Linux

Timestamp in log

Currently the log entries do not show a timestamp. It would be nice to see when a log-entry is actually printed. This way we have a "time reference".

WARN[2639] [core] ChainHead request was skipped due to rate limiting
INFO[2641] [prysm] got chain head headSlot=67504
INFO[2643] [prysm] got chain head headSlot=67505
INFO[2644] [prysm] got chain head headSlot=67506
INFO[2646] [prysm] got chain head headSlot=67507
INFO[2648] [prysm] got chain head headSlot=67508
INFO[2650] [prysm] got chain head headSlot=67509
INFO[2652] [prysm] got chain head headSlot=67510
INFO[2653] [prysm] got chain head headSlot=67511
INFO[2655] [prysm] got chain head headSlot=67512
INFO[2657] [prysm] got chain head headSlot=67513
INFO[2659] [prysm] got chain head headSlot=67514
INFO[2659] [prysm] got chain head headSlot=67515
WARN[2659] [core] ChainHead request was skipped due to rate limiting
INFO[2661] [prysm] got chain head headSlot=67516
INFO[2662] [prysm] got chain head headSlot=67517
WARN[2662] [core] ChainHead request was skipped due to rate limiting
INFO[2664] [prysm] got chain head headSlot=67518

Can't pull peer count

Getting the following error messages. I can confirm the node shows up on the site, but indeed the peer count is empty.

INFO[0000] [metrics-watcher] Started polling metrics    
INFO[0000] [metrics-watcher] querying metrics           
INFO[0000] [prysm] listening on stream                  
ERRO[0000] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 
ERRO[0000] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 
ERRO[0012] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 
ERRO[0012] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 
ERRO[0024] [prysm] rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 
ERRO[0024] [telemetry] getting peer count: rpc error: code = Unknown desc = runtime error: invalid memory address or nil pointer dereference 

Site on mobile phone no longer working

When I open eth2stats.io on my iPhone (iOS 13.3.1), it gets stuck at the opening screen (see screenshot). It no longer gets to the list of nodes. In the past (some weeks ago) this was working; I guess that was before the latest update. The desktop version continues to work.


Prysm gets more verbose logs than other clients

For Prysm, eth2stats keeps logging each new headSlot, while for other clients logging quiets down and only errors are logged. I guess this might have to do with the Prysm connection being gRPC, and therefore the slightly different implementation.

Prysm

prysm_eth2stats_1  | time="2020-08-08T12:00:06Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
prysm_eth2stats_1  | time="2020-08-08T12:00:06Z" level=info msg="[prysm] setting up beacon client connection"
prysm_eth2stats_1  | time="2020-08-08T12:00:06Z" level=warning msg="[prysm] no tls certificate provided; will use insecure connection to beacon chain"
prysm_eth2stats_1  | time="2020-08-08T12:00:06Z" level=info msg="[core] setting up eth2stats server connection"
prysm_eth2stats_1  | time="2020-08-08T12:00:06Z" level=info msg="[core] getting beacon client version"
prysm_eth2stats_1  | time="2020-08-08T12:00:08Z" level=info msg="[core] got beacon client version" version="Prysm/v1.0.0-alpha.19/ec21316efd11bce1a84fb713b0db5bf2d025f9b6. Built at: 2020-08-06 05:39:54+00:00"
prysm_eth2stats_1  | time="2020-08-08T12:00:08Z" level=info msg="[core] getting beacon client genesis time"
prysm_eth2stats_1  | time="2020-08-08T12:00:08Z" level=info msg="[core] beacon client genesis time" genesisTime=0
prysm_eth2stats_1  | time="2020-08-08T12:00:08Z" level=info msg="[core] awaiting connection to eth2stats server"
prysm_eth2stats_1  | time="2020-08-08T12:00:09Z" level=info msg="[core] getting chain head for initial feed"
prysm_eth2stats_1  | time="2020-08-08T12:00:09Z" level=info msg="[core] got chain head" headSlot=28495
prysm_eth2stats_1  | time="2020-08-08T12:00:09Z" level=info msg="[core] successfully connected to eth2stats server"
prysm_eth2stats_1  | time="2020-08-08T12:00:09Z" level=info msg="[core] setting up chain heads subscription"
prysm_eth2stats_1  | time="2020-08-08T12:00:09Z" level=info msg="[prysm] listening on stream"
prysm_eth2stats_1  | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26625
prysm_eth2stats_1  | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26627
prysm_eth2stats_1  | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26628
prysm_eth2stats_1  | time="2020-08-08T12:01:17Z" level=info msg="[prysm] got chain head" headSlot=26629
prysm_eth2stats_1  | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26630
prysm_eth2stats_1  | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26631
prysm_eth2stats_1  | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26632
prysm_eth2stats_1  | time="2020-08-08T12:01:18Z" level=info msg="[prysm] got chain head" headSlot=26635
prysm_eth2stats_1  | time="2020-08-08T12:01:19Z" level=info msg="[prysm] got chain head" headSlot=26636
prysm_eth2stats_1  | time="2020-08-08T12:01:19Z" level=info msg="[prysm] got chain head" headSlot=26637
# ... keeps logging "[prysm] got chain head" for each new head slot

Lighthouse

lighthouse_eth2stats_1  | time="2020-08-08T10:01:44Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] setting up eth2stats server connection"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] getting beacon client version"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] got beacon client version" version=Lighthouse/v0.1.2-unstable/x86_64-linux
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] getting beacon client genesis time"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] beacon client genesis time" genesisTime=1596546008
lighthouse_eth2stats_1  | time="2020-08-08T10:02:23Z" level=info msg="[core] awaiting connection to eth2stats server"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:29Z" level=info msg="[core] getting chain head for initial feed"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:29Z" level=info msg="[core] got chain head" headSlot=26271
lighthouse_eth2stats_1  | time="2020-08-08T10:02:29Z" level=info msg="[core] successfully connected to eth2stats server"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:29Z" level=info msg="[core] setting up chain heads subscription"
lighthouse_eth2stats_1  | time="2020-08-08T10:02:29Z" level=info msg="[polling] polling for new heads"
# ... no further logs unless there's an error, while latest data continues to show on eth2stats.io correctly

Teku

teku_eth2stats_1  | time="2020-08-08T12:23:29Z" level=info msg="Could not load config file. Falling back to args. Error: Config File \"config\" Not Found in \"[/]\"" module=main
teku_eth2stats_1  | time="2020-08-08T12:24:44Z" level=info msg="[core] setting up eth2stats server connection"
teku_eth2stats_1  | time="2020-08-08T12:24:44Z" level=info msg="[core] getting beacon client version"
teku_eth2stats_1  | time="2020-08-08T12:24:46Z" level=info msg="[core] got beacon client version" version=teku/v0.12.3-dev-b40fd617/linux-x86_64/oracle_openjdk-java-14
teku_eth2stats_1  | time="2020-08-08T12:24:46Z" level=info msg="[core] getting beacon client genesis time"
teku_eth2stats_1  | time="2020-08-08T12:24:46Z" level=info msg="[core] beacon client genesis time" genesisTime=1596546008
teku_eth2stats_1  | time="2020-08-08T12:24:46Z" level=info msg="[core] awaiting connection to eth2stats server"
teku_eth2stats_1  | time="2020-08-08T12:24:46Z" level=info msg="[core] getting chain head for initial feed"
teku_eth2stats_1  | time="2020-08-08T12:24:47Z" level=info msg="[core] got chain head" headSlot=28624
teku_eth2stats_1  | time="2020-08-08T12:24:47Z" level=info msg="[core] successfully connected to eth2stats server"
teku_eth2stats_1  | time="2020-08-08T12:24:47Z" level=info msg="[core] setting up chain heads subscription"
teku_eth2stats_1  | time="2020-08-08T12:24:47Z" level=info msg="[polling] polling for new heads"
# ... no further logs unless there's an error, while latest data continues to show on eth2stats.io correctly

Expected behaviour

Logging quiets down after a successful chain heads subscription. Perhaps the chain head log gets demoted to "debug" level?

teku metrics polling fails to parse

teku_stats_1  | time="2020-08-17T23:50:04Z" level=warning msg="[metrics-watcher] failed to poll metrics: text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1  | time="2020-08-17T23:50:04Z" level=info msg="[metrics-watcher] querying metrics"
teku_stats_1  | time="2020-08-17T23:50:04Z" level=error msg="[metrics-watcher] reading text format failed:text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1  | time="2020-08-17T23:50:04Z" level=warning msg="[metrics-watcher] failed to poll metrics: text format parsing error in line 934: expected float as value for 'quantile' label, got \"50%\""
teku_stats_1  | time="2020-08-17T23:50:05Z" level=info msg="[metrics-watcher] querying metrics"

teku metrics response values

# TYPE validator_attestation_publication_delay summary
validator_attestation_publication_delay{quantile="50%",} 0.0
validator_attestation_publication_delay{quantile="95%",} 0.0
validator_attestation_publication_delay{quantile="99%",} 0.0
validator_attestation_publication_delay{quantile="100%",} 0.0
