
🌌 A network agnostic DHT crawler, monitor, and measurement tool that exposes timely information about DHT networks.

License: Apache License 2.0

Makefile 0.30% Go 89.54% Dockerfile 0.14% PLpgSQL 10.01%
ipfs golang libp2p filecoin crawler cid hacktoberfest

nebula's Introduction

Nebula Logo

Nebula


A network agnostic DHT crawler and monitor. The crawler connects to DHT bootstrappers and then recursively follows all entries in their k-buckets until all peers have been visited. The crawler supports several networks; run nebula networks for the complete list.


📊 ProbeLab is publishing weekly reports for the IPFS Amino DHT based on the crawl results here! 📊

📺 You can find a demo on YouTube: Nebula: A Network Agnostic DHT Crawler 📺

Screenshot from a Grafana dashboard


Project Status

The crawler is powering critical IPFS Amino DHT KPIs, used for the Weekly IPFS Reports as well as for many metrics on probelab.io. The main branch contains the latest changes and should not be considered stable. The latest stable, production-ready release is version 2.2.0. The numbers gathered about the IPFS Amino DHT network are in line with existing data, such as that from the wiberlin/ipfs-crawler. Their crawler also powers a dashboard which can be found here. Numbers for the Ethereum Consensus Layer do not match those from other teams, like MigaLabs', as can be seen on their dashboard. However, this seems to be due to different ways of aggregating and grouping peers in the network.

Install

Precompiled Binaries

Head over to the release section and download binaries from the latest stable release.

From source

Nebula has a hard dependency on Go 1.19 because Nebula requires go-libp2p <0.30. With version 0.30, go-libp2p dropped support for the quic transport and only continues to support quic-v1 (release notes). However, many peers in the IPFS Amino DHT still only listen on quic addresses (as opposed to quic-v1). Many of them also listen over tcp, but in my experiments they often refused connections over tcp. As of 2023-12-02, this results in a significant increase in undialable peers that Nebula was previously able to connect to and identify.

Until the impact of dropping the quic transport is negligible or some new go-libp2p feature justifies an update, Nebula will stick to the old go-libp2p version.

Furthermore, go-libp2p depends on quic-go, and specific versions of quic-go can only be compiled with specific versions of Go. I'm currently sticking to Go 1.19; it might be possible to update to Go 1.20, but I haven't had the time to test this yet.

git clone https://github.com/dennis-tra/nebula
cd nebula
make build # Nebula requires Go 1.19!

Now you should find the nebula executable in the dist subfolder.

Usage

Nebula is a command line tool and provides the crawl sub-command.

Dry-Run

To simply crawl the IPFS Amino DHT network run:

nebula --dry-run crawl

The crawler can store its results as JSON documents or in a postgres database; the --dry-run flag prevents it from doing either. Nebula will just print a summary of the crawl at the end instead. A crawl takes ~5-10 min depending on your internet connection. You can also specify the network you want to crawl by appending, e.g., --network FILECOIN, and limit the number of peers to crawl by providing the --limit flag with a value of, e.g., 1000. Example:

nebula --dry-run crawl --network FILECOIN --limit 1000

To find out which other network values are supported, you can run:

nebula networks

JSON Output

To store crawl results as JSON files provide the --json-out command line flag like so:

nebula --json-out ./results/ crawl

After the crawl has finished, you will find the JSON files in the ./results/ subdirectory.

When providing only the --json-out command line flag, you will see that the *_neighbors.json document is empty. This document would contain the full routing table information of each peer in the network, which is quite a bit of data (~250MB for the Amino DHT as of April '23) and is therefore disabled by default.

Track Routing Table Information

To populate the document, you'll need to pass the --neighbors flag to the crawl subcommand.

nebula --json-out ./results/ crawl --neighbors

The routing table information forms a graph and graph visualization tools often operate with adjacency lists. To convert the *_neighbors.json document to an adjacency list, you can use jq and the following command:

jq -r '.NeighborIDs[] as $neighbor | [.PeerID, $neighbor] | @csv' ./results/2023-04-16T14:32_neighbors.json > ./results/2023-04-16T14:32_neighbors.csv
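
If you prefer Go over jq, the same conversion can be sketched as below. This is only a sketch: it assumes the neighbors document is a stream of JSON objects with PeerID and NeighborIDs fields, matching the jq filter above; check the actual output of your Nebula version before relying on it.

// adjacency.go - convert a *_neighbors.json document into a CSV adjacency list.
// Assumes a stream of objects like {"PeerID": "...", "NeighborIDs": ["...", ...]},
// i.e. the same fields the jq filter above uses.
package main

import (
	"encoding/csv"
	"encoding/json"
	"io"
	"log"
	"os"
)

type neighbors struct {
	PeerID      string   `json:"PeerID"`
	NeighborIDs []string `json:"NeighborIDs"`
}

func main() {
	dec := json.NewDecoder(os.Stdin)
	w := csv.NewWriter(os.Stdout)
	defer w.Flush()

	for {
		var n neighbors
		if err := dec.Decode(&n); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		for _, nb := range n.NeighborIDs {
			if err := w.Write([]string{n.PeerID, nb}); err != nil {
				log.Fatal(err)
			}
		}
	}
}

Run it with, e.g., go run adjacency.go < ./results/2023-04-16T14:32_neighbors.json > neighbors.csv.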

Postgres

If you want to store the information in a proper database, you could run make database or make databased (for running it in the background) to start a local postgres instance and run Nebula like:

nebula --db-user nebula_test --db-name nebula_test crawl --neighbors

At this point, you can also start Nebula's monitoring process, which would periodically probe the discovered peers to track their uptime. Run in another terminal:

nebula --db-user nebula_test --db-name nebula_test monitor

When Nebula is configured to store its results in a postgres database, it also tracks session information of remote peers. A session is one continuous streak of uptime (see below).


There are a few more command line flags that are documented when you run nebula --help and nebula crawl --help.

How does it work?

crawl

The crawl sub-command starts by connecting to a set of bootstrap nodes and constructing the routing tables (kademlia k-buckets) of these peers based on their PeerIDs. Then nebula builds random PeerIDs with common prefix lengths (CPL) that fall into each peer's buckets and asks each remote peer if they know any peers that are closer (by XOR distance) to the ones nebula just constructed. This effectively yields a list of all PeerIDs that a peer has in its routing table. The process repeats for all found peers until nebula does not find any new PeerIDs.

This process is heavily inspired by the basic-crawler in libp2p/go-libp2p-kad-dht from @aschmahmann.
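
Conceptually, the crawl is a breadth-first traversal over routing tables. The following sketch only illustrates that idea and is not Nebula's actual implementation; queryRoutingTable stands in for the FIND_NODE requests Nebula issues for random PeerIDs per common prefix length.

// Conceptual sketch of the crawl loop - NOT Nebula's actual implementation.
// Peer IDs are plain strings here; queryRoutingTable stands in for the
// FIND_NODE requests sent for random PeerIDs per common prefix length.
func crawl(bootstrap []string, queryRoutingTable func(string) []string) map[string][]string {
	visited := make(map[string][]string)
	queue := append([]string{}, bootstrap...)
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		if _, done := visited[p]; done {
			continue // already crawled this peer
		}
		neighbors := queryRoutingTable(p) // every PeerID in p's routing table
		visited[p] = neighbors
		for _, n := range neighbors {
			if _, done := visited[n]; !done {
				queue = append(queue, n) // new peer, crawl it too
			}
		}
	}
	return visited // the crawl finishes when no new PeerIDs turn up
}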

If Nebula is configured to store its results in a database, every peer that was visited is written to it. The visit information includes latency measurements (dial/connect/crawl durations), the current set of multi addresses, the current agent version, and the current set of supported protocols. If the peer was dialable, nebula will also create a session instance that contains the following information:

CREATE TABLE sessions (
    -- A unique id that identifies this particular session
    id                      INT GENERATED ALWAYS AS IDENTITY,
    -- Reference to the remote peer ID. (database internal ID)
    peer_id                 INT           NOT NULL,
    -- Timestamp of the first time we were able to visit that peer.
    first_successful_visit  TIMESTAMPTZ   NOT NULL,
    -- Timestamp of the last time we were able to visit that peer.
    last_successful_visit   TIMESTAMPTZ   NOT NULL,
    -- Timestamp when we should start visiting this peer again.
    next_visit_due_at       TIMESTAMPTZ,
    -- When did we first notice that this peer is not reachable.
    first_failed_visit      TIMESTAMPTZ,
    -- When was the most recent failed visit to this peer.
    last_failed_visit       TIMESTAMPTZ,
    -- When did we last visit this peer. For indexing purposes.
    last_visited_at         TIMESTAMPTZ   NOT NULL,
    -- When was this session instance updated the last time
    updated_at              TIMESTAMPTZ   NOT NULL,
    -- When was this session instance created
    created_at              TIMESTAMPTZ   NOT NULL,
    -- Number of successful visits in this session.
    successful_visits_count INTEGER       NOT NULL,
    -- The number of times this session went from pending to open again.
    recovered_count         INTEGER       NOT NULL,
    -- The state this session is in (open, pending, closed)
    -- open: currently considered online
    -- pending: peer missed a dial and is pending to be closed
    -- closed: peer is considered to be offline and session is complete
    state                   session_state NOT NULL,
    -- Number of failed visits before closing this session.
    failed_visits_count     SMALLINT      NOT NULL,
    -- What's the first error before we close this session.
    finish_reason           net_error,
    -- The uptime time range for this session, measured from first_successful_visit to last_successful_visit.
    uptime                  TSTZRANGE     NOT NULL,

    -- The peer ID should always point to an existing peer in the DB
    CONSTRAINT fk_sessions_peer_id FOREIGN KEY (peer_id) REFERENCES peers (id) ON DELETE CASCADE,

    PRIMARY KEY (id, state, last_visited_at)

) PARTITION BY LIST (state);
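
As a small example of working with this table, the sketch below lists the currently open sessions using the default local development credentials from the Development section. It assumes the schema above and the github.com/lib/pq driver; it is not part of Nebula itself.

package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// Default local development settings (see the Development section below).
	dsn := "host=localhost port=5432 user=nebula_test password=password dbname=nebula_test sslmode=disable"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Peers currently considered online (state = 'open'), longest uptime first.
	rows, err := db.Query(`
		SELECT peer_id, first_successful_visit, last_successful_visit
		FROM sessions
		WHERE state = 'open'
		ORDER BY last_successful_visit - first_successful_visit DESC
		LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var peerID int
		var first, last time.Time
		if err := rows.Scan(&peerID, &first, &last); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("peer %d: online since %s (last seen %s)\n", peerID, first, last)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}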

At the end of each crawl, nebula persists general statistics about the crawl, like the total duration, number of dialable peers, encountered errors, agent versions, etc.

Info: You can use the crawl sub-command with the global --dry-run option that skips any database operations.

Command line help page:

NAME:
   nebula crawl - Crawls the entire network starting with a set of bootstrap nodes.

USAGE:
   nebula crawl [command options] [arguments...]

OPTIONS:
   --addr-dial-type value                               Which type of addresses should Nebula try to dial (private, public, any) (default: "public") [$NEBULA_CRAWL_ADDR_DIAL_TYPE]
   --addr-track-type value                              Which type addresses should be stored to the database (private, public, any) (default: "public") [$NEBULA_CRAWL_ADDR_TRACK_TYPE]
   --bootstrap-peers value [ --bootstrap-peers value ]  Comma separated list of multi addresses of bootstrap peers (default: default IPFS) [$NEBULA_CRAWL_BOOTSTRAP_PEERS, $NEBULA_BOOTSTRAP_PEERS]
   --limit value                                        Only crawl the specified amount of peers (0 for unlimited) (default: 0) [$NEBULA_CRAWL_PEER_LIMIT]
   --neighbors                                          Whether to persist all k-bucket entries of a particular peer at the end of a crawl. (default: false) [$NEBULA_CRAWL_NEIGHBORS]
   --network nebula networks                            Which network should be crawled. Presets default bootstrap peers and protocol. Run: nebula networks for more information. (default: "IPFS") [$NEBULA_CRAWL_NETWORK]
   --protocols value [ --protocols value ]              Comma separated list of protocols that this crawler should look for [$NEBULA_CRAWL_PROTOCOLS, $NEBULA_PROTOCOLS]
   --workers value                                      How many concurrent workers should dial and crawl peers. (default: 1000) [$NEBULA_CRAWL_WORKER_COUNT]

   Network Specific Configuration:

   --check-exposed  Whether to check if the Kubo API is exposed. Checking also includes crawling the API. (default: false) [$NEBULA_CRAWL_CHECK_EXPOSED]

monitor

The monitor sub-command polls the database (see above) every 10 seconds for all sessions that are due to be dialed in the next 10 seconds (based on the next_visit_due_at timestamp). It attempts to dial all those peers using their previously saved multi-addresses and updates their session instances according to whether they were dialable or not.

The next_visit_due_at timestamp is calculated based on the uptime that nebula has observed for the given peer. If the peer has been up for a long time, nebula assumes that it stays up and thus decreases the dial frequency, i.e., it sets the next_visit_due_at timestamp to a time further in the future.
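
The exact schedule is an implementation detail, but the idea can be sketched roughly as follows. Note that this is a hypothetical illustration, not Nebula's actual formula.

package main

import (
	"fmt"
	"time"
)

// nextVisitDueAt is a hypothetical illustration of the scheduling idea -
// NOT Nebula's actual formula. The longer a peer has been observed up,
// the further into the future the next probe is pushed, within bounds.
func nextVisitDueAt(firstSuccessfulVisit, lastSuccessfulVisit time.Time) time.Time {
	const (
		minInterval = 1 * time.Minute
		maxInterval = 1 * time.Hour
	)
	uptime := lastSuccessfulVisit.Sub(firstSuccessfulVisit)
	interval := uptime / 10 // probe long-running peers less often
	if interval < minInterval {
		interval = minInterval
	}
	if interval > maxInterval {
		interval = maxInterval
	}
	return lastSuccessfulVisit.Add(interval)
}

func main() {
	first := time.Now().Add(-6 * time.Hour)
	last := time.Now()
	fmt.Println("next visit due at:", nextVisitDueAt(first, last))
}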

Command line help page:

NAME:
   nebula monitor - Monitors the network by periodically dialing previously crawled peers.

USAGE:
   nebula monitor [command options] [arguments...]

OPTIONS:
   --workers value  How many concurrent workers should dial peers. (default: 1000) [$NEBULA_MONITOR_WORKER_COUNT]
   --help, -h       show help

resolve

The resolve sub-command goes through all multi addresses that are present in the database and resolves them to their respective IP addresses. A single multi address can map to multiple IP addresses due to, e.g., the dnsaddr protocol. Further, it queries MaxMind's GeoLite2 database to extract country information about the IP addresses and UdgerDB to detect datacenters. The command saves all information alongside the resolved addresses.
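
For illustration, a country lookup against a locally downloaded GeoLite2 database could look like the sketch below. It uses the github.com/oschwald/geoip2-golang package, a common Go reader for MaxMind databases; this is not necessarily the exact code path Nebula takes, and the database path and IP address are placeholders.

package main

import (
	"fmt"
	"log"
	"net"

	"github.com/oschwald/geoip2-golang" // one common Go reader for MaxMind DBs
)

func main() {
	// Path to a locally downloaded GeoLite2 country database (placeholder path).
	db, err := geoip2.Open("GeoLite2-Country.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Look up the country for a resolved IP address (placeholder address).
	ip := net.ParseIP("8.8.8.8")
	record, err := db.Country(ip)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("country:", record.Country.IsoCode)
}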

Command line help page:

NAME:
   nebula resolve - Resolves all multi addresses to their IP addresses and geo location information

USAGE:
   nebula resolve [command options] [arguments...]

OPTIONS:
   --udger-db value    Location of the Udger database v3 [$NEBULA_RESOLVE_UDGER_DB]
   --batch-size value  How many database entries should be fetched at each iteration (default: 100) [$NEBULA_RESOLVE_BATCH_SIZE]
   --help, -h          show help (default: false)

Development

To develop this project, you need Go 1.19 and the following tools:

To install the necessary tools you can run make tools. This uses the go install command to download and install the tools into your $GOPATH/bin directory, so make sure that directory is in your $PATH environment variable.

Database

You need a running postgres instance to persist and/or read the crawl results. Run make database or use the following command to start a local instance of postgres:

docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=nebula_test -e POSTGRES_DB=nebula_test --name nebula_test_db postgres:14

Info: You can use the crawl sub-command with the global --dry-run option that skips any database operations or store the results as JSON files with the --json-out flag.

The default database settings for local development are:

Name     = "nebula_test"
Password = "password"
User     = "nebula_test"
Host     = "localhost"
Port     = 5432

Migrations are applied automatically when nebula starts and successfully establishes a database connection.

To run them manually you can run:

# Up migrations
make migrate-up

# Down migrations
make migrate-down

# Generate the ORM with SQLBoiler
make models # runs: sqlboiler
# This will update all files in the `pkg/models` directory.
# Create new migration
migrate create -ext sql -dir pkg/db/migrations -seq some_migration_name

Tests

To run the tests you need a running test database instance:

make database
make test

Release Checklist

  • Merge everything into main
  • Create a new tag with the new version
  • Push tag to GitHub

This will trigger the goreleaser.yml workflow, which creates a new draft release on GitHub.

Related Efforts

Demo

The following presentation shows ways to use Nebula by showcasing crawls of the Amino, Celestia, and Ethereum DHTs:

Nebula: A Network Agnostic DHT Crawler - Dennis Trautwein

Maintainers

@dennis-tra.

Contributing

Feel free to dive in! Open an issue or submit PRs.

Support

It would really make my day if you supported this project through Buy Me A Coffee.

Other Projects

You may be interested in one of my other projects:

  • pcp - Command line peer-to-peer data transfer tool based on libp2p.
  • image-stego - A novel approach to image manipulation detection. Steganography-based image integrity: Merkle tree nodes are embedded into image chunks so that each chunk's integrity can be verified on its own.

License

Apache License Version 2.0 ยฉ Dennis Trautwein

nebula's People

Contributors

cortze, dennis-tra, guillaumemichel, iand, ja7ad, kasteph, weiihann


nebula's Issues

bump quic-go to 0.37+ for go 1.21+

nebula crawler no longer compiles on latest go

go install github.com/dennis-tra/nebula-crawler/cmd/nebula@latest
[...]
# github.com/quic-go/quic-go/internal/qtls
.go/pkg/mod/github.com/quic-go/[email protected]/internal/qtls/go121.go:5:13: cannot use "The version of quic-go you're using can't be built on Go 1.21 yet. For more details, please see https://github.com/quic-go/quic-go/wiki/quic-go-and-Go-versions." (untyped string constant "The version of quic-go you're using can't be built on Go 1.21 yet. F...) as int value in variable declaration

ERRO[0000] error: flag provided but not defined: -json-out

Hi! Running nebula crawl --json-out plus a directory, I get the above-mentioned response (ERRO[0000] error: flag provided but not defined: -json-out). Any ideas? Thanks!
++ I also tried postgres to avoid JSON. After running the docker image I get the following error: creating crawl in db: models: unable to insert into crawls: pq: relation "crawls" does not exist, and from the postgres side:
2023-10-11 08:47:04.973 UTC [70] ERROR: relation "crawls" does not exist at character 13

Use proper user agent

Libp2p allows configuring a user agent via libp2p.UserAgent(...).

I'm thinking of using nebula-crawler/{version}, where {version} should come from a central source.
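
For reference, a minimal sketch of setting a custom user agent with go-libp2p could look like this; the version string here is just a placeholder:

package main

import (
	"log"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// Minimal sketch: configure a custom user agent on a libp2p host.
	// "nebula-crawler/dev" is a placeholder version string.
	h, err := libp2p.New(libp2p.UserAgent("nebula-crawler/dev"))
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()
	log.Println("host created with peer ID", h.ID())
}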

Check existing data when starting a crawl to not mix data from different networks

Imagine you're running the crawler for the IPFS network for some time. Then you want to start crawling the FILECOIN network as well and experiment around. This could easily lead to FILECOIN data ending up in the same database as the IPFS data. This could be avoided by checking, prior to each crawl, which network was actually crawled before.

postgres insert error

"Could not write dial result" alive=true dialDur=7.714151764s dialerID=dialer-406 error="pq: range lower bound must be less than or equal to range upper bound" remoteID="<peer.ID 12*WRRbFG>"

Support crawler substrate on polkadot-v0.9.41

Hello

Currently, I am working on Substrate. Here is the node: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fdev.gsviec.com#/explorer/node


And I tried to add this code:

BootstrapPeersCustom = []string{
 "/ip4/18.223.218.133/tcp/30333/p2p/12D3KooWSjZH2ETsHsPsfqFuCjbtsEDFrT9L8VeG647iTyMJ3aiC",
}

Then the result after running is:

{"PeerID":"12D3KooWSjZH2ETsHsPsfqFuCjbtsEDFrT9L8VeG647iTyMJ3aiC","Maddrs":["/ip4/18.223.218.133/tcp/30333"],"Protocols":["/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/transactions/1","/ipfs/ping/1.0.0","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/kad","/sup/kad","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/sync/2","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/sync/warp","/sup/sync/warp","/sup/transactions/1","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/grandpa/1","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/state/2","/sup/block-announces/1","/paritytech/grandpa/1","/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/block-announces/1","/sup/sync/2","/sup/state/2","/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/light/2","/sup/light/2"],"AgentVersion":"Substrate Node/v4.0.0-dev-a2bd00b67e2 (Node-Thien)","ConnectDuration":"1.376036919s","CrawlDuration":"1.903481279s","VisitStartedAt":"2023-09-05T09:49:30.632198132Z","VisitEndedAt":"2023-09-05T09:49:32.535679338Z","ConnectErrorStr":"","CrawlErrorStr":"unknown","IsExposed":null}

But there are more peers than that; is the result correct?

Note

time="2023-09-05T09:49:30Z" level=info msg="Starting Nebula crawler..."
time="2023-09-05T09:49:30Z" level=info msg="Initializing JSON client" out=a
time="2023-09-05T09:49:30Z" level=info msg="Starting to crawl the network"
time="2023-09-05T09:49:30Z" level=info msg="Initializing crawl..."
time="2023-09-05T09:49:32Z" level=info msg="Handled crawl result from worker crawler-14" crawled=1 crawlerID=crawler-14 error="getting closest peer with CPL 8: protocols not supported: [/sub/kad]" inCrawlQueue=0 isDialable=false remoteID=12D3KooWSjZH2ETs
time="2023-09-05T09:49:32Z" level=info msg="Persisted result from worker crawler-14" duration="563.683ยตs" persisted=1 persisterID=persister-07 remoteID=12D3KooWSjZH2ETs success=true
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-01
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-03
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-05
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-07
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-09
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-11
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-13
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-15
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-17
time="2023-09-05T09:49:32Z" level=info msg="Waiting for persister to stop" persisterID=persister-19
time="2023-09-05T09:49:32Z" level=info msg="Persisting crawl result..."
time="2023-09-05T09:49:32Z" level=info msg="Persisting crawl properties..."
time="2023-09-05T09:49:32Z" level=info msg="Logging crawl results..."
time="2023-09-05T09:49:32Z" level=info
time="2023-09-05T09:49:32Z" level=info
time="2023-09-05T09:49:32Z" level=info msg=Agent count=1 value="Substrate Node/v4.0.0-dev-a2bd00b67e2 (Node-Thien)"
time="2023-09-05T09:49:32Z" level=info
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/state/2
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/paritytech/grandpa/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/ipfs/id/push/1.0.0
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/kad
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/kad
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/sync/2
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/sync/warp
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/sync/warp
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/transactions/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/block-announces/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/sync/2
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/block-announces/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/state/2
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/transactions/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/ipfs/ping/1.0.0
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/grandpa/1
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/ipfs/id/1.0.0
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/1ddce31fac84071085205b3a01888178bba5b08cb30a668a732c9d13939081c5/light/2
time="2023-09-05T09:49:32Z" level=info msg=Protocol count=1 value=/sup/light/2
time="2023-09-05T09:49:32Z" level=info
time="2023-09-05T09:49:32Z" level=info msg="Finished crawl" crawlDuration=1.910313629s crawledPeers=1 dialablePeers=1 undialablePeers=0

Add CSV output option

If the --neighbors and, e.g., a --csv flag are given, a neighbors adjacency list should be generated after the crawl. Format:

peer_id_1,neighbor_1
peer_id_1,neighbor_2
peer_id_1,neighbor_3
peer_id_1,neighbor_4
peer_id_1,neighbor_5
...
peer_id_2,neighbor_x
peer_id_2,neighbor_y
peer_id_2,neighbor_z
...

Is it possible to continue monitoring nodes after one unsuccessful dial?

Hello, it seems like after one unsuccessful dial attempt, the monitor will mark a node as unreachable and will not attempt any further dials. Is it possible to continue monitoring the disconnected nodes for a period of time before considering them permanently unreachable? In this way, we could gather some data on the disconnected nodes to analyze the offline pattern of some server nodes (for example, some nodes may have regular offline times for various reasons).
