
natscli's Introduction

The NATS Command Line Interface

A command line utility to interact with and manage NATS.

This utility replaces various past tools that were named in the form nats-sub and nats-pub, adds several new capabilities and supports full JetStream management.

Features

  • JetStream management
  • JetStream data and configuration backup
  • Message publish and subscribe
  • Service requests and creation
  • Benchmarking and Latency testing
  • Super Cluster observation
  • Configuration context maintenance
  • NATS ecosystem schema registry

Installation

Releases are published to GitHub, where zip, rpm and deb packages for various operating systems can be found.

Installation via go install

The nats cli can be installed directly via go install. To install the latest version:

go install github.com/nats-io/natscli/nats@latest

To install a specific release:

go install github.com/nats-io/natscli/nats@<version>

OS X installation via Homebrew

On OS X, brew can be used to install the latest version:

brew tap nats-io/nats-tools
brew install nats-io/nats-tools/nats

Arch Linux installation via yay

For Arch users there is an AUR package that you can install with:

yay natscli

Installation from the shell

The following script will install the latest version of the nats cli on Linux and OS X:

curl -sf https://binaries.nats.dev/nats-io/natscli/nats@latest | sh
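After installation you can verify that the binary is on your PATH:

nats --version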

Nightly docker images

Nightly builds are included in the synadia/nats-server:nightly Docker images.

Configuration Contexts

The nats CLI supports multiple named configurations; for the rest of this document we'll interact with demo.nats.io. To enable this we'll create a demo configuration and set it as the default.

First we add a context that captures the default localhost server.

nats context add localhost --description "Localhost"

Output

NATS Configuration Context "localhost"

  Description: Localhost
  Server URLs: nats://127.0.0.1:4222

Next we add a context for demo.nats.io:4222 and select it as the default.

nats context add nats --server demo.nats.io:4222 --description "NATS Demo" --select

Output

NATS Configuration Context "nats"

  Description: NATS Demo
  Server URLs: demo.nats.io:4222

These are the known contexts; the * indicates the default:

nats context ls

Output

Known contexts:

   localhost           Localhost
   nats*               NATS Demo

The context is selected as default; use nats context --help to see how to add, remove and edit contexts.

To switch to another context we can use:

nats ctx select localhost

To switch back to the previous context, we can use:

nats ctx -- -

Configuration file

The nats CLI stores contexts in ~/.config/nats/context as JSON documents. You can find the description and expected values for this configuration file by running nats --help and looking at the global flags.
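A context document is a small JSON file. The exact fields vary by CLI version, but it looks roughly like this (an illustrative sketch, not an exhaustive field list):

{
  "description": "NATS Demo",
  "url": "demo.nats.io:4222",
  "user": "",
  "password": "",
  "creds": ""
}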

JetStream management

For full information on managing JetStream please refer to the JetStream Documentation

As of nats-server v2.2.0 JetStream is GA.
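As a quick taste, a stream can be created non-interactively using flags (a sketch built from flags shown elsewhere in this README; depending on the CLI version you may still be prompted for options left unset):

nats stream add ORDERS --subjects "ORDERS.>" --storage file --retention limits --ack --max-msgs=-1 --max-bytes=-1 --max-age 1d --discard old --dupe-window 2m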

Publish and Subscribe

The nats CLI can publish messages and subscribe to subjects.

Basic Behaviours

We will subscribe to the cli.demo subject:

nats sub cli.demo 

Output

12:30:25 Subscribing on cli.demo

We can now publish messages to the cli.demo subject.

First we publish a single message:

nats pub cli.demo "hello world" 

Output

12:31:20 Published 11 bytes to "cli.demo"

Next we publish 5 messages with a counter and timestamp in the format message 5 @ 2020-12-03T12:33:18+01:00:

nats pub cli.demo "message {{.Count}} @ {{.TimeStamp}}" --count=5

Output

12:33:17 Published 33 bytes to "cli.demo"
12:33:17 Published 33 bytes to "cli.demo"
12:33:17 Published 33 bytes to "cli.demo"
12:33:18 Published 33 bytes to "cli.demo"
12:33:18 Published 33 bytes to "cli.demo"

We can also publish messages read from STDIN:

echo hello|nats pub cli.demo 

Output

12:34:15 Reading payload from STDIN
12:34:15 Published 6 bytes to "cli.demo"

Finally, NATS supports HTTP-style headers and the CLI behaves like curl:

nats pub cli.demo 'hello headers' -H Header1:One -H Header2:Two 

Output

12:38:44 Published 13 bytes to "cli.demo"

The receiver will show:

nats sub cli.demo  

Output

[#47] Received on "cli.demo"
Header1: One
Header2: Two

hello headers

Match Requests and Replies

We can print requests and their matching replies together:

nats sub --match-replies cli.demo

Output

[#48] Received on "cli.demo" with reply "_INBOX.12345"

[#48] Matched reply on "_INBOX.12345"

When combined with --dump, replies are stored alongside the requests:

nats sub --match-replies --dump subject.name

Output

X.json
X_reply.json

JetStream

When receiving messages from a JetStream push consumer, messages can be acknowledged on receipt by passing --ack; the message metadata is also shown:

nats sub js.out.testing --ack 

Output

12:55:23 Subscribing on js.out.testing with acknowledgement of JetStream messages
[#1] Received JetStream message: consumer: TESTING > TAIL / subject: js.in.testing / delivered: 1 / consumer seq: 568 / stream seq: 2638 / ack: true
test JS message

Queue Groups

When subscribers join a Queue Group, messages are randomly distributed within the group. Perform the following subscribe in 2 or more shells and then publish messages using some of the methods shown above; each message will only be received by one of the subscribers.

nats sub cli.demo --queue=Q1
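Then, in another shell, publish a burst of messages and watch them being spread over the group, reusing the counter template from earlier:

nats pub cli.demo "job {{.Count}}" --count=10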

Service Requests and Creation

NATS supports an RPC mechanism where a service receives requests and replies with data in response.

nats reply 'cli.weather.>' "Weather Service" 

Output

12:43:28 Listening on "cli.weather.>" in group "NATS-RPLY-22"

In another shell we can send a request to this service:

nats request "cli.weather.london" '' 

Output

12:46:34 Sending request on "cli.weather.london"
12:46:35 Received on "_INBOX.BJoZpwsshQM5cKUj8KAkT6.HF9jslpP" rtt 404.76854ms
Weather Service

This shows that the service round trip was 404ms, and we can see the response Weather Service.

To make this a bit more interesting we can interact with the wttr.in web service:

nats reply 'cli.weather.>' --command "curl -s wttr.in/{{2}}?format=3" 

Output

12:47:03 Listening on "cli.weather.>" in group "NATS-RPLY-22"

We can perform the same request again:

nats request "cli.weather.{london,newyork}" '' --raw 

Output

london: 🌦 +7°C
newyork: ☀️ +2°C

Here the nats CLI parses the subject and calls curl, replacing {{2}} with the body of the 2nd subject token - {london,newyork}.

Translating message data using a converter command

In addition to the raw output of messages via nats sub and nats stream view, you can also translate the message data by running it through a command.

The command receives the message data as raw bytes through STDIN, and its output becomes the shown output for the message. Additionally, the subject can be passed to the command by using {{Subject}} in the arguments of the translation command.

Examples for using the translation feature:

Here we use the jq tool to format our json message payload into a more readable format:

We subscribe to a subject that will receive json data.

nats sub --translate 'jq .' cli.json

Now we publish some example data.

nats pub cli.json '{"task":"demo","duration":60}'

The output will show the message formatted:

23:54:35 Subscribing on cli.json
[#1] Received on "cli.json"
{
  "task": "demo",
  "duration": 60
}

Another example is creating hex dumps from any message to avoid terminal corruption.

By changing the subscription into:

nats sub --translate 'xxd' cli.json

We will get the following output for the same published message:

00:02:56 Subscribing on cli.json
[#1] Received on "cli.json"
00000000: 7b22 7461 736b 223a 2264 656d 6f22 2c22  {"task":"demo","
00000010: 6475 7261 7469 6f6e 223a 3630 7d         duration":60}

Examples for using the translation feature with template:

A somewhat artificial example using the subject as argument would be:

nats sub --translate "sed 's/\(.*\)/{{Subject}}: \1/'" cli.json

Output

00:22:19 Subscribing on cli.json
[#1] Received on "cli.json"
cli.json: {"task":"demo","duration":60}

The translation feature makes it possible to write specialized or universal translators to aid in debugging messages in streams or core NATS.

Benchmarking and Latency Testing

Benchmarking and latency testing is a key requirement for evaluating the production readiness of your NATS network.

Benchmarking

Here we'll run these benchmarks against a local server instead of demo.nats.io.

nats context select localhost 

Output

NATS Configuration Context "localhost"

  Description: Localhost
  Server URLs: nats://127.0.0.1:4222

We can benchmark core NATS publishing performance; here we publish 10 million messages from 5 concurrent publishers. By default messages are published as quickly as possible without any acknowledgement or confirmation:

nats bench test --msgs=10000000 --pub 5 

Output

01:30:14 Starting benchmark [msgs=10,000,000, msgsize=128 B, pubs=5, subs=0, js=false, stream=benchstream  storage=memory, syncpub=false, pubbatch=100, jstimeout=30s, pull=false, pullbatch=100, request=false, reply=false, noqueue=false, maxackpending=-1, replicas=1, purge=false]
Finished      0s [================================================] 100%
Finished      0s [================================================] 100%
Finished      0s [================================================] 100%
Finished      0s [================================================] 100%
Finished      0s [================================================] 100%

Pub stats: 14,047,987 msgs/sec ~ 1.67 GB/sec
 [1] 3,300,540 msgs/sec ~ 402.90 MB/sec (2000000 msgs)
 [2] 3,306,601 msgs/sec ~ 403.64 MB/sec (2000000 msgs)
 [3] 3,296,538 msgs/sec ~ 402.41 MB/sec (2000000 msgs)
 [4] 2,813,752 msgs/sec ~ 343.48 MB/sec (2000000 msgs)
 [5] 2,811,227 msgs/sec ~ 343.17 MB/sec (2000000 msgs)
 min 2,811,227 | avg 3,105,731 | max 3,306,601 | stddev 239,453 msgs

Adding --sub 2 will start two subscribers on the same subject and measure the rate of messages:

nats bench test --msgs=10000000 --pub 5 --sub 2 

Output

...
01:30:52 Starting benchmark [msgs=10,000,000, msgsize=128 B, pubs=5, subs=2, js=false, stream=benchstream  storage=memory, syncpub=false, pubbatch=100, jstimeout=30s, pull=false, pullbatch=100, request=false, reply=false, noqueue=false, maxackpending=-1, replicas=1, purge=false]
01:30:52 Starting subscriber, expecting 10,000,000 messages
01:30:52 Starting subscriber, expecting 10,000,000 messages
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%
Finished      6s [================================================] 100%

NATS Pub/Sub stats: 4,906,104 msgs/sec ~ 598.89 MB/sec
 Pub stats: 1,635,428 msgs/sec ~ 199.64 MB/sec
  [1] 328,573 msgs/sec ~ 40.11 MB/sec (2000000 msgs)
  [2] 328,147 msgs/sec ~ 40.06 MB/sec (2000000 msgs)
  [3] 327,411 msgs/sec ~ 39.97 MB/sec (2000000 msgs)
  [4] 327,318 msgs/sec ~ 39.96 MB/sec (2000000 msgs)
  [5] 327,283 msgs/sec ~ 39.95 MB/sec (2000000 msgs)
  min 327,283 | avg 327,746 | max 328,573 | stddev 520 msgs
 Sub stats: 3,271,233 msgs/sec ~ 399.32 MB/sec
  [1] 1,635,682 msgs/sec ~ 199.67 MB/sec (10000000 msgs)
  [2] 1,635,616 msgs/sec ~ 199.66 MB/sec (10000000 msgs)
  min 1,635,616 | avg 1,635,649 | max 1,635,682 | stddev 33 msgs

JetStream testing can be done by adding the --js flag. You can, for example, first measure the speed of publishing into a stream:

nats bench js.bench --js --pub 2 --msgs 1000000 --purge 

Output

01:37:36 Starting benchmark [msgs=1,000,000, msgsize=128 B, pubs=2, subs=0, js=true, stream=benchstream  storage=memory, syncpub=false, pubbatch=100, jstimeout=30s, pull=false, pullbatch=100, request=false, reply=false, noqueue=false, maxackpending=-1, replicas=1, purge=true]
01:37:36 Purging the stream
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%

Pub stats: 415,097 msgs/sec ~ 50.67 MB/sec
 [1] 207,907 msgs/sec ~ 25.38 MB/sec (500000 msgs)
 [2] 207,572 msgs/sec ~ 25.34 MB/sec (500000 msgs)
 min 207,572 | avg 207,739 | max 207,907 | stddev 167 msgs

And then you can, for example, measure the speed of receiving (i.e. replaying) the messages from the stream using ordered push consumers:

nats bench js.bench --js --sub 4 --msgs 1000000 

Output

01:40:05 Starting benchmark [msgs=1,000,000, msgsize=128 B, pubs=0, subs=4, js=true, stream=benchstream  storage=memory, syncpub=false, pubbatch=100, jstimeout=30s, pull=false, pullbatch=100, request=false, reply=false, noqueue=false, maxackpending=-1, replicas=1, purge=false]
01:40:05 Starting subscriber, expecting 1,000,000 messages
01:40:05 Starting subscriber, expecting 1,000,000 messages
01:40:05 Starting subscriber, expecting 1,000,000 messages
01:40:05 Starting subscriber, expecting 1,000,000 messages
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%

Sub stats: 1,522,920 msgs/sec ~ 185.90 MB/sec
 [1] 382,739 msgs/sec ~ 46.72 MB/sec (1000000 msgs)
 [2] 382,772 msgs/sec ~ 46.73 MB/sec (1000000 msgs)
 [3] 382,407 msgs/sec ~ 46.68 MB/sec (1000000 msgs)
 [4] 381,060 msgs/sec ~ 46.52 MB/sec (1000000 msgs)
 min 381,060 | avg 382,244 | max 382,772 | stddev 698 msgs

Similarly, you can benchmark synchronous request-reply interactions using the --request and --reply flags. For example, you can first start one (or more) repliers:

nats bench test --sub 2 --reply

And then run a benchmark with one (or more) synchronous requesters:

nats bench test --pub 10 --request  

Output

03:04:56 Starting benchmark [msgs=100,000, msgsize=128 B, pubs=10, subs=0, js=false, stream=benchstream  storage=memory, syncpub=false, pubbatch=100, jstimeout=30s, pull=false, pullbatch=100, request=true, reply=false, noqueue=false, maxackpending=-1, replicas=1, purge=false]
03:04:56 Benchmark in request-reply mode
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%
Finished      2s [================================================] 100%

Pub stats: 40,064 msgs/sec ~ 4.89 MB/sec
 [1] 4,045 msgs/sec ~ 505.63 KB/sec (10000 msgs)
 [2] 4,031 msgs/sec ~ 503.93 KB/sec (10000 msgs)
 [3] 4,034 msgs/sec ~ 504.37 KB/sec (10000 msgs)
 [4] 4,031 msgs/sec ~ 503.92 KB/sec (10000 msgs)
 [5] 4,022 msgs/sec ~ 502.85 KB/sec (10000 msgs)
 [6] 4,028 msgs/sec ~ 503.59 KB/sec (10000 msgs)
 [7] 4,025 msgs/sec ~ 503.22 KB/sec (10000 msgs)
 [8] 4,028 msgs/sec ~ 503.59 KB/sec (10000 msgs)
 [9] 4,025 msgs/sec ~ 503.15 KB/sec (10000 msgs)
 [10] 4,018 msgs/sec ~ 502.28 KB/sec (10000 msgs)
 min 4,018 | avg 4,028 | max 4,045 | stddev 7 msgs

There are numerous other flags that can be set to configure the size of messages, the use of push or pull JetStream consumers and much more; see nats bench --help.
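For example, a sketch combining a few of them, pull consumers via --pull and a larger payload via --size (flag availability may vary between versions):

nats bench js.bench --js --pull --sub 2 --msgs 1000000 --size 512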

Latency

Latency is the time it takes for a message to cross your network. With the nats CLI you can connect a publisher and a subscriber to your NATS network and measure the latency between them.

nats latency --server-b localhost:4222 --rate 500000  

Output

==============================
Pub Server RTT : 64µs
Sub Server RTT : 70µs
Message Payload: 8B
Target Duration: 5s
Target Msgs/Sec: 500000
Target Band/Sec: 7.6M
==============================
HDR Percentiles:
10:       57µs
50:       94µs
75:       122µs
90:       162µs
99:       314µs
99.9:     490µs
99.99:    764µs
99.999:   863µs
99.9999:  886µs
99.99999: 1.483ms
100:      1.483ms
==============================
Actual Msgs/Sec: 499990
Actual Band/Sec: 7.6M
Minimum Latency: 25µs
Median Latency : 94µs
Maximum Latency: 1.483ms
1st Sent Wall Time : 3.091ms
Last Sent Wall Time: 5.000098s
Last Recv Wall Time: 5.000168s

Various flags exist to adjust message size and target rates; see nats latency --help.
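For example, a run with a lower target rate, a larger payload and a longer window might look like this (a sketch; confirm the flag names against nats latency --help for your version):

nats latency --server-b localhost:4222 --rate 100000 --size 512 --duration 10s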

Super Cluster observation

NATS publishes a number of events and has a Request-Reply API that exposes a wealth of internal information about the state of the network.

For most of these features you will need a System Account enabled; most of these commands are run against that account.

I created a system context before running these commands and pass it to each command via --context.

Lifecycle Events

nats event --context system 

Output

Listening for Client Connection events on $SYS.ACCOUNT.*.CONNECT
Listening for Client Disconnection events on $SYS.ACCOUNT.*.DISCONNECT
Listening for Authentication Errors events on $SYS.SERVER.*.CLIENT.AUTH.ERR

[12:18:35] [puGCIK5UcWUxBXJ52q4Hti] Client Connection

   Server: nc1-c1
  Cluster: c1

   Client:
                 ID: 17
               User: one
               Name: NATS CLI Version development
            Account: one
    Library Version: 1.11.0  Language: go
               Host: 172.21.0.1

[12:18:35] [puGCIK5UcWUxBXJ52q4Hw8] Client Disconnection

   Reason: Client Closed
   Server: nc1-c1
  Cluster: c1

   Client:
                 ID: 17
               User: one
               Name: NATS CLI Version development
            Account: one
    Library Version: 1.11.0  Language: go
               Host: 172.21.0.1

   Stats:
      Received: 0 messages (0 B)
     Published: 1 messages (0 B)
           RTT: 1.551714ms

Here one can see a client connect and then disconnect shortly after; several other system events are supported.

If an account is running JetStream, the nats event tool can also be used to look at JetStream advisories by passing --js-metric and --js-advisory.
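For example, to watch only the JetStream advisories and metrics using the system context:

nats event --context system --js-metric --js-advisory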

These events are JSON messages and can be viewed raw using --json or in CloudEvents format with --cloudevent; finally, a short version of the messages can be shown:

nats event --short 

Output

Listening for Client Connection events on $SYS.ACCOUNT.*.CONNECT
Listening for Client Disconnection events on $SYS.ACCOUNT.*.DISCONNECT
Listening for Authentication Errors events on $SYS.SERVER.*.CLIENT.AUTH.ERR
12:20:58 [Connection] user: one cid: 19 in account one
12:20:58 [Disconnection] user: one cid: 19 in account one: Client Closed
12:21:00 [Connection] user: one cid: 20 in account one
12:21:00 [Disconnection] user: one cid: 20 in account one: Client Closed
12:21:00 [Connection] user: one cid: 21 in account one

Super Cluster Discovery and Observation

When a cluster or super cluster of NATS servers is configured with a system account, a wealth of information is available via internal APIs; the nats tool can interact with these and observe your servers.

A quick view of the available servers and your network RTT to each can be seen with nats server ping:

nats server ping 

Output

nc1-c1                                                       rtt=2.30864ms
nc3-c1                                                       rtt=2.396573ms
nc2-c1                                                       rtt=2.484994ms
nc3-c2                                                       rtt=2.549958ms
...

---- ping statistics ----
9 replies max: 3.00 min: 1.00 avg: 2.78

A general server overview can be seen with nats server list:

nats server list 

Output

+----------------------------------------------------------------------------------------------------------------------------+
|                                                      Server Overview                                                       |
+--------+------------+-----------+---------------+-------+------+--------+-----+---------+-----+------+--------+------------+
| Name   | Cluster    | IP        | Version       | Conns | Subs | Routes | GWs | Mem     | CPU | Slow | Uptime | RTT        |
+--------+------------+-----------+---------------+-------+------+--------+-----+---------+-----+------+--------+------------+
| nc1-c1 | c1         | localhost | 2.2.0-beta.34 | 1     | 97   | 2      | 2   | 13 MiB  | 0.0 | 0    | 5m29s  | 3.371675ms |
| nc2-c1 | c1         | localhost | 2.2.0-beta.34 | 0     | 97   | 2      | 2   | 13 MiB  | 0.0 | 0    | 5m29s  | 3.48287ms  |
| nc3-c1 | c1         | localhost | 2.2.0-beta.34 | 0     | 97   | 2      | 2   | 14 MiB  | 0.0 | 0    | 5m30s  | 3.57123ms  |
| nc1-c3 | c3         | localhost | 2.2.0-beta.34 | 0     | 96   | 2      | 2   | 15 MiB  | 0.0 | 0    | 5m29s  | 3.655548ms |
...
+--------+------------+-----------+---------------+-------+------+--------+-----+---------+-----+------+--------+------------+
|        | 3 Clusters | 9 Servers |               | 1     | 867  |        |     | 125 MiB |     | 0    |        |            |
+--------+------------+-----------+---------------+-------+------+--------+-----+---------+-----+------+--------+------------+

+----------------------------------------------------------------------------+
|                              Cluster Overview                              |
+---------+------------+-------------------+-------------------+-------------+
| Cluster | Node Count | Outgoing Gateways | Incoming Gateways | Connections |
+---------+------------+-------------------+-------------------+-------------+
| c1      | 3          | 6                 | 6                 | 1           |
| c3      | 3          | 6                 | 6                 | 0           |
| c2      | 3          | 6                 | 6                 | 0           |
+---------+------------+-------------------+-------------------+-------------+
|         | 9          | 18                | 18                | 1           |
+---------+------------+-------------------+-------------------+-------------+

Data from a specific server can be accessed using its server name or ID:

nats server info nc1-c1 

Output

Server information for nc1-c1 (NBNIKFCQZ3J6I7JDTUDHAH3Z3HOQYEYGZZ5HOS63BX47PS66NHPT2P72)

Process Details:

         Version: 2.2.0-beta.34
      Git Commit: 2e26d919
      Go Version: go1.14.12
      Start Time: 2020-12-03 12:18:00.423780567 +0000 UTC
          Uptime: 10m1s

Connection Details:

   Auth Required: true
    TLS Required: false
            Host: localhost:10000
     Client URLs: localhost:10000
                  localhost:10002
                  localhost:10001

Limits:

        Max Conn: 65536
        Max Subs: 0
     Max Payload: 1.0 MiB
     TLS Timeout: 2s
  Write Deadline: 10s

Statistics:

       CPU Cores: 2 1.00%
          Memory: 13 MiB
     Connections: 1
   Subscriptions: 0
            Msgs: 240 in 687 out
           Bytes: 151 KiB in 416 KiB out
  Slow Consumers: 0

Cluster:

            Name: c1
            Host: 0.0.0.0:6222
            URLs: nc1:6222
                  nc2:6222
                  nc3:6222

Super Cluster:

            Name: c1
            Host: 0.0.0.0:7222
        Clusters: c1
                  c2
                  c3

In addition to this, various reports can be generated using nats server report; these allow one to list all connections and subscriptions across the entire cluster, with filtering to limit the results by account and more.

Additional raw information in JSON format can be retrieved using the nats server request commands.
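For example, to report on all connections using the system context (subcommand names may differ slightly between versions, see nats server report --help):

nats server report connections --context system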

Schema Registry

We are adopting JSON Schema to describe the core data formats of events and advisories - as shown by nats event. Additionally, all interactions with the JetStream API are documented using the same format.

These schemas can be used with tools like QuickType to generate stubs for various programming languages.
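A sketch of that workflow, assuming the schema is saved as raw JSON and the quicktype CLI is installed (the quicktype flags shown are illustrative, check quicktype --help):

nats schema info io.nats.jetstream.api.v1.stream_create_request > stream_create_request.json
quicktype -s schema -l go -o stream_create_request.go stream_create_request.json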

The nats CLI allows you to view these schemas and validate documents using these schemas.

nats schema ls 

Output

Matched Schemas:

  io.nats.jetstream.advisory.v1.api_audit
  io.nats.jetstream.advisory.v1.consumer_action
  io.nats.jetstream.advisory.v1.max_deliver
...

The list of schemas can be limited using a regular expression; try nats schema ls request to see all API requests.

Schemas can be viewed in their raw JSON or YAML formats using nats schema info io.nats.jetstream.advisory.v1.consumer_action; these schemas include descriptions of each field and more.

Finally, if you are interacting with the API using JSON request messages constructed in a language that is not supported by our own management libraries, you can use this tool to validate your messages:

nats schema validate io.nats.jetstream.api.v1.stream_create_request request.json 

Output

Validation errors in request.json:

  retention: retention must be one of the following: "limits", "interest", "workqueue"
  (root): Must validate all the schemas (allOf)

Here I validate request.json against the schema that describes the API to create Streams; the validation indicates that I have an incorrect value in the retention field.
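For reference, a hypothetical request.json that would trigger this error might look like the following, with an invalid retention value and other fields omitted for brevity:

{
  "name": "ORDERS",
  "subjects": ["ORDERS.>"],
  "retention": "normal",
  "storage": "file",
  "num_replicas": 1
}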

natscli's People

Contributors

1995parham, boris-ilijic, bruth, bwerthmann, codegangsta, colinsullivan1, davedotdev, derekcollison, dselans, dsidirop, erhhung, gcolliso, jarema, jnmoyne, kozlovic, masudur-rahman, matthiashanel, mcp5, mdawar, miraculli, mprimi, neilalexander, philpennock, ramonberrutti, rickardgranberg, ricky-luna, ripienaar, samuelattwood, scottf, wallyqs


natscli's Issues

nats-box is out of date

The synadia/nats-box image seems to be on a 6+ month old version of this tool, so I installed the latest release manually on my machine to have the new features (like the experimental KV store). Just figured I would mention this here in case nats-box is still being maintained.

crash when adding a stream mirror

nats stream add --mirror test
? Stream Name backup
? Storage backend file
? Retention Policy Limits
? Discard Policy Old
? Stream Messages Limit -1
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
main.(*streamCmd).prepareConfig(0xc0001b6b40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/Users/matthiashanel/repos/natscli/nats/stream_command.go:1511 +0x1d6f
main.(*streamCmd).addAction(0xc0001b6b40, 0xc00013e6c0, 0x0, 0x0)
	/Users/matthiashanel/repos/natscli/nats/stream_command.go:1801 +0x65
gopkg.in/alecthomas/kingpin%2ev2.(*actionMixin).applyActions(0xc0003223d8, 0xc00013e6c0, 0x0, 0x0)
	/Users/matthiashanel/go/pkg/mod/gopkg.in/alecthomas/[email protected]/actions.go:28 +0x6d
gopkg.in/alecthomas/kingpin%2ev2.(*Application).applyActions(0xc000176690, 0xc00013e6c0, 0x0, 0x0)
	/Users/matthiashanel/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:557 +0xdf
gopkg.in/alecthomas/kingpin%2ev2.(*Application).execute(0xc000176690, 0xc00013e6c0, 0xc0003463c0, 0x2, 0x2, 0x0, 0x0, 0x0, 0xc000011e01)
	/Users/matthiashanel/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:390 +0x95
gopkg.in/alecthomas/kingpin%2ev2.(*Application).Parse(0xc000176690, 0xc0000200b0, 0x4, 0x4, 0x1, 0xc000010a88, 0x0, 0x1)
	/Users/matthiashanel/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:222 +0x228
main.main()
	/Users/matthiashanel/repos/natscli/nats/main.go:115 +0x1926

Support of NATS Streaming Server

I couldn't find a concrete answer in the docs about whether the nats CLI tool supports the NATS Streaming Server (the precursor to JetStream).

Some of the docs mention using the synadia/nats-box docker image with the nats commands on a NATS & NATS Streaming Server setup, and the nats sub and nats pub commands seemed to work for me. However, nats server info, nats server ls, nats stream ls etc. all time out for me.

Is there a document somewhere that lists what works with the NATS Streaming Server and what doesn't? Or did I miss a mention somewhere that the CLI doesn't support the NATS Streaming Server?

Can't `go get @0.0.24`

Hi there!

I'm trying to install natscli using this command:

$ go get github.com/nats-io/natscli@0.0.24

And it fails with this message:

go get: github.com/nats-io/natscli@none updating to
        github.com/nats-io/natscli@v0.0.24 requires
        github.com/codahale/hdrhistogram@<version>: parsing go.mod:
        module declares its path as: github.com/HdrHistogram/hdrhistogram-go
                but was required as: github.com/codahale/hdrhistogram

Am I doing something wrong?

Allow TLS verification to be skipped

Hello,

In production we're using valid TLS certificates only for securing the connection, not validating the client. However, in test I'm using self-signed certificates for this. There doesn't appear to be a way to stop the NATS client from attempting to verify the TLS certificate of the server. It would be nice to have this option.

Thanks

How to backup all streams of a NATS Streaming Server?

Problem:

I have to backup all the streams of my NATS streaming server. I don't know what streams are there. I want to backup all the streams into a single snapshot. Currently, we can backup only a single stream using the following command.

$ nats stream backup <stream-name> /backup/dir/<stream-name>.tgz

What I have tried so far:

I have tried providing a wildcard instead of <stream-name>. It does not work.

$ nats stream backup * /backup/dir/backup.tgz
nats: error: "*" is not a valid stream name, try --help

Possible Workaround:

At first, I can list all the streams using the nats str ls command. Then, I can loop through all the streams and back them up individually, as sketched below.
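For example, something like this (a sketch, assuming nats str ls -j emits a JSON array of stream names and jq is available):

for s in $(nats str ls -j | jq -r '.[]'); do
  nats stream backup "$s" "/backup/dir/$s.tgz"
done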

However, this does not satisfy my requirement, as I want to backup all the streams into a single snapshot. My snapshot should represent the complete state of the NATS streaming server, not just a single stream.

codahale/hdrhistogram repo url has been transferred under the github HdrHistogram umbrella

Problem

The codahale/hdrhistogram repo has been transferred under the github HdrHistogram umbrella with help from the original author in Sept 2020 (new repo url https://github.com/HdrHistogram/hdrhistogram-go). The main reasons are to group all implementations under the same roof and to enable more active contribution from the community, as the original repository was archived several years ago.

The dependency URL should be modified to point to the new repository URL. The tag "v0.9.0" was applied at the point of transfer and will reflect the exact code that was frozen in the original repository.

If you are using Go modules, you can update to the exact point of transfer using the @v0.9.0 tag in your go get command.

go mod edit -replace github.com/codahale/hdrhistogram=github.com/HdrHistogram/hdrhistogram-go@v0.9.0

Performance Improvements

From the point of transfer up until now (Mon 16 Aug 2021), we've released 3 versions that aim to support the standard HdrHistogram serialization/exposition formats and deeply improve READ performance.
We recommend updating to the latest version.

server passwd output is misaligned

$ nats server passwd
? Enter password [? for help] **********************
                              ? Reenter password [? for help] **********************

$2a$11$DAFPlebnj555dIvBLDk9D.Zi48QcXbdRd8JhsfpApgG4Hn9FO4cjC

This is what I see using Bash 5.1.4 and Gnome Terminal 3.38.2.

non clustered JS are presented differently from clustered JS

When I issue this command, I don't necessarily know if the server is clustered or not, and thus whether or not to expect a *. So it's hard to say what I'm looking at.

This may be as easy as adding the *, but maybe the returned monitoring data needs to change to look like clustered (with size 1).

nats -s nats://admin:admin@localhost:4111 server report jetstream 1
+-------------------------------------------------------------------------------------------------------+
|                                           JetStream Summary                                           |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
| Server        | Cluster | Streams | Consumers | Messages | Bytes | Memory | File  | API Req | API Err |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
| leaf-server-1 | leaf    | 2       | 0         | 6        | 228 B | 0 B    | 228 B | 1       | 0       |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
|               |         | 2       | 0         | 6        | 228 B | 0 B    | 228 B | 1       | 0       |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+

This should show a leader in the leaf cluster as well:

nats -s nats://admin:admin@localhost:4222 server report jetstream 4
+-------------------------------------------------------------------------------------------------------+
|                                           JetStream Summary                                           |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
| Server        | Cluster | Streams | Consumers | Messages | Bytes | Memory | File  | API Req | API Err |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
| leaf-server-1 | leaf    | 2       | 0         | 4        | 152 B | 0 B    | 152 B | 8       | 0       |
| hub-server-3* | hub     | 1       | 0         | 2        | 76 B  | 0 B    | 76 B  | 4       | 1       |
| hub-server-2  | hub     | 1       | 0         | 2        | 76 B  | 0 B    | 76 B  | 1       | 0       |
| hub-server-1  | hub     | 1       | 0         | 2        | 76 B  | 0 B    | 76 B  | 0       | 0       |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+
|               |         | 5       | 0         | 10       | 380 B | 0 B    | 380 B | 13      | 1       |
+---------------+---------+---------+-----------+----------+-------+--------+-------+---------+---------+

+---------------------------------------------------------+
|               RAFT Meta Group Information               |
+--------------+--------+---------+--------+--------+-----+
| Name         | Leader | Current | Online | Active | Lag |
+--------------+--------+---------+--------+--------+-----+
| hub-server-1 |        | true    | true   | 0.26s  | 0   |
| hub-server-2 |        | true    | true   | 0.26s  | 0   |
| hub-server-3 | yes    | true    | true   | 0.00s  | 0   |
+--------------+--------+---------+--------+--------+-----+

If the leaf is using a clustered JetStream, the result shows multiple leaders.

nats --context=sys server report jetstream 6
+-----------------------------------------------------------------------------------------------------------+
|                                             JetStream Summary                                             |
+-------------+----------------+---------+-----------+----------+-------+--------+------+---------+---------+
| Server      | Cluster        | Streams | Consumers | Messages | Bytes | Memory | File | API Req | API Err |
+-------------+----------------+---------+-----------+----------+-------+--------+------+---------+---------+
| srv-A-4252* | test-cluster-2 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
| srv-A-4242  | test-cluster-2 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
| srv-A-4292  | test-cluster-2 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
| srv-A-4222* | test-cluster-1 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
| srv-A-4282  | test-cluster-1 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
| srv-A-4232  | test-cluster-1 | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
+-------------+----------------+---------+-----------+----------+-------+--------+------+---------+---------+
|             |                | 0       | 0         | 0        | 0 B   | 0 B    | 0 B  | 0       | 0       |
+-------------+----------------+---------+-----------+----------+-------+--------+------+---------+---------+

+-------------------------------------------------------+
|              RAFT Meta Group Information              |
+------------+--------+---------+--------+--------+-----+
| Name       | Leader | Current | Online | Active | Lag |
+------------+--------+---------+--------+--------+-----+
| srv-A-4222 | yes    | true    | true   | 0.00s  | 0   |
| srv-A-4232 |        | true    | true   | 0.12s  | 0   |
| srv-A-4282 |        | true    | true   | 0.12s  | 0   |
+------------+--------+---------+--------+--------+-----+

Go install fails

It’s not possible to just use the standard go install approach that go 1.17 employs.

It would be nice 👌 if it was possible.

The problem relates to having overrides in the go.mod.

If the next tagged release has no overrides then go install will just work :)

Clarify Message Id vs Sequence

If I understood correctly, MsgId and Sequence are different things. When getting a message from a stream using nats stream get, it prompts for "Message ID to retrieve", which I think should be the sequence.

stream state printed after purge is stale/misleading

After a purge, I'd expect to see something like this:

State:

             Messages: 0
                Bytes: 0 B
             FirstSeq: 21,544 @ 0001-01-01T00:00:00 UTC
              LastSeq: 21,543 @ 2021-04-20T00:34:52 UTC
     Active Consumers: 0

Instead, the stream state from before the purge is shown, which is not 0.

nats --context=c2-test s purge stest-10 --trace
20:35:07 >>> $JS.API.STREAM.NAMES
{"offset":0}

20:35:07 <<< $JS.API.STREAM.NAMES
{"type":"io.nats.jetstream.api.v1.stream_names_response","total":20,"offset":0,"limit":1024,"streams":["stest-1","stest-10","stest-11","stest-12","stest-13","stest-14","stest-15","stest-16","stest-17","stest-18","stest-19","stest-2","stest-20","stest-3","stest-4","stest-5","stest-6","stest-7","stest-8","stest-9"],"by_meta_leader":true}

? Really purge Stream stest-10 Yes
20:35:10 >>> $JS.API.STREAM.INFO.stest-10


20:35:10 <<< $JS.API.STREAM.INFO.stest-10
{"type":"io.nats.jetstream.api.v1.stream_info_response","config":{"name":"stest-10","subjects":["test.10.*"],"retention":"limits","max_consumers":-1,"max_msgs":-1,"max_bytes":-1,"discard":"old","max_age":0,"max_msg_size":-1,"storage":"file","num_replicas":3,"duplicate_window":120000000000},"created":"2021-04-20T00:33:34.11969Z","state":{"messages":15111,"bytes":695106,"first_seq":6433,"first_ts":"2021-04-20T00:34:50.332776Z","last_seq":21543,"last_ts":"2021-04-20T00:34:52.437344Z","consumer_count":0},"cluster":{"name":"test-cluster-2","leader":"srv-A-4252","replicas":[{"name":"srv-A-4292","current":true,"active":158874000},{"name":"srv-A-4242","current":true,"active":158871000}]}}

20:35:10 >>> $JS.API.STREAM.PURGE.stest-10


20:35:10 <<< $JS.API.STREAM.PURGE.stest-10
{"type":"io.nats.jetstream.api.v1.stream_purge_response","success":true,"purged":15111}

Information for Stream stest-10 created 2021-04-19T20:33:34-04:00

Configuration:

             Subjects: test.10.*
     Acknowledgements: true
            Retention: File - Limits
             Replicas: 3
       Discard Policy: Old
     Duplicate Window: 2m0s
     Maximum Messages: unlimited
        Maximum Bytes: unlimited
          Maximum Age: 0.00s
 Maximum Message Size: unlimited
    Maximum Consumers: unlimited


Cluster Information:

                 Name: test-cluster-2
               Leader: srv-A-4252
              Replica: srv-A-4292, current, seen 0.16s ago
              Replica: srv-A-4242, current, seen 0.16s ago

State:

             Messages: 15,111
                Bytes: 679 KiB
             FirstSeq: 6,433 @ 2021-04-20T00:34:50 UTC
              LastSeq: 21,543 @ 2021-04-20T00:34:52 UTC
     Active Consumers: 0

Consumer --output doesn't record the stream name

When running NATS CLI (v.0.0.21) and using the --output option, the stream name is not recorded.

√ workspace/sotesoft/dockerapi % nats consumer add --output=list.of.values.json business-service-layer list-of-values
? Delivery target pull
? Start policy (all, new, last, 1h, msg sequence) new
? Acknowledgement policy explicit
? Replay policy original
? Filter Stream by subject (blank for all) bsl.list-of-values.>
? Maximum Allowed Deliveries 2
? Maximum Acknowledgements Pending 
√ workspace/sotesoft/dockerapi % cat list.of.values.json 
{
  "durable_name": "list-of-values",
  "deliver_subject": "pull",
  "deliver_policy": "new",
  "ack_policy": "explicit",
  "max_deliver": 2,
  "filter_subject": "bsl.list-of-values.\u003e",
  "replay_policy": "original"
}%

When you run this output file as input using the --config option, the Stream name is prompted for. This will cause an automated process for the installation of a consumer to hang.

Here is what happens when the above list.of.values.json file is used to create the consumer.

?1 workspace/sotesoft/dockerapi % nats consumer add --config=list.of.values.json 
? Select a Stream business-service-layer
Information for Consumer business-service-layer > list-of-values

Configuration:

        Durable Name: list-of-values
    Delivery Subject: pull
      Filter Subject: bsl.list-of-values.>
        Deliver Next: true
          Ack Policy: Explicit
            Ack Wait: 30s
       Replay Policy: Original
  Maximum Deliveries: 2

State:

   Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
     Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
         Outstanding Acks: 0
     Redelivered Messages: 0
     Unprocessed Messages: 0

√ workspace/sotesoft/dockerapi % nats --version
0.0.21

Running on Mac OS 11.2.1

Jetstream bench publisher ack timeout for publishAsync batch is 1 second

I get the error JS PubAsync did not receive a positive ack while running nats bench for JetStream.

Command: nats bench test --js --pub 10 --msgs 1000000 --pubbatch 1500 --size 100 --storage file --replicas 3

11:57:13 Starting benchmark [msgs=1,000,000, msgsize=100 B, pubs=10, subs=0, js=true, storage=file, syncpub=false, pubbatch=1,500, pull=false, pullbatch=100, request=false, reply=false, noqueue=false, maxackpending=-1, replicas=3, nopurge=false, nodelete=false]
11:57:13 Deleting any existing stream
11:57:13 Purging the stream
11:57:13 Will delete the stream at the end of the run
0s [>---------------------------------------------------------] 4%
0s [>---------------------------------------------------------] 4%
1s [=>--------------------------------------------------------] 6%
1s [=>--------------------------------------------------------] 6%
1s [=>--------------------------------------------------------] 6%
1s [>---------------------------------------------------------] 4%
1s [=>--------------------------------------------------------] 6%
1s [=>--------------------------------------------------------] 6%
1s [=>--------------------------------------------------------] 6%
1s [>---------------------------------------------------------] 4%

11:57:15 JS PubAsync did not receive a positive ack

"nats server ping" returns error

Testing against a jetstream cluster configured with

nats:
  image: synadia/nats-server:nightly-20210122
  logging:
    debug: true
    trace: true

  jetstream:
    enabled: true

cluster:
  enabled: true
  name: "nats"
  replicas: 3

Running a cli binary built from 103af2f.

Running ping gives an error output.

$ nats server ping --trace
15:25:28 Unexpected NATS error from server nats://nats:4222: nats: Got an error trying to unmarshal: unexpected end of JSON input

---- ping statistics ----
no responses received

With debug level logging, I got this log output

nats-0 nats [6] 2021/01/22 22:38:20.232880 [DBG] 127.0.0.1:51656 - cid:12 - Client connection created
nats-0 nats [6] 2021/01/22 22:38:22.351599 [DBG] 127.0.0.1:51656 - cid:12 - "v1.11.0:go:NATS CLI Version development" - Client Ping Timer
nats-0 nats [6] 2021/01/22 22:38:25.265245 [DBG] 127.0.0.1:51656 - cid:12 - "v1.11.0:go:NATS CLI Version development" - Client connection closed: Client Closed
nats-2 nats [6] 2021/01/22 22:38:26.538377 [DBG] 10.16.165.247:55368 - rid:11 - Router Ping Timer
nats-2 nats [6] 2021/01/22 22:38:26.538401 [DBG] 10.16.165.247:55368 - rid:11 - Delaying PING due to client activity 0s ago
nats-1 nats [6] 2021/01/22 22:38:26.557274 [DBG] 10.16.217.211:6222 - rid:7 - Router Ping Timer
nats-1 nats [6] 2021/01/22 22:38:26.557297 [DBG] 10.16.217.211:6222 - rid:7 - Delaying PING due to client activity 0s ago

Binary missing in ZIP files

Downloaded the zip file for the 0.0.21 linux-amd64 release, but looks like it's just a Helm chart and no binary. Is this a mistake or am I missing something?

Add object store preview command

Some notes:

  • Add default KV and Object bucket as context and make BUCKET optional in these commands where sensible

General outline:

# creates object store, defaults to file store
nats obj add BUCKET --description FOO --ttl 10h --memory --replica 3

# list objects in the bucket table, just names, ls or json format
nats obj ls BUCKET
nats obj info BUCKET

# uploads a file with progress by default and showing info at the end, if FILE is '-' reads STDIN but requires --name
nats obj put BUCKET FILE 

# upload a file, no progress, json info
nats obj put BUCKET FILE --no-progress --json

# gets FILE from BUCKET into same name locally
nats obj get BUCKET FILE

# gets FILE from BUCKET into different name
nats obj get BUCKET FILE -O target

# get object info, optionally as JSON
nats obj info BUCKET FILE

# deletes a file, prompts, accepts --force
nats obj rm BUCKET FILE 

# not sure how this will look exactly need to play with the API
nats obj watch BUCKET

# adds a link on NAME to BUCKET based on source specification, this might be some URL style link to a object will need to explore api
nats obj link BUCKET NAME <source specification>

# make readonly, optional force via -f
nats obj seal BUCKET

# Backup and restore
nats obj backup BUCKET target
nats obj restore BUCKET source

Adding consumer using '--flow-control' causes panic

I'm using the NATS CLI to add a consumer in a script so I don't want any interactivity. When I run the following command:

nats con add STREAM CONSUMER --filter 'subject' --pull --deliver=last --replay=instant --max-deliver=-1 --max-pending=0 --flow-control

I get:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x80b073]

goroutine 1 [running]:
gopkg.in/alecthomas/kingpin%2ev2.(*boolValue).Set(0xc000220698, 0xdbcd9f, 0x4, 0xc, 0x13d8d00)
        /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/values_generated.go:24 +0x73
gopkg.in/alecthomas/kingpin%2ev2.(*Application).setValues(0xc0004dc000, 0xc000432090, 0x0, 0x0, 0xc, 0xc000432090, 0x0)
        /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:488 +0x469
gopkg.in/alecthomas/kingpin%2ev2.(*Application).Parse(0xc0004dc000, 0xc000114830, 0xc, 0xc, 0x1, 0xc000220550, 0x0, 0x1)
        /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:199 +0xf0
main.main()
        /Users/rip/go/src/github.com/nats-io/natscli/nats/main.go:109 +0x1587

It doesn't matter if I do --flow-control, --flow-control=true or --flow-control=false, the same error happens.
Omitting the --flow-control and answering the prompt does work however.

I am using NATS CLI version 0.0.22

UX: easy to create a push-mode consumer pushing to subject pull

Someone using this nats CLI tried to create a pull-mode consumer on a stream, and when asked for the Delivery target they typed in the word pull.

Hey presto, a push-mode consumer pushing to NATS subject pull.

The only currently supported way of entering this, AFAIK, is to instead just hit enter for an empty target. I only "remembered" this because when prompted for the target, I'd entered ? and read the help text.

Perhaps we should have a case-insensitive comparison of the string against pull and insert a "hey, are you sure?" question; or, if we don't want to affect scriptability here, just throw up a big "WARNING: foo" message in this scenario.

missing or extra cluster name in server ls and --trace is not working

config used:

server.conf

port: 4222
server_name: hub-server

leafnodes {
	port: 7422
}

include ./accounts.conf

leaf.conf

port: 4111
server_name: leaf-server

leafnodes {
	remotes = [
		{
			urls: ["nats-leaf://admin:[email protected]:7422"]
			account: "SYS"
		},
		{
			urls: ["nats-leaf://acc:[email protected]:7422"]
			account: "ACC"
		}
	]
}
include ./accounts.conf

accounts.conf

accounts {
	SYS: {
		users: [{user: admin, password: admin}]
	},
	ACC: {
		users: [{user: acc, password: acc}],
		jetstream: enabled
	}
}
system_account: SYS

# connections without credentials will operate under user acc and thus account ACC
no_auth_user: acc

The nats cli command shows either an extra cluster name or a missing one (the output is the same no matter which server nats is pointed at).
In addition, --trace does not seem to work:

nats --trace  --server nats://admin:admin@localhost:4111 server ls
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                           Server Overview                                                           │
├─────────────┬─────────────┬───────────┬───────────────┬────┬───────┬──────┬────────┬─────┬────────┬─────┬──────┬────────┬───────────┤
│ Name        │ Cluster     │ IP        │ Version       │ JS │ Conns │ Subs │ Routes │ GWs │ Mem    │ CPU │ Slow │ Uptime │ RTT       │
├─────────────┼─────────────┼───────────┼───────────────┼────┼───────┼──────┼────────┼─────┼────────┼─────┼──────┼────────┼───────────┤
│ hub-server  │             │ 0.0.0.0   │ 2.3.0-beta.10 │ no │ 0     │ 76   │ 0      │ 0   │ 29 MiB │ 0.1 │ 0    │ 12.99s │ 900.956µs │
│ leaf-server │ leaf-server │ 0.0.0.0   │ 2.3.0-beta.10 │ no │ 1     │ 77   │ 0      │ 0   │ 30 MiB │ 0.0 │ 0    │ 9.35s  │ 887.021µs │
├─────────────┼─────────────┼───────────┼───────────────┼────┼───────┼──────┼────────┼─────┼────────┼─────┼──────┼────────┼───────────┤
│             │ 1 Clusters  │ 2 Servers │               │ 0  │ 1     │ 153  │        │     │ 59 MiB │     │ 0    │        │           │
╰─────────────┴─────────────┴───────────┴───────────────┴────┴───────┴──────┴────────┴─────┴────────┴─────┴──────┴────────┴───────────╯

╭────────────────────────────────────────────────────────────────────────────────╮
│                                Cluster Overview                                │
├─────────────┬────────────┬───────────────────┬───────────────────┬─────────────┤
│ Cluster     │ Node Count │ Outgoing Gateways │ Incoming Gateways │ Connections │
├─────────────┼────────────┼───────────────────┼───────────────────┼─────────────┤
│ leaf-server │ 1          │ 0                 │ 0                 │ 1           │
├─────────────┼────────────┼───────────────────┼───────────────────┼─────────────┤
│             │ 1          │ 0                 │ 0                 │ 1           │
╰─────────────┴────────────┴───────────────────┴───────────────────┴─────────────╯

crash in compactStrings if cluster.Replicas is empty.

renderCluster chokes on this (shortened to 1)

{
    "by_meta_leader": false,
    "complete": true,
    "limit": 256,
    "offset": 0,
    "streams": [
        {
            "cluster": {
                "name": "test-cluster-2"
            },
            "config": {
                "discard": "old",
                "duplicate_window": 120000000000,
                "max_age": 0,
                "max_bytes": -1,
                "max_consumers": -1,
                "max_msg_size": -1,
                "max_msgs": -1,
                "name": "stest-32",
                "num_replicas": 3,
                "retention": "limits",
                "storage": "file",
                "subjects": [
                    "test.32.*"
                ]
            },
            "created": "2021-04-19T21:30:14.161726Z",
            "state": {
                "bytes": 0,
                "consumer_count": 0,
                "first_seq": 0,
                "first_ts": "0001-01-01T00:00:00Z",
                "last_seq": 0,
                "last_ts": "0001-01-01T00:00:00Z",
                "messages": 0
            }
        }
    ],
    "total": 1,
    "type": "io.nats.jetstream.api.v1.stream_list_response"
}

failed in compactStrings, accessing element 0

	// we dont chop the 0 item off
	for i := shortest - 1; i > 0; i-- {
		s := hnParts[0][i]

This is an issue resulting from an experiment where a follower responded (by_meta_leader: false).
But we fail in compactStrings if source is empty.

nats stream info json reply

I defined a stream with

nats stream add:

             Subjects: GS.>, QS.>, TSM.UPDATE.>, LM.>
     Acknowledgements: true
            Retention: File - Limits
             Replicas: 1
       Discard Policy: Old
     Duplicate Window: 10m0s
     Maximum Messages: unlimited
        Maximum Bytes: unlimited
          Maximum Age: 1d0h0m0s
 Maximum Message Size: unlimited
    Maximum Consumers: unlimited

State:

             Messages: 30
                Bytes: 1.6 KiB
             FirstSeq: 1 @ 2021-01-16T13:59:58 UTC
              LastSeq: 30 @ 2021-01-16T14:21:46 UTC
     Active Consumers: 1

and exported the json definition with "nats stream info [stream] -j"
The resulting json was:

  {
    "config": {
      "name": "MERCURY",
      "subjects": [
        "GS.\u003e",
        "QS.\u003e",
        "TSM.UPDATE.\u003e",
        "LM.\u003e"
      ],
      "retention": "limits",
      "max_consumers": -1,
      "max_msgs": -1,
      "max_bytes": -1,
      "max_age": 86400000000000,
      "max_msg_size": -1,
      "storage": "file",
      "discard": "old",
      "num_replicas": 1,
      "duplicate_window": 600000000000
    },
    "state": {
      "messages": 30,
      "bytes": 1683,
      "first_seq": 1,
      "first_ts": "2021-01-16T13:59:58.985443012Z",
      "last_seq": 30,
      "last_ts": "2021-01-16T14:21:46.635100851Z",
      "consumer_count": 1
    }
  }

If I try to edit something (or nothing at all) and run:

nats stream edit MERCURY --config mercury_stream.json

There are many unexpected differences found :

 Differences (-old +new):
  api.StreamConfig{
        Name:         "MERCURY",
-       Subjects:     []string(Inverse(Sort, []string{"GS.>", "LM.>", "QS.>", "TSM.UPDATE.>"})),
+       Subjects:     []string(Inverse(Sort, []string(nil))),
        Retention:    s"Limits",
-       MaxConsumers: -1,
+       MaxConsumers: 0,
-       MaxMsgs:      -1,
+       MaxMsgs:      0,
-       MaxBytes:     -1,
+       MaxBytes:     0,
-       MaxAge:       s"24h0m0s",
+       MaxAge:       s"0s",
-       MaxMsgSize:   -1,
+       MaxMsgSize:   0,
        Storage:      s"File",
        Discard:      s"Old",
-       Replicas:     1,
+       Replicas:     0,
        NoAck:        false,
        Template:     "",
-       Duplicates:   s"10m0s",
+       Duplicates:   s"0s",
  }

I'll try with

 nats stream add --output

and report

Support exposing current metric value on server check command

Current behaviour

Currently, when we execute the nats server check command, the client expects a set of threshold flags as input so it can answer whether the server is healthy according to the provided thresholds.

Feature request

It would be useful to get the current metric values, instead of just knowing whether a threshold was exceeded.

This would enable us to build a Prometheus exporter component covering the whole nats server check set of commands. We would use these metrics in alerts and define the needed thresholds in the alerts themselves.

Desired behaviour

  • Optional threshold flags, because those thresholds could be set on the alerts.
  • New metrics on prometheus format to show the current health state.

Example

Currently

nats server check stream --server nats://nats:4222 \
  --stream TEST \
  --peer-expect 1 \
  --lag-critical 100 \
  --msgs-warn 4000 \
  --msgs-critical 3000 \
  --min-sources 33 \
  --max-sources 34 \
  --peer-lag-critical 100 \
  --peer-seen-critical 5m \
  --format prometheus

Example output:

# HELP nats_server_check_stream_peer_lagged RAFT peers that are lagged more than configured threshold
# TYPE nats_server_check_stream_peer_lagged gauge
nats_server_check_stream_peer_lagged{item="TEST"} 0
...

Proposed

nats server check stream --server nats://nats:4222 \
  --stream TEST \
  --format prometheus

In the previous example the command exports a metric saying whether a given peer is lagged according to the provided threshold flag --peer-lag-critical 100.
In this example, it would instead export the peer lag itself for each peer/stream pair.

This strategy could be applied for every other type of metric currently available on the tool.

Example output:

# HELP nats_server_check_stream_peer_lag RAFT peer lag
# TYPE nats_server_check_stream_peer_lag gauge
nats_server_check_stream_peer_lag{item="TEST", peer="nats-2"} 200
...
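
A minimal sketch of emitting the proposed gauge in the Prometheus text exposition format, using the metric and label names from the examples above:

package main

import (
	"fmt"
	"io"
	"os"
)

// writePeerLag renders one gauge sample per peer/stream pair.
func writePeerLag(w io.Writer, stream, peer string, lag int64) {
	fmt.Fprintln(w, "# HELP nats_server_check_stream_peer_lag RAFT peer lag")
	fmt.Fprintln(w, "# TYPE nats_server_check_stream_peer_lag gauge")
	fmt.Fprintf(w, "nats_server_check_stream_peer_lag{item=%q, peer=%q} %d\n", stream, peer, lag)
}

func main() {
	writePeerLag(os.Stdout, "TEST", "nats-2", 200)
}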

Thanks 🙏

Tag commands which require a system account

It would be very helpful if the available commands used a standard piece of syntax to mark those which require a system account, as opposed to any account. I find myself repeatedly telling people "that error doesn't mean there's a problem, your account just doesn't have access".

Perhaps also, if there were a way to decorate contexts with flags, we could mark those contexts which use system accounts (probably manually, or perhaps automatically after a connection test); then, based on the current context, we could elide the commands and completions which require a system account.

some values in a stream config file can be overridden during stream add, but not for consumer

Using a config file, the stream/subject name can be overridden such that this works:

for i in {0..20} ; do nats --context=c2-test s add --config stream.cfg stest-$i --subjects "test.$i.*" --trace; done

When attempting something similar with consumer (and changing the consumer name) this happens:

> nats --context=c2-test c add --config cons.cfg stest dur
nats: error: durable consumer name in cons.cfg does not match CLI consumer name dur, try --help

NB: stream add only allows that for some values; others, like the replication count, are simply ignored.

Issue: dupe-window isn't honored on the command line

$ natscli stream create --user=jsadmin --password=password --subjects=foo --storage=file --retention=limits --ack --max-msgs=100 --max-bytes=204800 --max-age="1h" --discard=old --max-msg-size=20480 --dupe-window="" test-stream

? Duplicate tracking time window 
Stream test-stream was created

Information for Stream test-stream

NATS CLI pulls can leak subscriptions

Using the NATS CLI to pull consumer messages (nats consumer next) creates orphaned pull-request subscriptions on the consumer if there is no message to pull in that CLI invocation.
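
A sketch of one client-side way to avoid leaving the interest dangling, assuming nc is a connected *nats.Conn and using the $JS.API.CONSUMER.MSG.NEXT subject with illustrative stream/consumer names:

inbox := nats.NewInbox()
sub, err := nc.SubscribeSync(inbox)
if err != nil {
	log.Fatal(err)
}
// Tear the inbox interest down even when no message arrives; leaving
// it in place is what orphans the pull request on the consumer.
defer sub.Unsubscribe()

if err := nc.PublishRequest("$JS.API.CONSUMER.MSG.NEXT.stest.dur", inbox, nil); err != nil {
	log.Fatal(err)
}

msg, err := sub.NextMsg(2 * time.Second)
if err != nil {
	log.Printf("no message to pull: %v", err) // timeout; the defer still cleans up
	return
}
msg.Ack()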

Can't get messages in JS mode

nats-server -js

Go Client code

log.Info("NATS init")

// Connect Options.
opts := []nats.Option{nats.Name("NATS Sample Subscriber")}
opts = setupConnOptions(opts)

// Connect to NATS
nc, err := nats.Connect(n.Host, opts...)
if err != nil {
	log.Fatal(err)
}

// Create JetStream Context; the error must not be discarded
js, err := nc.JetStream(nats.PublishAsyncMaxPending(256))
if err != nil {
	log.Fatal(err)
}

// Simple Async Ephemeral Consumer; Subscribe returns an error when no
// stream covers the subject, which is worth surfacing here
if _, err := js.Subscribe("NATS.Positions", func(m *nats.Msg) {
	fmt.Printf("Received a JetStream message: %s\n", string(m.Data))
}); err != nil {
	log.Fatal(err)
}

log.Printf("Listening on [%s]", "NATS.Positions")
runtime.Goexit()

CLI Publisher

➜  ~ nats pub NATS.Positions --count 10 "Message {{Count}}: {{ Random 10 100 }}"
21:18:26 Published 87 bytes to "NATS.Positions"
21:18:26 Published 26 bytes to "NATS.Positions"
21:18:26 Published 103 bytes to "NATS.Positions"
21:18:26 Published 24 bytes to "NATS.Positions"
21:18:26 Published 34 bytes to "NATS.Positions"
21:18:26 Published 44 bytes to "NATS.Positions"
21:18:26 Published 108 bytes to "NATS.Positions"
21:18:26 Published 23 bytes to "NATS.Positions"
21:18:26 Published 83 bytes to "NATS.Positions"
21:18:26 Published 103 bytes to "NATS.Positions"

Improve handling split clusters

When a cluster is split, stream info and consumer info will still work, but stream names won't.

However, our startup uses StreamNames to get the list of streams in order to figure out if we need to ask the user.

If we flip things so that we do stream info first - saving and returning that info - and only then prompt, we can cut down startup API access significantly. We would never do stream names, and we could optimise other access to the loaded stream using the cached instance, as sketched below.
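
A sketch of that flow with hypothetical helpers (fetchAllStreamInfo paging $JS.API.STREAM.LIST once, promptForStream wrapping the survey prompt):

func pickStream() (*api.StreamInfo, error) {
	infos, err := fetchAllStreamInfo() // one API pass, works in a split cluster
	if err != nil {
		return nil, err
	}

	names := make([]string, 0, len(infos))
	for name := range infos {
		names = append(names, name)
	}
	sort.Strings(names)

	name, err := promptForStream(names)
	if err != nil {
		return nil, err
	}
	return infos[name], nil // reuse the cached info, no second round-trip
}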

for streams with replication factor 1, it is hard to tell where they are located.

For a consumer, one can see cluster information; for a stream, not so much.
Ideally the stream output would look identical to that shown for streams with R>1.

> nats -s nats://ce1:ce1@localhost:4222 str info
? Select a Stream test3
Information for Stream test3 created 2021-04-26T17:06:04-04:00

Configuration:

             Subjects: foo
     Acknowledgements: true
            Retention: File - WorkQueue
             Replicas: 1
       Discard Policy: New
     Duplicate Window: 2m0s
     Maximum Messages: unlimited
        Maximum Bytes: unlimited
          Maximum Age: 0.00s
 Maximum Message Size: unlimited
    Maximum Consumers: unlimited


State:

             Messages: 0
                Bytes: 0 B
             FirstSeq: 0
              LastSeq: 0
     Active Consumers: 1

> nats -s nats://ce1:ce1@localhost:4222 str report
Obtaining Stream stats

+-----------------------------------------------------------------------------+
|                                Stream Report                                |
+--------+---------+-----------+----------+-------+------+---------+----------+
| Stream | Storage | Consumers | Messages | Bytes | Lost | Deleted | Replicas |
+--------+---------+-----------+----------+-------+------+---------+----------+
| test3  | File    | 1         | 0        | 0 B   | 0    | 0       |          |
+--------+---------+-----------+----------+-------+------+---------+----------+

> nats -s nats://ce1:ce1@localhost:4222 c info
? Select a Stream test3
? Select a Consumer con
Information for Consumer test3 > con created 2021-04-26T17:08:00-04:00

Configuration:

        Durable Name: con
           Pull Mode: true
         Deliver All: true
          Ack Policy: Explicit
            Ack Wait: 30s
       Replay Policy: Instant
     Max Ack Pending: 20,000

Cluster Information:

                Name: leaf
              Leader: NANK5OKHNEDCIR5Z37OTKOPJVEL2GU6ZNIYZEHOC7ZAOJUM3ZNNHFUDF

State:

   Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
     Acknowledgment floor: Consumer sequence: 0 Stream sequence: 0
         Outstanding Acks: 0 out of maximum 20000
     Redelivered Messages: 0
     Unprocessed Messages: 0

> nats -s nats://ce1:ce1@localhost:4222 c report
? Select a Stream test3
Consumer report for test3 with 1 consumers

+----------+------+------------+----------+-------------+-------------+-------------+-----------+-----------------------------------------------------------+
| Consumer | Mode | Ack Policy | Ack Wait | Ack Pending | Redelivered | Unprocessed | Ack Floor | Cluster                                                   |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+-----------------------------------------------------------+
| con      | Pull | Explicit   | 30.00s   | 0           | 0           | 0           | 0         | NANK5OKHNEDCIR5Z37OTKOPJVEL2GU6ZNIYZEHOC7ZAOJUM3ZNNHFUDF* |
+----------+------+------------+----------+-------------+-------------+-------------+-----------+-----------------------------------------------------------+

>

Add a --count option to nats sub

This is a suggestion only, but it would be nice if nats could just exit, or auto-unsubscribe and then exit, after --count n messages.
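
A minimal sketch of the requested behaviour using the Go client's Subscription.AutoUnsubscribe, assuming nc is a connected *nats.Conn:

count := 10
done := make(chan struct{})
received := 0
sub, err := nc.Subscribe("cli.demo", func(m *nats.Msg) {
	fmt.Println(string(m.Data))
	received++ // handlers run sequentially per subscription
	if received == count {
		close(done)
	}
})
if err != nil {
	log.Fatal(err)
}
if err := sub.AutoUnsubscribe(count); err != nil {
	log.Fatal(err)
}
<-done // exit once --count messages have arrived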

Piping yes to stream deletion causes SIGSEGV

When SOME_STREAM is the name of an actual stream, yes yes | nats str delete SOME_STREAM will SIGSEGV.

? Really delete Stream Junk (y/N) panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x83b0fc9]
goroutine 1 [running]:
github.com/AlecAivazis/survey/v2/terminal.(*RuneReader).ReadLine(0x57303e00, 0x0, 0x8d51968, 0x5700e110, 0x89a3a74, 0x5700e118, 0x57303e00)
    /Users/rip/go/pkg/mod/github.com/!alec!aivazis/survey/[email protected]/terminal/runereader.go:56 +0x3b9
github.com/AlecAivazis/survey/v2.(*Confirm).getBool(0x57084f00, 0x8874000, 0x570b5364, 0x87c5d00, 0x0, 0x0)
    /Users/rip/go/pkg/mod/github.com/!alec!aivazis/survey/[email protected]/confirm.go:57 +0x109
github.com/AlecAivazis/survey/v2.(*Confirm).Prompt(0x57084f00, 0x570b5364, 0x5700e108, 0x89a6b58, 0x5700e110, 0x89a3a74)
    /Users/rip/go/pkg/mod/github.com/!alec!aivazis/survey/[email protected]/confirm.go:136 +0xe5
github.com/AlecAivazis/survey/v2.Ask(0x5727fdf4, 0x1, 0x1, 0x8732240, 0x5701f7f4, 0x0, 0x0, 0x0, 0x5706ca20, 0x811cc0b)
    /Users/rip/go/pkg/mod/github.com/!alec!aivazis/survey/[email protected]/survey.go:293 +0x400
github.com/AlecAivazis/survey/v2.AskOne(...)
    /Users/rip/go/pkg/mod/github.com/!alec!aivazis/survey/[email protected]/survey.go:236
main.askConfirmation(0x57026aa0, 0x19, 0x5727fe00, 0x1, 0x1, 0x57026aa0)
    /Users/rip/go/src/github.com/nats-io/natscli/nats/util.go:273 +0xe3
main.(*streamCmd).rmAction(0x57124500, 0x5724e0a0, 0x0, 0x0)
    /Users/rip/go/src/github.com/nats-io/natscli/nats/stream_command.go:1701 +0x174
gopkg.in/alecthomas/kingpin%2ev2.(*actionMixin).applyActions(0x57302e4c, 0x5724e0a0, 0x0, 0x0)
    /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/actions.go:28 +0x51
gopkg.in/alecthomas/kingpin%2ev2.(*Application).applyActions(0x57120900, 0x5724e0a0, 0x0, 0x0)
    /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:557 +0xb6
gopkg.in/alecthomas/kingpin%2ev2.(*Application).execute(0x57120900, 0x5724e0a0, 0x57318200, 0x2, 0x2, 0x0, 0x0, 0x0, 0x877a940)
    /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:390 +0x7d
gopkg.in/alecthomas/kingpin%2ev2.(*Application).Parse(0x57120900, 0x57016128, 0x3, 0x3, 0x1, 0x572c83b8, 0x0, 0x2)
    /Users/rip/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:222 +0x183
main.main()
    /Users/rip/go/src/github.com/nats-io/natscli/nats/main.go:109 +0x113b
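
A sketch of a guard that would avoid the panic, assuming askConfirmation keeps its (bool, error) shape and using golang.org/x/term for the TTY test:

// Refuse to prompt when stdin is not a terminal, instead of letting
// the survey rune reader dereference a nil terminal state.
if !term.IsTerminal(int(os.Stdin.Fd())) {
	return false, fmt.Errorf("cannot prompt for confirmation without a terminal")
}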

Issue backing up a memory store

  1. Create a memory store named mem-test and publish a few messages.
 $ ./nats stream info mem-test
Information for Stream mem-test

Configuration:

             Subjects: foo
     Acknowledgements: true
            Retention: Memory - Limits
             Replicas: 1
       Discard Policy: Old
     Duplicate Window: 2m0s
     Maximum Messages: unlimited
        Maximum Bytes: unlimited
          Maximum Age: 0s
 Maximum Message Size: unlimited
    Maximum Consumers: unlimited

State:

            Messages: 4
               Bytes: 96 B
            FirstSeq: 1 @ 2020-12-22T16:46:03 UTC
             LastSeq: 4 @ 2020-12-22T16:46:07 UTC
    Active Consumers: 0

$ ./nats stream view mem-test
[1] Subject: foo Received: 2020-12-22T09:46:03-07:00

hello

[2] Subject: foo Received: 2020-12-22T09:46:05-07:00

hello

[3] Subject: foo Received: 2020-12-22T09:46:05-07:00

hello

[4] Subject: foo Received: 2020-12-22T09:46:07-07:00

hello

09:57:23 Reached apparent end of data
  2. Attempt a backup...
$ ./nats stream backup mem-test mem-test-backup
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x159261b]

goroutine 1 [running]:
main.(*streamCmd).backupAction(0xc00016cb00, 0xc0001723f0, 0x0, 0x0)
	/Users/colinsullivan/Dropbox/go/src/github.com/nats-io/natscli/nats/stream_command.go:408 +0x85b
gopkg.in/alecthomas/kingpin%2ev2.(*actionMixin).applyActions(0xc000346cd8, 0xc0001723f0, 0x0, 0x0)
	/Users/colinsullivan/Dropbox/go/pkg/mod/gopkg.in/alecthomas/[email protected]/actions.go:28 +0x6d
gopkg.in/alecthomas/kingpin%2ev2.(*Application).applyActions(0xc000332000, 0xc0001723f0, 0x0, 0x0)
	/Users/colinsullivan/Dropbox/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:557 +0xdc
gopkg.in/alecthomas/kingpin%2ev2.(*Application).execute(0xc000332000, 0xc0001723f0, 0xc00034abc0, 0x2, 0x2, 0x0, 0x0, 0x0, 0x3)
	/Users/colinsullivan/Dropbox/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:390 +0x8f
gopkg.in/alecthomas/kingpin%2ev2.(*Application).Parse(0xc000332000, 0xc000020060, 0x4, 0x4, 0x1, 0xc00013c418, 0x0, 0x1)
	/Users/colinsullivan/Dropbox/go/pkg/mod/gopkg.in/alecthomas/[email protected]/app.go:222 +0x1fe
main.main()
	/Users/colinsullivan/Dropbox/go/src/github.com/nats-io/natscli/nats/main.go:92 +0x1387
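
A sketch of a friendlier failure in backupAction, assuming the stream's api.StreamConfig has already been loaded into cfg: memory-backed streams have no file-store snapshot to copy, so return an error instead of dereferencing a nil handle.

if cfg.Storage == api.MemoryStorage {
	return fmt.Errorf("cannot backup %q: memory storage streams are not supported", cfg.Name)
}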

nats: error: could not select Stream: invalid character '+' looking for beginning of value

Reproduction steps

Start NATS

docker run -ti -p 4222:4222 --name jetstream synadia/jsm@sha256:505dcdeb6752fb6ec88ef5b905e9c4e0b9ff55dee4f219eebb43ae5c9ae64159 server

Create a new stream with the subject filter set to > and all the default options

nats stream add BUG --subjects=">"

Running nats consumer or nats stream commands returns an error

nats stream info --trace
19:24:23 >>> $JS.API.STREAM.NAMES
{"offset":0}

19:24:23 <<< $JS.API.STREAM.NAMES
+OK {"stream": "BUG", "seq": 13}

nats: error: could not pick a Stream to operate on: invalid character '+' looking for beginning of value

Once this is done, you cannot rm the offending stream.
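
A sketch of tolerating the legacy "+OK {...}" reply form visible in the trace, stripping the status prefix before JSON-decoding the body:

body := bytes.TrimSpace(m.Data)
if bytes.HasPrefix(body, []byte("+OK")) {
	// tolerate the old-style status prefix seen above
	body = bytes.TrimSpace(bytes.TrimPrefix(body, []byte("+OK")))
}
var resp map[string]interface{} // stand-in for the real response type
if err := json.Unmarshal(body, &resp); err != nil {
	return fmt.Errorf("could not parse stream names reply: %w", err)
}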

Server report displays time since epoch when node not active

When a node is not active, the report humanizes the result of time.Since(time.Unix(0,0)), so the value gets displayed in years:

+---------------------------------------------------------+
|               RAFT Meta Group Information               |
+------+--------+---------+--------+----------------+-----+
| Name | Leader | Current | Online | Active         | Lag |
+------+--------+---------+--------+----------------+-----+
| A    |        | true    | true   | 0.36s          | 0   |
| B    | yes    | true    | true   | 0.00s          | 0   |
| C    |        | false   | false  | 51y83d6h10m28s | 14  |
| D    |        | false   | false  | 7m23s          | 14  |
| E    |        | true    | true   | 0.36s          | 0   |
+------+--------+---------+--------+----------------+-----+

Maybe it could display unknown or - instead?

+---------------------------------------------------------+
|               RAFT Meta Group Information               |
+------+--------+---------+--------+----------------+-----+
| Name | Leader | Current | Online | Active         | Lag |
+------+--------+---------+--------+----------------+-----+
| A    |        | true    | true   | 0.36s          | 0   |
| B    | yes    | true    | true   | 0.00s          | 0   |
| C    |        | false   | false  | -              | 14  |
| D    |        | false   | false  | 7m23s          | 14  |
| E    |        | true    | true   | 0.36s          | 0   |
+------+--------+---------+--------+----------------+-----+
nats -s nats://sys:[email protected]:4222 server report jetstream 5 --trace
22:10:28 >>> $SYS.REQ.SERVER.PING.JSZ: {}
22:10:28 <<< {"data":{"server_id":"NDNJ4EA6VJOU3AX3QBTCGGBNAPZAXGM6ON2UZP73BE7YZCMV64BETW5I","now":"2021-03-12T06:10:28.823705Z","config":{"max_memory":6442450944,"max_storage":456062287872,"store_dir":"nodes/A/jetstream"},"memory":0,"storage":0,"api":{"total":176,"errors":32},"current_api_calls":0,"meta_cluster":{"name":"ABC","leader":"B","replicas":[{"name":"D","current":false,"offline":true,"active":337619352000,"lag":1},{"name":"B","current":true,"active":359040000,"lag":1},{"name":"E","current":false,"active":240190733000,"lag":14},{"name":"C","current":false,"offline":true,"active":778235578000,"lag":14}]}},"server":{"name":"A","host":"0.0.0.0","id":"NDNJ4EA6VJOU3AX3QBTCGGBNAPZAXGM6ON2UZP73BE7YZCMV64BETW5I","cluster":"ABC","ver":"2.2.0-RC.8","seq":25904,"jetstream":true,"time":"2021-03-12T06:10:28.823775Z"}}
22:10:28 <<< {"data":{"server_id":"NCUWG26LWK4OVMW6FURORX4C3HS5IFQJWLDDWXYL6XUEC2FCKZGNPPRX","now":"2021-03-12T06:10:28.823826Z","config":{"max_memory":-1,"max_storage":-1,"store_dir":"./nodes/E"},"memory":0,"storage":0,"api":{"total":4,"errors":4},"current_api_calls":0,"meta_cluster":{"name":"ABC","leader":"B","replicas":[{"name":"B","current":true,"active":359162000,"lag":14},{"name":"A","current":false,"active":240190701000,"lag":14},{"name":"C","current":false,"offline":true,"active":1615529428823841000,"lag":14},{"name":"D","current":false,"offline":true,"active":1615529428823841000,"lag":14}]}},"server":{"name":"E","host":"0.0.0.0","id":"NCUWG26LWK4OVMW6FURORX4C3HS5IFQJWLDDWXYL6XUEC2FCKZGNPPRX","cluster":"ABC","ver":"2.2.0-RC.8","seq":2046,"jetstream":true,"time":"2021-03-12T06:10:28.823856Z"}}
22:10:28 <<< {"data":{"server_id":"ND2GVEEXLVAQMWBXQM6FZRBFLPFELETDHC75JYFNM7255EGW5NTHFN5M","now":"2021-03-12T06:10:28.823901Z","config":{"max_memory":-1,"max_storage":-1,"store_dir":"./nodes/B"},"memory":0,"storage":479165,"api":{"total":368,"errors":138},"current_api_calls":0,"total_streams":1,"total_messages":10195,"total_message_bytes":479165,"meta_cluster":{"name":"ABC","leader":"B","replicas":[{"name":"A","current":true,"active":359185000},{"name":"C","current":false,"offline":true,"active":1615529428823921000,"lag":14},{"name":"D","current":false,"offline":true,"active":443238313000,"lag":14},{"name":"E","current":true,"active":359168000}]}},"server":{"name":"B","host":"0.0.0.0","id":"ND2GVEEXLVAQMWBXQM6FZRBFLPFELETDHC75JYFNM7255EGW5NTHFN5M","cluster":"ABC","ver":"2.2.0-RC.8","seq":2109,"jetstream":true,"time":"2021-03-12T06:10:28.823937Z"}}
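
A sketch of the kind of guard that would produce the - column, assuming the replica's activity arrives as a time.Duration (the ten-year cutoff is purely illustrative):

// renderActive protects the report column from the bogus value that
// time.Since(time.Unix(0, 0)) produces for never-seen nodes.
func renderActive(active time.Duration) string {
	if active > 10*365*24*time.Hour {
		return "-"
	}
	return active.Round(10 * time.Millisecond).String()
}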

Wait after purge

At the moment we do a stream info right after purge and show it - most often that still shows messages.

We should rather poll a few times until the state is zeroed and only then show the info.
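
A minimal sketch of such a poll, assuming a caller-supplied function that returns the stream's current message count:

// waitForPurge polls until the stream reports zero messages or the
// attempts run out, then lets the caller show the (now settled) info.
func waitForPurge(messages func() (uint64, error), attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		n, err := messages()
		if err != nil {
			return err
		}
		if n == 0 {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("stream still reports messages after %d polls", attempts)
}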

it would be nice if the context edit command would take the default options for editing

Ideally all the flags that correspond to config options

nats context edit mix -h
usage: nats context edit <name>

Edit a context in your EDITOR

Flags:
  -h, --help                    Show context-sensitive help (also try --help-long and --help-man).
      --version                 Show application version.
  -s, --server=NATS_URL         NATS server urls
      --user=NATS_USER          Username or Token
      --password=NATS_PASSWORD  Password
      --creds=NATS_CREDS        User credentials
      --nkey=NATS_NKEY          User NKEY
      --tlscert=NATS_CERT       TLS public certificate
      --tlskey=NATS_KEY         TLS private key
      --tlsca=NATS_CA           TLS certificate authority chain
      --timeout=NATS_TIMEOUT    Time to wait on responses from NATS
      --js-api-prefix=PREFIX    Subject prefix for access to JetStream API
      --js-event-prefix=PREFIX  Subject prefix for access to JetStream Advisories
      --js-domain=DOMAIN        JetStream domain to access
      --context=CONTEXT         Configuration context
      --trace                   Trace API interactions

would be available to edit the context. In this example I'd like to just have the server URLs overwritten, without opening an editor.

nats context edit mix --server "nats://127.0.0.1:4222,nats://127.0.0.1:4232,nats://127.0.0.1:4242,nats://127.0.0.1:4252,nats://127.0.0.1:4262,nats://127.0.0.1:4272,nats://127.0.0.1:4282,nats://127.0.0.1:4292,nats://127.0.0.1:4202"

This way it'd be much easier to share a context for a demo etc., just by providing the right commands to create one.

go install github.com/nats-io/natscli@latest fails

This is related to Go 1.17's deprecation of go get for installing binaries; go install refuses modules whose go.mod contains replace directives.

fails:

go install github.com/nats-io/natscli@latest
go: downloading github.com/nats-io/natscli v0.0.20
go install: github.com/nats-io/natscli@latest (in github.com/nats-io/[email protected]):
        The go.mod file for the module providing named packages contains one or
        more replace directives. It must not contain directives that would cause
        it to be interpreted differently than if it were the main module.

works:


go get github.com/nats-io/natscli
go get: installing executables with 'go get' in module mode is deprecated.
        Use 'go install pkg@version' instead.
        For more information, see https://golang.org/doc/go-get-install-deprecation
        or run 'go help get' or 'go help install'.

nats context ls not unicode-aware for string lengths

The context ls command, at least, appears to be using something other than Unicode display width for columnar alignment.

$ nats context ls
Known contexts:

   ngs-pdp-applewood   homesvr creds for synadia pdp-prod-applewood [our-js]
   ngs-pdp-applewood-anyhomesvr creds for synadia pdp-prod-applewood [any js]
   ngs-pdp-に           homesvr creds for synadia pdp-prod-に [our-js]
   ngs-pdp-に-any       homesvr creds for synadia pdp-prod-に [any js]
   tngs-pdp-a          homesvr creds for test-syn pdp-test-a [our-js]
[...]
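
A minimal sketch of display-width-aware padding using the github.com/mattn/go-runewidth package (an assumption about the fix, not the current implementation), which counts terminal cells rather than bytes:

package main

import (
	"fmt"
	"strings"

	"github.com/mattn/go-runewidth"
)

// pad right-fills s to the given display width in terminal cells.
func pad(s string, width int) string {
	gap := width - runewidth.StringWidth(s) // cells, not len(s) bytes
	if gap < 0 {
		gap = 0
	}
	return s + strings.Repeat(" ", gap)
}

func main() {
	for _, name := range []string{"ngs-pdp-applewood", "ngs-pdp-に"} {
		fmt.Printf("%s description\n", pad(name, 20))
	}
}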

feature: context validation

When copying contexts between systems and wanting to make sure I have everything right, I find I want nats context validate or nats context ls --check or something like that.

For each context (ideally), validate that all referenced file-paths exist and are not empty.

The ls --check approach would clearly be to validate all of them. Perhaps nats context validate should just validate the current one, unless --all is given.

Bonus feature: nats context validate --all --test-connect -- open connections to the NATS servers and authenticate, before dropping the connection.
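
A minimal sketch of the per-context check, assuming the file-backed fields (creds, nkey, TLS cert/key/CA) have been collected into a map from field name to path:

func validatePaths(paths map[string]string) []error {
	var errs []error
	for field, p := range paths {
		if p == "" {
			continue // field not set in this context
		}
		fi, err := os.Stat(p)
		switch {
		case err != nil:
			errs = append(errs, fmt.Errorf("%s: %w", field, err))
		case fi.Size() == 0:
			errs = append(errs, fmt.Errorf("%s: %s is empty", field, p))
		}
	}
	return errs
}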

nagios-style check for jetstream responder found

The nats server check connection mode is really helpful. I'd like a JetStream equivalent.

If I can supply a context or creds for a monitoring account, and also --js-domain, as I can now, to target which JetStream leafnode I'm talking to, then a nats CLI mode which checks for a JetStream responder would let me monitor JetStream being "up" and plug this into any NAGIOS-compatible setup.

A bonus would be if it could take the maximum available disk/memory space and the used space, and transition into a warning state when the available space drops below a given percentage (CLI-overrideable in the usual manner).
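
A minimal sketch of the responder probe, assuming nc is connected with the monitoring account's credentials (with a --js-domain the API prefix would differ):

// Ask for account JetStream info; a timeout means no responder.
resp, err := nc.Request("$JS.API.INFO", nil, 2*time.Second)
if err != nil {
	fmt.Println("CRITICAL: no JetStream responder:", err)
	os.Exit(2) // NAGIOS critical
}
fmt.Println("OK: JetStream responded:", string(resp.Data))
os.Exit(0)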
