
nyzoverifier's Issues

Suggestion: Nyzo ICP compatibility?

Not sure if this is the right place for this, but I just had the idea as a third alternative for Nyzo verifiers....
Instead of using a VPS (which is prone to centralization) or self-hosting (which is unreliable), would it be possible to piggyback on the hosting technology that Dfinity created? https://dfinity.org/
By using a VPS-like solution on top of another blockchain, whether that is ICP or something similar, I think it could help network decentralization.

Too lenient in-queue verifier proofs.

Ref:

A huge part of the queue is composed of "red" verifiers.
With the current implementation, a new IP only has to answer one nodejoin to be added to the "nodes" file; after that, all it has to do is send nodejoins from its IP to reset the "failed" counter.
That "failed" counter is only incremented on a failed answer. It is reset on a successful answer (good), but also on any incoming nodejoin message.
Big players with large IP blocks exploit this to drastically reduce costs: once their IPs are in the "nodes" file, they keep them alive and maintain their rank just by spamming nodejoins to the cycle every few hours.

Proposal 1:
Do not reset the failed counter on incoming nodejoin messages, only on a successful answer.
Maybe raise the threshold slightly to avoid side effects, such as DDoS becoming too effective.
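
Proposal 1 could be sketched like this (a hypothetical NodeHealth class with an illustrative threshold, not the actual nyzoVerifier code):

```java
// Hypothetical sketch of Proposal 1: the failed counter is only reset on a
// successful answer to our own query, never on an incoming nodejoin message.
public class NodeHealth {

    private int failedCount = 0;
    private static final int REMOVAL_THRESHOLD = 6;  // illustrative, slightly raised

    // Called when the node answers one of our requests.
    public void markSuccessfulAnswer() {
        failedCount = 0;
    }

    // Called when the node fails to answer one of our requests.
    public void markFailedAnswer() {
        failedCount++;
    }

    // Called when the node sends us an unsolicited nodejoin message.
    // Proposal 1: this no longer resets the counter.
    public void markIncomingNodejoin() {
        // intentionally does nothing
    }

    public boolean shouldRemove() {
        return failedCount >= REMOVAL_THRESHOLD;
    }
}
```

With this change, spamming nodejoins no longer keeps a dead IP alive; only actually answering queries does.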

Proposal 2:
(Thanks to Dkat)
Faking a nodejoin or nodejoin answer is far too easy and can be done with a few lines of code, without needing a real verifier or any chain data.

To be valid, a nodejoin answer could require a simple challenge, for instance:

  • the latest known block, signed with the in-queue verifier key (not tracking perfectly, maybe a bit late, but not too much)
  • a small PoW challenge
  • probably limit answers to in-cycle node IPs, with a quota to avoid DoS by requesting many challenges.
    Requiring a more complex answer to nodejoins would make it harder to fake a real verifier and would encourage running the official code.
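
The "small pow challenge" part of this proposal could look like the following sketch (a hash-based puzzle with illustrative names and difficulty, not an actual spec):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative nodejoin proof-of-work: the joiner must find a nonce so that
// SHA-256(challenge + ":" + nonce) starts with a given number of zero bits.
public class NodejoinPow {

    // SHA-256 helper; the algorithm is guaranteed to exist on every JVM.
    public static byte[] sha256(String input) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(input.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Count the leading zero bits of a hash.
    public static int leadingZeroBits(byte[] hash) {
        int bits = 0;
        for (byte b : hash) {
            if (b == 0) {
                bits += 8;
                continue;
            }
            bits += Integer.numberOfLeadingZeros(b & 0xff) - 24;
            break;
        }
        return bits;
    }

    // Joiner side: brute-force a nonce. Cheap for one real verifier, costly at spam scale.
    public static long solve(String challenge, int difficultyBits) {
        for (long nonce = 0; ; nonce++) {
            if (leadingZeroBits(sha256(challenge + ":" + nonce)) >= difficultyBits) {
                return nonce;
            }
        }
    }

    // In-cycle side: a single hash checks the answer.
    public static boolean verify(String challenge, long nonce, int difficultyBits) {
        return leadingZeroBits(sha256(challenge + ":" + nonce)) >= difficultyBits;
    }
}
```

The asymmetry is the point: verifying costs one hash, solving costs many, so faking thousands of nodejoin answers per hour stops being free.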

NTTP Suggestion: Cycle TX management

Currently, once a cycle tx is issued, it remains active until

  • it is replaced by another one
  • it passes (> 50% YES)

Some cycle txs, including spammy ones, can therefore remain active indefinitely, with many YES and NO votes, or just NO votes.
They occupy block space and will never clear.

I suggest 2 possible changes to the cycle TX life cycle:

  • A cycle TX that gets > 50% NO votes is removed (this could be formally recorded by a 1 µn meta tx from the cycle)
  • A cycle TX has a timeout of six months. After that time, if it has not passed, it is removed.
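
Both proposed rules can be sketched as simple predicates (illustrative names and thresholds, not actual nyzoVerifier code):

```java
// Hypothetical sketch of the two proposed cycle-transaction cleanup rules.
public class CycleTxLifecycle {

    private static final long TIMEOUT_MILLIS = 183L * 24 * 60 * 60 * 1000;  // ~6 months

    // Rule 1: a transaction with a strict majority of NO votes is removed.
    public static boolean rejectedByVote(int noVotes, int cycleLength) {
        return noVotes * 2 > cycleLength;
    }

    // Rule 2: a transaction that has not passed within the timeout is removed.
    public static boolean expired(long issuedTimestamp, long nowMillis) {
        return nowMillis - issuedTimestamp > TIMEOUT_MILLIS;
    }

    public static boolean shouldRemove(int noVotes, int cycleLength,
                                       long issuedTimestamp, long nowMillis) {
        return rejectedByVote(noVotes, cycleLength) || expired(issuedTimestamp, nowMillis);
    }
}
```

Either rule alone would already bound the lifetime of spammy cycle txs; together they also clear txs that simply never reach a majority.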

NO PRIVATE KEY FILE

root@choosefabric:/var/lib/nyzo/production# ls -a
. .. nickname trusted_entry_points

more /var/lib/nyzo/production/verifier_private_seed
more: stat of /var/lib/nyzo/production/verifier_private_seed failed: No such file or directory

Nodejoin Spam and Temp Queue size attack

By mitigating the memory and resource use of incoming nodejoin messages, v572 opens the door to a new kind of attack.
To enter the cycle, you have to enter the queue (the "nodes" file).
Since v572, to enter that queue you first have to pass through a size-limited temporary queue.
The size of this temporary queue is static, currently 1000.
The scarce resource needed to enter the real queue is no longer an IPv4 address; it is a slot in that temporary queue.

Several things gave me hints:

  • several new users complaining that their newly queued verifiers did not show on nyzo.co, even after days.
  • heavy nodejoin spam, either continuous or in bursts, sometimes targeting specific classes of verifiers.
  • such targeted verifiers (version xxx001, for instance) hardly showing any new verifiers

The extended logging from my NCFP-10 PR was aimed at collecting data, but also at running an optional extra script to automatically mitigate these attacks by rate-limiting them.

When a verifier is subject to nodejoin spam, its log looks like:

[1596735532.002 (2020-08-06 17:38:52.002 UTC)]: added new out-of-cycle node to queue: 6c31...076d
[1596735532.003 (2020-08-06 17:38:52.003 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.004 (2020-08-06 17:38:52.004 UTC)]: nodejoin_from 3.248.137.142 6c31...076d Launch9
[1596735532.023 (2020-08-06 17:38:52.023 UTC)]: added new out-of-cycle node to queue: 3c94...c24b
[1596735532.023 (2020-08-06 17:38:52.023 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.026 (2020-08-06 17:38:52.026 UTC)]: nodejoin_from 44.226.163.107 3c94...c24b WWW_BUY_COM 16
[1596735532.033 (2020-08-06 17:38:52.033 UTC)]: nodejoin_from 44.227.41.15 4cc5...7e91 New12 here
[1596735532.034 (2020-08-06 17:38:52.034 UTC)]: added new out-of-cycle node to queue: ecbf...c341
[1596735532.035 (2020-08-06 17:38:52.035 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: added new out-of-cycle node to queue: 91fc...b7ec
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: removed node from new out-of-cycle queue due to size
[1596735532.038 (2020-08-06 17:38:52.038 UTC)]: nodejoin_from 44.226.13.107 91fc...b7ec Foxwie10
[1596735532.049 (2020-08-06 17:38:52.049 UTC)]: nodejoin_from 35.183.163.76 ecbf...c341 ZBank16
[1596735532.061 (2020-08-06 17:38:52.061 UTC)]: added new out-of-cycle node to queue: db0e...f9bc
[1596735532.061 (2020-08-06 17:38:52.061 UTC)]: removed node from new out-of-cycle queue due to size

This goes on and on, at around 100 nodejoins per second.
The logs show the process is effective: a great many temp nodes do drop off the temp queue.
When the temp queue is full (and it clearly is on these occasions), every incoming nodejoin causes a random node to drop from the temp queue.
Send enough of them and you wipe out almost everyone else.

If you can borrow more than 1000 IPs, even for a short period of time (Amazon IPs, Alibaba, SOCKS proxies, botnets, some blockchains renting their nodes: renting 10k IPs is easy and cheap),
then you can flood the temp queue of the in-cycle nodes, push all the other new temp candidates out, and improve the chance that only yours remain.
Spam with temporary IPs, then spam with your new nodes' IPs: you enter the real queue, others don't.
Regularly spam nodejoins with borrowed IPs and you make it statistically harder for others to join.

This is not a very efficient attack on its own. It would need to be done at scale and continuously to be useful.

However, the logs show this does happen in the real world, and continuously (at least on some verifiers).
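
To quantify the wipe-out effect, here is a rough model of my own (simplifying assumption: the queue stays full at 1000 entries and each spammed nodejoin evicts one uniformly random entry): a given legitimate entry survives N spam messages with probability about (1 - 1/1000)^N.

```java
// Rough model of the random-eviction dynamic: each spammed nodejoin evicts one
// uniformly random entry from a full queue of queueSize slots, so a given
// legitimate entry survives spamMessages evictions with probability
// (1 - 1/queueSize)^spamMessages. Numbers are illustrative, not measured.
public class TempQueueSurvival {

    public static double survivalProbability(int queueSize, long spamMessages) {
        return Math.pow(1.0 - 1.0 / queueSize, spamMessages);
    }
}
```

At the observed ~100 nodejoins per second, one hour of spam is 360,000 messages, which drives the survival probability of any legitimate entry to essentially zero.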

Another sample from a node that already banned a lot of spamming IPs:
[1597137583.389 (2020-08-11 09:19:43.389 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.390 (2020-08-11 09:19:43.390 UTC)]: nodejoin_from 44.227.158.100 6ed0...ca7a NowFuture14
[1597137583.395 (2020-08-11 09:19:43.395 UTC)]: added new out-of-cycle node to queue: e37e...e3e4
[1597137583.396 (2020-08-11 09:19:43.396 UTC)]: nodejoin_from 99.79.142.3 e37e...e3e4 DoYouW0
[1597137583.411 (2020-08-11 09:19:43.411 UTC)]: nodejoin_from 50.116.12.77 8a1d...de82 feiya200_u
[1597137583.417 (2020-08-11 09:19:43.417 UTC)]: added new out-of-cycle node to queue: 8adb...5069
[1597137583.417 (2020-08-11 09:19:43.417 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: added new out-of-cycle node to queue: f968...6fc3
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: removed node from new out-of-cycle queue due to size
[1597137583.420 (2020-08-11 09:19:43.420 UTC)]: nodejoin_from 99.79.129.23 8adb...5069 PACITES4
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: added new out-of-cycle node to queue: 0a40...6494
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: nodejoin_from 99.79.175.115 f968...6fc3 noNation19
[1597137583.422 (2020-08-11 09:19:43.422 UTC)]: nodejoin_from 52.39.118.45 0a40...6494 Detail POD 17
[1597137583.424 (2020-08-11 09:19:43.424 UTC)]: added new out-of-cycle node to queue: 0533...5c9f
[1597137583.424 (2020-08-11 09:19:43.424 UTC)]: removed node from new out-of-cycle queue due to size

The temp queue is full and nodes are dropping. This has been happening for weeks, if not longer. This node can hardly see real new candidates.
The bottleneck is the temp queue size.

In addition to the temp-queue-size "attack", the heavy nodejoin spam itself is an effective DoS attack.
It now seems to be targeted at low-mesh-count verifiers and/or verifiers running an xxx001 version.
In-cycle verifiers under attack experience high CPU load and RAM usage, can start swapping, and end up in a state where they can no longer follow the cycle.
Some technical users had to modify the code or use an xxx001 version to get extended IP logs of the attacks, identify the sources, and ban them via iptables (mostly Amazon EC2 IPs).
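
Beyond iptables, excess nodejoins could be dropped per IP before any queue or memory work is done. This is an illustrative token-bucket sketch, not the actual NCFP-10 mitigation script; the burst size and refill rate are made-up numbers:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative per-IP token-bucket rate limiter for incoming nodejoins.
// Each IP may send a small burst, then is limited to a steady refill rate.
public class NodejoinRateLimiter {

    private static final double CAPACITY = 5.0;                            // burst size
    private static final double REFILL_PER_MS = 5.0 / (3600.0 * 1000.0);   // ~5 per hour

    private static class Bucket {
        double tokens = CAPACITY;
        long lastRefill;
        Bucket(long now) { lastRefill = now; }
    }

    private final Map<String, Bucket> buckets = new HashMap<>();

    // Returns true if the nodejoin from this IP should be processed.
    public synchronized boolean allow(String ip, long nowMillis) {
        Bucket bucket = buckets.computeIfAbsent(ip, k -> new Bucket(nowMillis));
        bucket.tokens = Math.min(CAPACITY,
                bucket.tokens + (nowMillis - bucket.lastRefill) * REFILL_PER_MS);
        bucket.lastRefill = nowMillis;
        if (bucket.tokens >= 1.0) {
            bucket.tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

A real implementation would also need to cap the map size (e.g. evict the oldest buckets), since the attacker controls how many distinct IPs appear.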

Private data returned by public functions

In the Node class, the private variables identifier and ipAddress are returned by reference from their getter functions:

https://github.com/n-y-z-o/nyzoVerifier/blob/master/src/main/java/co/nyzo/verifier/Node.java#L31

public byte[] getIdentifier() {
  return identifier;
}
...
public byte[] getIpAddress() {
  return ipAddress;
}

As a result, these private variables can be altered by callers once they obtain the reference. While this is a minor issue that does not currently translate into any undefined behavior anywhere it is called in the rest of the code, it might be good to fix it for the sake of best practices and to prevent potential future misbehavior.

This article describes the best practice in this case, which is to copy the data into a new array before returning it:

public byte[] getIdentifier() {
    byte[] copy = new byte[this.identifier.length];
    System.arraycopy(this.identifier, 0, copy, 0, copy.length);
    return copy;
}
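
The same defensive copy can be written with `Arrays.copyOf` from the standard library. This is a stripped-down stand-in class (named NodeCopyDemo to avoid confusion with the real Node) showing that the copy keeps callers from mutating the private field:

```java
import java.util.Arrays;

// Minimal demonstration that a defensive copy protects a private byte[] field.
public class NodeCopyDemo {

    // Stand-in for the real Node's private identifier field.
    private final byte[] identifier = new byte[] {1, 2, 3};

    // Defensive copy: callers get a snapshot, not the internal reference.
    public byte[] getIdentifier() {
        return Arrays.copyOf(identifier, identifier.length);
    }
}
```

Mutating the returned array no longer affects the node's internal state.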

Don't send removal vote if it is empty

Currently, in-cycle verifiers are sending out a lot of empty removal votes (~7500 messages per hour), which wastes some CPU (for verifying the messages) and slows down connections.

https://github.com/n-y-z-o/nyzoVerifier/blob/master/src/main/java/co/nyzo/verifier/VerifierPerformanceManager.java#L218

Suggestion:

if (vote.getIdentifiers().size() > 0) {
    // Send the messages.
    int numberOfMessages = 0;
    Message message = new Message(MessageType.VerifierRemovalVote39, vote);
    for (Node node : mesh) {
        ByteBuffer ipAddress = ByteBuffer.wrap(node.getIpAddress());
        if (numberOfMessages < messagesPerIteration &&
                BlockManager.verifierInCurrentCycle(ByteBuffer.wrap(node.getIdentifier())) &&
                voteMessageIpToTimestampMap.getOrDefault(ipAddress, Long.MAX_VALUE) <= cutoffTimestamp) {

            voteMessageIpToTimestampMap.put(ipAddress, System.currentTimeMillis());
            numberOfMessages++;
            Message.fetch(node, message, null);
        }
    }
}

Stats from 1 in-cycle verifier:

   4356  NodeJoinV2_43 (49%)
   3361  VerifierRemovalVote39 (38%)
    458  NewBlock9
    367  MissingBlockRequest25
    168  BlockWithVotesRequest37
     39  BlockRequest11
     17  StatusRequest17
      3  MeshRequest15
      2  MissingBlockVoteRequest23

Testnet can't send nyzo

Hello, I'm trying to launch my own testnet by following an old Nyzo note: https://tech.nyzo.co/releaseNotes/v492.

I created two AWS EC2 instances and followed the instructions to start it.
For one instance the private key is : 0101010101010101-​0101010101010101-​0101010101010101-​0101010101010101
(key_80410g410g410g410g410g410g410g410g410g410g41CTrfoxsm
id__88H8W.TS2w6m_mbsbjQYoobaqND_7qgi6_dSz06S3U.tTSxdyJti
8a88e3dd7409f195-​fd52db2d3cba5d72-​ca6709bf1d94121b-​f3748801b40f6f5c)

I have no information on how to send nyzo to an address.

I started the Nyzo client using
sudo java -jar build/libs/nyzoVerifier-1.0.jar co.nyzo.verifier.client.Client

The CC command gives this result:

╔══════════════════════════╦═══╦═════════════════════╗
║ block height             ║   ║ 3                   ║
║ coins in system          ║ + ║ ∩100,000,000.000000 ║
║ locked account sum       ║ - ║ ∩100,000,000.000000 ║
║ unlock threshold         ║ + ║ ∩0.000000           ║
║ unlock transfer sum      ║ - ║ ∩0.000000           ║
║ seed account balance     ║ - ║ ∩0.000000           ║
║ transfer account balance ║ - ║ ∩0.000000           ║
║ cycle account balance    ║ - ║ ∩0.000000           ║
║ coins in circulation     ║ = ║ ∩0.000000           ║
╚══════════════════════════╩═══╩═════════════════════╝

I tried to use the ST command to send a transaction with these parameters:

sender key : key_80410g410g410g410g410g410g410g410g410g410g41CTrfoxsm 
receiver ID: id__84u4w_Ge._.0B7yovgQ.pLuytm8XXBak.NoNt7w~B_qQFzyzrCfQ
sender data: test
amount: 1000

But I got an error from my two verifiers:

transaction not accepted by 4cf7...1c0b: There was a problem and your transaction was not accepted by the system (error="This sender was not found in the system."). To protect yourself against possible coin theft, please wait to resubmit this transaction. Refer to the Nyzo white paper for full details on why this is necessary, how long you need to wait, and to understand how Nyzo provides stronger protection than other blockchains against this type of potential vulnerability.
transaction not accepted by 8a88...6f5c: There was a problem and your transaction was not accepted by the system (error="This sender was not found in the system."). To protect yourself against possible coin theft, please wait to resubmit this transaction. Refer to the Nyzo white paper for full details on why this is necessary, how long you need to wait, and to understand how Nyzo provides stronger protection than other blockchains against this type of potential vulnerability.
waiting for block...

What is the way to make a transaction in a testnet?
What kind of transaction do I need to put in the genesis block to have unlocked nyzo? :)

Stuck verifier due to bad block stopped 3 different sentinels; verifier dropped out just now

holo6 just dropped out:
*** left at block 16025645, cycle length: 2574
verifier version 607
The VPS had no issues; CPU load was <30%.
At block i_016023877, which was corrupt, the verifier got stuck, and all 3 sentinels that tracked it (each sentinel had 10-20 verifiers in total) got stuck at the same block. holo6 then dropped out some 3-4 hours later, as none of the 3 sentinels protected it.

holo6 score 6 hours after the drop (the score was perfect before; it dropped only a bit, as the verifier was stuck for ~4h before dropping out):
holo6 -137532 360 -137574 180 -137520 360 -137544 360 -137874 360 -148116 0 -148106 0 -148116 0

I will post the sentinel logs, the holo6 log, and the block that does not match the hash of the others. I do not want to presume anything, but how can a perfectly fine verifier that had no issues over the past year suddenly get a bad block, and on top of that, how can that bad block kill three sentinels that tracked holo6?
The sentinel versions were 610, 612, and a third that I had updated, so I am not sure which version it was, but it was 608+, maybe even 612+.

All 3 sentinels had to be cleaned of all blocks and resynced to get working again.

For holo6 there is a log plus the block that was malformed; only sentinel2 has the log from when it got stuck, for the rest I was too late to save the logs.

nyzo.zip

Potential migration issue for node IP changes

Summary: Migrating from node A (old) to node B (new) using the same private key, when node A is in the cycle, leads to the node being dropped from the cycle

Severity: Medium-high (existing verifiers are blocked from hardware/cloud provider updates at the risk of being dropped from cycle)

In the white paper:

Only one verifier is allowed to be run at each IPv4 address. As nodes take very little computational power, a single system could otherwise run many instances of the Nyzo software and take a disproportionate share of transaction fees. This will prevent some users from running Nyzo verifiers in situations with shared public IP addresses (dorms, offices), but that is an acceptable limitation to ensure fairness in transaction verification. Also, this limitation does not prevent multiple Nyzo clients at an address.

Two mechanisms are in place to enforce the IPv4 address restriction. First, in the list of verifiers waiting to join the verification cycle as a new verifier, any verifier changes at an address cause that address to be demoted to the end of the queue. Second, when an existing verifier produces a block, a penalty score is applied if that verifier is not currently listed as active in the mesh. To prevent shuffling of a large set of verifiers over a smaller set of IP addresses, the verifier at an IP address is only allowed to be reassigned at a time interval slightly larger than the time interval of the current verification cycle length. So, attempts to circumvent the IPv4 address restriction will result in difficulty joining the cycle as a new verifier and will risk being removed from the cycle as an existing verifier.

The problem:

As node resource and bandwidth requirements increase, it may be prudent for users currently in the cycle to migrate to nodes with larger capacity/processing power. Unfortunately, this usually also changes the IP address. The passage above is unclear, and two interpretations are possible:

  1. Nodes in the cycle that change IP address while keeping the same private key should be allowed to do so no more than ~once per cycle (my reading of the above)
  2. Nodes in the cycle that change IP address while keeping the same private key are dropped, since the new IP address is blocked from the network for approximately one cycle

If (1), then this is a bug. If (2), then this is a design feature.

Now, replicating the bug (assuming my reading of 1 is correct):

  1. Have node A (old) with registered private key and nickname operating in the cycle
  2. Set up node B (new) from scratch with instructions from github, without starting the node yet
  3. Stop the nyzo_verifier instance at node A - the node turns red on the mesh page
  4. Transfer the private key (of node A) and nickname to node B, and start the nyzo_verifier
  5. Wait a few minutes - the node remains red on the mesh page

The following output is generated in the logs (info hidden by +++):


nickname: +++
version: 476
ID: +++
mesh: 0 active, 536 inactive
cycle length: 301
transactions: 0
retention edge: -1
trailing edge: -1
frozen edge: 892692 (ajmo2-1)
open edge: 892692
blocks transmitted/created: 0/0
votes requested: 0
new timestamp: 2604
old timestamp: 1543045657578
blocks: 14: 0,[892673,892692]
balance lists: 3(G=-,r=-,f=-)


  6. Shut down node B and start the nyzo_verifier instance at node A - the node becomes operational on the mesh page

Is there a resolution or is this "working as designed"?

nyzoVerifier stuck at the same height

The nyzoVerifier (version 524, now updated to the latest, 526) running on 4.4.0-150-generic #176-Ubuntu has been running for over 3 days and is still indicated in yellow (tracking issue) on the mesh page. The log file shows the following lines repeating endlessly:

requesting block with votes for height 3341506, interval=2205
trying to fetch BlockWithVotesRequest37 from 884c...3b15
requesting block with votes for height 3341506, interval=2107
trying to fetch BlockWithVotesRequest37 from Aopuuu03
requesting block with votes for height 3341506, interval=2114
trying to fetch BlockWithVotesRequest37 from Nget1756
waiting for message queue to clear from thread [Verifier-mainLoop], size is 1

The "height" in the log seems stuck at the same value 3341506.
What could be causing this issue? How to fix it? If known, a suggestion to add more helpful message to the log.

Any RPC API docs?

Is there a RESTful interface document or a JSON-RPC interface document? There are many interfaces I can't find, such as getting a balance from an address (id__...), getting block details from a block height, etc.

NTTP Suggestion: prefilled nyzostring including amount

The current prefilled nyzostrings ("pre_") encode "recipient" and "sender data", but no amount.

For most uses, it makes sense to give someone a string that also embeds the amount:
paying for a product or a service, micropay with a minimum amount...

  • Could it be possible to evolve current "pre_" nyzostring specs to also embed an amount?
  • If not, can we spec another prefix for such a nyzostring?

Eager for feedback before proposing a PR, unless you prefer to do it yourself.
To be useful, the nyzo.co wallet would have to recognize the new string just as it recognizes a "pre_" string in place of a recipient.

Deployment on an Azure VM

I have a verifier working in AWS. I am trying to get one running in Azure. I have followed the configuration steps, but the verifier never shows up in the mesh. I have IP rules configured. Is there a log that may help?

JSONRPC api got an error: java.lang.NullPointerException

The 'transactionconfirmed' method of the JSON-RPC API returned an error: java.lang.NullPointerException

Is there something wrong with my code?

curl -X POST http://127.0.0.0:30288/jsonrpc -H 'Content-type:application/json' -d '{"jsonrpc":"2.0","method" :"transactionconfirmed","params" :{"tx":"53deda5c5e53194aae9eb7de7bedc617a7f86d7c91fb49b216b8e0e9d1514526ba71fcfcf6c835b55f3cabc7dc68d82eae0e3ddedfc4dca9b51a33cd7a0a0b00"},"id":1}'

When will NTTP-1 be implemented

I don't want my private key to be stolen by cloud service administrators or hackers.
Is there any progress on the NTTP-1 implementation price and timeline?

Add support for vote tx in nyzostrings

Nyzo Strings currently define:

  • Micropay("pay_")
  • PrefilledData("pre_")
  • PrivateSeed("key_")
  • PublicIdentifier("id__")
  • Signature("sig_")
  • Transaction("tx__")

tx__ is quite useful for signing a transaction locally and then sending it for inclusion.
The client API has a "forward transaction" endpoint for that.

However, type-4 txs (votes) do not follow the same data structure as regular transactions and can't be encoded the same way.

Thus, we'd like to suggest a new nyzostring type - "tx4_", for instance - that would represent a signed type-4 vote tx.
The "forward transaction" endpoint could then handle both tx__ and tx4_.

Verifiers are getting hard to sync up

Recently, I have seen many verifiers get dropped because they lost sync and never got back to a normal state (despite the owners trying to reboot the machine, spin up a new VPS, pray...). After digging for a while, I found that the reason those verifiers kept losing their synced state is a lack of block votes. We can increase the fetching speed here:
https://github.com/n-y-z-o/nyzoVerifier/blob/master/src/main/java/co/nyzo/verifier/Verifier.java#L526-L528
Those timing numbers are no longer suitable for the current mesh size.

This is a trade-off between performance and stability, but I guess we have to choose stability. Currently, I'm using the numbers below (1/4 of the originals) and it works fine. CPU usage spikes a little after starting the nyzo service (for under 20 seconds).

frozenEdge.getVerificationTimestamp() < System.currentTimeMillis() - 7500L &&
                            lastVoteRequestTimestamp < System.currentTimeMillis() - 1000L

Tunable parameters might be nice to have.
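
A minimal sketch of how these two timeouts could be made tunable, assuming a plain properties file; the property keys and class name here are hypothetical, and nyzoVerifier's own preferences mechanism would likely be the natural place for this instead:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative sketch: read the two vote-timing values from a properties file,
// falling back to the values shown in the snippet above when absent or invalid.
public class VoteTimingConfig {

    private static final long DEFAULT_VERIFICATION_AGE_MILLIS = 7500L;
    private static final long DEFAULT_VOTE_REQUEST_INTERVAL_MILLIS = 1000L;

    // Load a properties file; a missing or unreadable file means all defaults apply.
    public static Properties load(String path) {
        Properties properties = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            properties.load(in);
        } catch (IOException ignored) {
            // fall through to defaults
        }
        return properties;
    }

    private static long longValue(Properties properties, String key, long defaultValue) {
        try {
            return Long.parseLong(properties.getProperty(key, Long.toString(defaultValue)).trim());
        } catch (NumberFormatException e) {
            return defaultValue;
        }
    }

    public static long verificationAgeMillis(Properties properties) {
        return longValue(properties, "verification_age_millis", DEFAULT_VERIFICATION_AGE_MILLIS);
    }

    public static long voteRequestIntervalMillis(Properties properties) {
        return longValue(properties, "vote_request_interval_millis", DEFAULT_VOTE_REQUEST_INTERVAL_MILLIS);
    }
}
```

Operators could then tune the numbers per machine without patching and rebuilding the verifier.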

Stuck Verifier

This seems like a different issue; there is no red outline on nyzo.co.

The verifier was showing as yellow (tracking issue), stuck at block 16148619.
v611002

^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
trying to fetch BlockRequest11 from SpaceX
trying to fetch MissingBlockVoteRequest23 from CITIZEN12
trying to fetch MissingBlockVoteRequest23 from S-Dedicated-2
trying to fetch MissingBlockVoteRequest23 from Yearbooks
trying to fetch MissingBlockVoteRequest23 from guru
trying to fetch MissingBlockVoteRequest23 from ProMac11
trying to fetch MissingBlockVoteRequest23 from iPhone
trying to fetch MissingBlockVoteRequest23 from KOyou 9
trying to fetch MissingBlockVoteRequest23 from Wendell
trying to fetch MissingBlockVoteRequest23 from Dailreport5
trying to fetch MissingBlockVoteRequest23 from MAMBA1
requesting block with votes for height 16148619, interval=4662
trying to fetch BlockWithVotesRequest37 from gunray5
nodejoin_from 85.244.93.155 2756...c1b5 joelpandrade
trying to fetch BlockRequest11 from ff
nodejoin_from 51.89.123.85 4c3e...0274 barbora
trying to fetch BlockRequest11 from CCC 19
trying to fetch BlockRequest11 from ff
trying to fetch MissingBlockVoteRequest23 from Do_You_Try3
trying to fetch MissingBlockVoteRequest23 from gateway011
trying to fetch MissingBlockVoteRequest23 from Togepi
trying to fetch MissingBlockVoteRequest23 from DoYouW1
trying to fetch MissingBlockVoteRequest23 from Signcycle 16
trying to fetch MissingBlockVoteRequest23 from CrazyB01
trying to fetch MissingBlockVoteRequest23 from Glasses
trying to fetch MissingBlockVoteRequest23 from MAMBA1
trying to fetch MissingBlockVoteRequest23 from barbora28Fjn
trying to fetch MissingBlockVoteRequest23 from mCAFE 20
requesting block with votes for height 16148619, interval=4065
trying to fetch BlockWithVotesRequest37 from Alex Senemar 15
waiting for message queue to clear from thread [Verifier-mainLoop], size is 1
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
trying to fetch BlockRequest11 from GramT22
top verifier be87...d9df has 2394 votes with a cycle length of 2545 (94.1%)
added new out-of-cycle node to queue: 16d9...4a64
nodejoin_from 158.69.33.13 16d9...4a64 noway78
trying to fetch BlockRequest11 from Res0ec21
added new out-of-cycle node to queue: ef33...95b3
nodejoin_from 51.161.50.73 ef33...95b3 bmw3251010
trying to fetch BlockRequest11 from LASNA 22
trying to fetch MissingBlockVoteRequest23 from Foxwie7
trying to fetch MissingBlockVoteRequest23 from Cory
trying to fetch MissingBlockVoteRequest23 from ovyzo11-7
trying to fetch MissingBlockVoteRequest23 from Nyzo rules
trying to fetch MissingBlockVoteRequest23 from VUL1RT
trying to fetch MissingBlockVoteRequest23 from Alita Angel
trying to fetch MissingBlockVoteRequest23 from bull0002
trying to fetch MissingBlockVoteRequest23 from Gaara988
trying to fetch MissingBlockVoteRequest23 from Lucky24111
trying to fetch MissingBlockVoteRequest23 from Mopsu01
requesting block with votes for height 16148619, interval=4044
trying to fetch BlockWithVotesRequest37 from Andy
trying to fetch BlockRequest11 from Clients
added new out-of-cycle node to queue: 5c1e...7186
nodejoin_from 51.89.123.105 5c1e...7186 barbora
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
trying to fetch BlockRequest11 from eNagaina5346
top verifier be87...d9df has 2398 votes with a cycle length of 2545 (94.2%)
trying to fetch MissingBlockVoteRequest23 from mototoya000
trying to fetch MissingBlockVoteRequest23 from Yuge_Chungus
trying to fetch MissingBlockVoteRequest23 from rumba6
trying to fetch MissingBlockVoteRequest23 from TOBE19
trying to fetch MissingBlockVoteRequest23 from Gogirl A19
trying to fetch MissingBlockVoteRequest23 from HELLO63
trying to fetch MissingBlockVoteRequest23 from Sakura31356
trying to fetch MissingBlockVoteRequest23 from Keep18840
trying to fetch MissingBlockVoteRequest23 from Artboy11
trying to fetch MissingBlockVoteRequest23 from noway74
requesting block with votes for height 16148619, interval=4612
trying to fetch BlockWithVotesRequest37 from pvip150
nodejoin_from 3.232.136.94 0991...c1d0 Alphabet41
waiting for message queue to clear from thread [Verifier-mainLoop], size is 1
nodejoin_from 54.38.34.75 e7cd...a411 Nescafe15
trying to fetch BlockRequest11 from Hilarious50492
nodejoin_from 135.125.190.4 3faf...7416 Alice
trying to fetch BlockRequest11 from aaa004
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
trying to fetch BlockRequest11 from Obama6
trying to fetch MissingBlockVoteRequest23 from mrle310
trying to fetch MissingBlockVoteRequest23 from *N*pass15763
trying to fetch MissingBlockVoteRequest23 from Gogirl A19
trying to fetch MissingBlockVoteRequest23 from MCR2023
trying to fetch MissingBlockVoteRequest23 from Binance
trying to fetch MissingBlockVoteRequest23 from Sam_FTX 11
trying to fetch MissingBlockVoteRequest23 from Blockcollider14
trying to fetch MissingBlockVoteRequest23 from flower 11
trying to fetch MissingBlockVoteRequest23 from Input14
trying to fetch MissingBlockVoteRequest23 from BelongB16
requesting block with votes for height 16148619, interval=4785
trying to fetch BlockWithVotesRequest37 from mototoya008
trying to fetch BlockRequest11 from Fat Man
trying to fetch BlockRequest11 from Charmander
top verifier be87...d9df has 2397 votes with a cycle length of 2545 (94.2%)
added new out-of-cycle node to queue: f72d...892c
removed node from new out-of-cycle queue due to size
nodejoin_from 51.161.60.76 f72d...892c pvip108
nodejoin_from 18.191.26.157 29d8...e55e PigCash40
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
nodejoin_from 116.203.33.209 3aa8...f753 OhHaiNyzo-1
trying to fetch BlockRequest11 from bull0002
trying to fetch MissingBlockVoteRequest23 from Belinda
trying to fetch MissingBlockVoteRequest23 from Lacia05
trying to fetch MissingBlockVoteRequest23 from f5
trying to fetch MissingBlockVoteRequest23 from Fushimi Inari
trying to fetch MissingBlockVoteRequest23 from elasticsearch012
trying to fetch MissingBlockVoteRequest23 from Y20Y21
trying to fetch MissingBlockVoteRequest23 from IIHLOHA11
trying to fetch MissingBlockVoteRequest23 from Giacomo20
trying to fetch MissingBlockVoteRequest23 from SBLY3
trying to fetch MissingBlockVoteRequest23 from snjor139
requesting block with votes for height 16148619, interval=4169
trying to fetch BlockWithVotesRequest37 from rumba12
waiting for message queue to clear from thread [Verifier-mainLoop], size is 2
nodejoin_from 52.214.208.238 26b8...eb3a Forbole2
trying to fetch BlockRequest11 from lucyLuky15
trying to fetch BlockRequest11 from Fireplug
added new out-of-cycle node to queue: 1005...b6d0
nodejoin_from 142.44.145.117 1005...b6d0 automotives85
trying to fetch BlockRequest11 from Shameful41611
trying to fetch MissingBlockVoteRequest23 from apk
trying to fetch MissingBlockVoteRequest23 from Mask Jame 3
trying to fetch MissingBlockVoteRequest23 from Only02
trying to fetch MissingBlockVoteRequest23 from NRV-SC17
trying to fetch MissingBlockVoteRequest23 from Omyback28201
trying to fetch MissingBlockVoteRequest23 from ajmo1
trying to fetch MissingBlockVoteRequest23 from rFirst
trying to fetch MissingBlockVoteRequest23 from Nash01
trying to fetch MissingBlockVoteRequest23 from rmb
trying to fetch MissingBlockVoteRequest23 from Sam_FTX 8
requesting block with votes for height 16148619, interval=4221
trying to fetch BlockWithVotesRequest37 from Arionum04
waiting for message queue to clear from thread [Verifier-mainLoop], size is 3
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
trying to fetch BlockRequest11 from CPRCP7
top verifier be87...d9df has 2397 votes with a cycle length of 2545 (94.2%)
added new out-of-cycle node to queue: ff30...8af8
removed node from new out-of-cycle queue due to size
nodejoin_from 51.161.28.69 ff30...8af8 burr170
trying to fetch BlockRequest11 from RICKY37
added new out-of-cycle node to queue: 9fd7...141b
removed node from new out-of-cycle queue due to size
nodejoin_from 51.89.123.109 9fd7...141b barbora
trying to fetch BlockRequest11 from Elon Musk
added new out-of-cycle node to queue: 846e...2293
nodejoin_from 144.217.196.57 846e...2293 noway58
trying to fetch MissingBlockVoteRequest23 from A1200O4898
trying to fetch MissingBlockVoteRequest23 from boogie315208
trying to fetch MissingBlockVoteRequest23 from CITIZEN1
trying to fetch MissingBlockVoteRequest23 from wdc-v1
trying to fetch MissingBlockVoteRequest23 from Jack-LA14
trying to fetch MissingBlockVoteRequest23 from Getting_12
trying to fetch MissingBlockVoteRequest23 from FBG26505
trying to fetch MissingBlockVoteRequest23 from 3359
trying to fetch MissingBlockVoteRequest23 from win10
trying to fetch MissingBlockVoteRequest23 from 11Ncode14659
requesting block with votes for height 16148619, interval=4295
trying to fetch BlockWithVotesRequest37 from Randolph Scott 3
waiting for message queue to clear from thread [Verifier-mainLoop], size is 1
trying to fetch BlockRequest11 from NvGirl14435
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
nodejoin_from 137.184.235.65 d774...ad5c barrick
trying to fetch BlockRequest11 from KingSuper16
added new out-of-cycle node to queue: 120b...8434
nodejoin_from 66.70.130.67 120b...8434 DeadKHBEf
trying to fetch BlockRequest11 from sm1gunray8
top verifier be87...d9df has 2397 votes with a cycle length of 2545 (94.2%)
trying to fetch MissingBlockVoteRequest23 from IOI-Again
trying to fetch MissingBlockVoteRequest23 from Harmony11
trying to fetch MissingBlockVoteRequest23 from LASNA 83
trying to fetch MissingBlockVoteRequest23 from CARplay18
trying to fetch MissingBlockVoteRequest23 from Cinco de Cuatro
trying to fetch MissingBlockVoteRequest23 from evropovkirilovfilipas
trying to fetch MissingBlockVoteRequest23 from MOyAde4
trying to fetch MissingBlockVoteRequest23 from gunray5
trying to fetch MissingBlockVoteRequest23 from inco052
trying to fetch MissingBlockVoteRequest23 from xOlife6342
requesting block with votes for height 16148619, interval=4085
trying to fetch BlockWithVotesRequest37 from Hard Time
nodejoin_from 46.137.39.75 23dd...765e Polychain3
nodejoin_from 145.239.82.153 8c18...6a7f DEALER12
waiting for message queue to clear from thread [Verifier-mainLoop], size is 3
trying to fetch BlockRequest11 from Hulama19
trying to fetch BlockRequest11 from aq5293
added 51.15.86.249 to dynamic whitelist
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575
added new out-of-cycle node to queue: ba1f...1a05
nodejoin_from 144.217.204.29 ba1f...1a05 Nutka130
added new out-of-cycle node to queue: 7ed5...cafd
nodejoin_from 51.89.89.7 7ed5...cafd gogo57411
added new out-of-cycle node to queue: 6845...2099
removed node from new out-of-cycle queue due to size
nodejoin_from 51.161.50.77 6845...2099 bmw3251014
added new out-of-cycle node to queue: e1cb...3f0f
removed node from new out-of-cycle queue due to size
nodejoin_from 158.69.33.17 e1cb...3f0f noway82
trying to fetch BlockRequest11 from Armored1595
trying to fetch MissingBlockVoteRequest23 from Nash03
trying to fetch MissingBlockVoteRequest23 from log002
trying to fetch MissingBlockVoteRequest23 from BURGERKING14
trying to fetch MissingBlockVoteRequest23 from videoMAC11
trying to fetch MissingBlockVoteRequest23 from MAMBA3
trying to fetch MissingBlockVoteRequest23 from 10Dollar13
trying to fetch MissingBlockVoteRequest23 from NvGirl14435
trying to fetch MissingBlockVoteRequest23 from Cool01
trying to fetch MissingBlockVoteRequest23 from EasyD3
trying to fetch MissingBlockVoteRequest23 from z0rn-rac-11453
requesting block with votes for height 16148619, interval=6145
trying to fetch BlockWithVotesRequest37 from Richkey13
nodejoin_from 3.97.146.179 6ccf...39e6 KalpaTech18
added new out-of-cycle node to queue: b104...787e
removed node from new out-of-cycle queue due to size
nodejoin_from 54.213.210.50 b104...787e Coverlet23
waiting for message queue to clear from thread [Verifier-mainLoop], size is 3
trying to fetch BlockRequest11 from but_ton22
top verifier be87...d9df has 2397 votes with a cycle length of 2545 (94.2%)
nodejoin_from 94.23.161.249 c037...3448 corr03
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 16148619
broadcasting message: BlockVote19 to 2575

and so on...

When in this state, the verifier won't move forward by itself; it needs a block resync and a restart.
Also note the "removed node from new out-of-cycle queue due to size" lines.

v601 sentinel isn't working

Numerous people have reported that the latest sentinel versions aren't working properly, which could be the cause of recent cycle dropouts.

I had the same tracking problem with sentinels on the newest version. All is good after a downgrade. Plenty of power, and multiple always-track verifiers per sentinel.

I can second that sentinels with v601 fall behind in some cases. I don't know the reason yet, because similar setups have different results.

I found my sentinel had stopped freezing blocks after I upgraded it to v601. Unfortunately it was supposed to be protecting one of my friend's verifiers and it dropped out of the cycle. I downgraded my Sentinel to v598 and it's been working fine so far.

I have just verified this myself on a fresh sentinel installation I created yesterday, sentinel gets stuck on a certain block and doesn't move. Downgrading to v587 fixes the issue.

Here is a log:

froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
waiting for message queue to clear from thread [Thread-8], size is 1
requesting block with votes for height 8947804
[1599462571.662 (2020-09-07 07:09:31.662 UTC)]: trying to fetch BlockWithVotesRequest37 from e168...3ea5
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
[1599462571.751 (2020-09-07 07:09:31.751 UTC)]: block-with-votes response is [BlockWithVotesResponse(block=[Block: v=2, height=8947804, hash=bf18...b752, id=a132...726e], votes=0)]
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
requesting block with votes for height 8947804
[1599462574.716 (2020-09-07 07:09:34.716 UTC)]: trying to fetch BlockWithVotesRequest37 from acc0...f75a
[1599462574.861 (2020-09-07 07:09:34.861 UTC)]: block-with-votes response is [BlockWithVotesResponse(block=[Block: v=2, height=8947804, hash=bf18...b752, id=a132...726e], votes=0)]
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%
froze block [Block: v=2, height=8947803, hash=7d87...dace, id=5228...5e93], efficiency: 11.0%

Mitigate the impact of ISPs and owners of large IP blocks

A single entity owns 26% of the queue IPs.
46% of the queue is from OVH IPs, thanks to their "no recurring IP fees".
The huge majority of these IPs consists of "fake" red verifiers.

The current queue is run by "Proof of Quantity", not "Proof of Diversity".

The following is more a reflection and request for comments than a ready-to-be-implemented proposal.

https://github.com/Open-Nyzo/Nyzo-Q

The latest scoring method, "linear_ip_lottery2", is the most mature, but would still need more evaluation before being implemented.

What does 'type' mean in transaction body?

{
    "start_timestamp": 1618225693000, 
    "hash": "20db858edce54e6345303994eec74f02a77d3f16caa1d7787fd71cf33064a589", 
    "transactions": [
        {
            "fee": 867856, 
            "sender": "12d454a69523f739-eb5eb71c7deb8701-1804df336ae0e2c1-9e0b24a636683e31", 
            "receiver_nyzo_string": "id__81bkmarm8_tXYTYV77VIyN4p1d-RrL3zNqWb9apUr3WP-ys.NG-H", 
            "timestamp": 1618225694000, 
            "sender_nyzo_string": "id__81bkmarm8_tXYTYV77VIyN4p1d-RrL3zNqWb9apUr3WP-ys.NG-H", 
            "signature": "53deda5c5e53194aae9eb7de7bedc617a7f86d7c91fb49b216b8e0e9d1514526ba71fcfcf6c835b55f3cabc7dc68d82eae0e3ddedfc4dca9b51a33cd7a0a0b00", 
            "previous_hash_height": 0, 
            "sender_data": "", 
            "amount": 347142109, 
            "previous_block_hash": "bc4cca2a2a50a229256ae3f5b2b5cd49aa1df1e2d0192726c4bb41cdcea15364", 
            "type_enum": "seed", 
            "type": "0000000000000001", 
            "id": "803c581e05020f5c-d610d25a0ebd413a-768d7943fb73cfd6-0ddae8d66ab19b9d", 
            "receiver": "12d454a69523f739-eb5eb71c7deb8701-1804df336ae0e2c1-9e0b24a636683e31"
        }
    ], 
    "balance_list_hash": "f400305efb457c07d70a46ae2a5cb738de6c7d8bcca7bd4512e91f10bf46ec1e", 
    "verification_timestamp": 1618225701680, 
    "previous_block_hash": "be0dc3deb13dd7fd365f1431acde8d2eaa14bd3b714f508cce26bd29e2d2d597", 
    "height": 11632699
}

I called the 'block' method of the JSON-RPC API and got the transaction list, but there are many parameters in it that I don't really understand, e.g. type_enum and type. BTW, how can I get the result of a transaction from the transaction body?
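On the type field specifically, here is a hedged sketch: the numeric mapping below is drawn from my reading of the verifier's Transaction class and is an assumption, not official documentation, so check it against co/nyzo/verifier/Transaction.java before relying on it. It shows how the raw hex "type" value relates to the "type_enum" name in the block above.

```java
// Hedged sketch only: the constants below are assumed from the
// nyzoVerifier Transaction source, not verified documentation.
public class TransactionTypeDemo {

    static String typeName(long type) {
        if (type == 0) return "coin_generation";
        if (type == 1) return "seed";
        if (type == 2) return "standard";
        if (type == 3) return "cycle";
        if (type == 4) return "cycle_signature";
        return "unknown";
    }

    public static void main(String[] args) {
        // "type": "0000000000000001" from the block above, parsed as hex
        long type = Long.parseLong("0000000000000001", 16);
        System.out.println(typeName(type)); // prints "seed", matching "type_enum"
    }
}
```

So in the quoted block, type 0000000000000001 and type_enum "seed" are two views of the same field.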

Improve sentinel voting script

The sentinel voting script proved to be a useful tool for people with many in-cycle verifiers.

However, it's not fail-proof, and real-world use shows that, many times, a significant part of the votes issued that way did not make it into the chain.
Since there is no verification mechanism, the user gets no feedback on the missing votes, has to check by other means, and can't re-issue just the missing votes.

Other tools - like the Nyzocli mass-vote tool https://github.com/EggPool/NyzoCli/blob/master/MassVote.md - handle that in a more usable form, but require a one-time install of third-party code.

This is a proposal to improve the core sentinel voting script and make it more efficient and user-friendly.

Efficiency:
In sendSignatures, the script forges a signature for every managed verifier.

for (ManagedVerifier verifier : managedVerifiers) {

However, only in-cycle verifiers can vote for a cycle transaction.
On regular sentinels - which handle both in-cycle and out-of-cycle verifiers - this is a significant waste of transactions.
Since that function also has the cycleNodes list data, it should be straightforward to forge transactions for in-cycle verifiers only.
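A standalone sketch of the proposed change; ManagedVerifierStub and inCycleIds are hypothetical stand-ins for the sentinel's real ManagedVerifier objects and cycleNodes data, kept minimal so the loop change itself is visible:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class InCycleFilterSketch {

    // stand-in for the sentinel's ManagedVerifier (hypothetical, for illustration)
    static class ManagedVerifierStub {
        final String identifier;
        ManagedVerifierStub(String identifier) { this.identifier = identifier; }
    }

    public static void main(String[] args) {
        // stand-in for the identifiers found in the cycleNodes data
        Set<String> inCycleIds = new HashSet<>(Arrays.asList("aaaa", "bbbb"));
        List<ManagedVerifierStub> managedVerifiers = Arrays.asList(
                new ManagedVerifierStub("aaaa"),   // in cycle
                new ManagedVerifierStub("cccc"),   // out of cycle: skip
                new ManagedVerifierStub("bbbb"));  // in cycle

        int signaturesForged = 0;
        for (ManagedVerifierStub verifier : managedVerifiers) {
            // proposed change: only forge a vote for in-cycle verifiers
            if (inCycleIds.contains(verifier.identifier)) {
                signaturesForged++;
            }
        }
        System.out.println(signaturesForged); // 2 instead of 3
    }
}
```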

Effectiveness:

// Make a message to send the transaction to the next 5 verifiers in the cycle.

Every message is then sent to the next 5 verifiers. But the timestamp comes from ClientTransactionUtil.suggestedTransactionTimestamp(), which uses a 3-block offset.
// The offset is calculated based on the difference between the open edge and predicted consensus edge, with a

So, if the cycle moves as intended, only 2 blocks remain. With many messages, the cycle could move faster.
Keeping a 5-block window, it could be more effective to use +1 to +6 or +2 to +7 instead of +0 to +5.

Verify and retry:
Once the messages are sent - whether they actually went out or failed - the script considers the job done.
Messages can fail (a temporarily overloaded target), or the target may not be able to embed the transaction in its block.
As a result, some messages do not make it.

A failsafe could be the following, adjusting the whole workflow:

  • generate a hash list of votes to emit - from in-cycle verifiers only -
  • repeat:
    -- guess the timestamp, sign and forward these messages, store the expected blocks in the hash list
    -- wait for the cycle to freeze the last expected block
    -- fetch the expected blocks, remove from the hash list the votes that were embedded
  • until the list is empty
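The workflow above could be sketched as follows; the send / wait / fetch steps are simulated (exactly one vote "lands" per round), since only the retry control flow is being illustrated, not the real sentinel calls:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class VerifyAndRetrySketch {

    public static void main(String[] args) {
        // 1. hash list of votes to emit, from in-cycle verifiers only
        Set<String> pendingVotes = new HashSet<>(Arrays.asList("vote1", "vote2", "vote3"));

        int rounds = 0;
        while (!pendingVotes.isEmpty() && rounds < 100) { // hard cap as a safety net
            rounds++;
            // 2a. guess timestamps, sign and forward the remaining votes (simulated)
            // 2b. wait for the cycle to freeze the last expected block (simulated)
            // 2c. fetch the expected blocks and drop the votes that were embedded;
            //     here we pretend exactly one vote is embedded per round
            Iterator<String> iterator = pendingVotes.iterator();
            iterator.next();
            iterator.remove();
        }
        System.out.println("rounds=" + rounds + ", remaining=" + pendingVotes.size());
    }
}
```

The loop terminates as soon as the pending list is empty, so only the missing votes are ever re-issued.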

The current scattering of NCFP-3 votes shows that the current process is not reliable enough.
This would ease a lot of the voting pain users are experiencing, and relieve pressure on the Discord admins trying to motivate users to vote, and vote again, day after day.

In-queue node doesn't show on nyzo.co (consensus bootstrap response: null)

The in-queue node doesn't show on nyzo.co/queue.
Versions: 586, 587
"consensus bootstrap response: null" in nyzo-verifier-stdout.log

sending node-join messages to trusted entry point: [TrustedEntryPoint: host=verifier9.nyzo.co, port=9444]
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
unrecognized message type in updateNode(): Error65534
10 mesh responses pending after 2.0 wait
entering frozen-edge consensus process because open edge, 7695036, is 7695036 past frozen edge, 0 and cycleComplete=true
sending Bootstrap request to [TrustedEntryPoint: host=verifier0.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier1.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier2.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier3.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier4.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier5.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier6.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier7.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier8.nyzo.co, port=9444]
sending Bootstrap request to [TrustedEntryPoint: host=verifier9.nyzo.co, port=9444]
consensus bootstrap response: null

There is just i_000000000.nyzoblock in /var/lib/nyzo/production/blocks/individual/.
I have tried always_track_blockchain=1 and sudo wget -O /var/lib/nyzo/production/trusted_entry_points https://nyzo.co/trustedEntryPointsGenerator, but the node is still not on nyzo.co/queue.
Maybe a problem with BlockManager?
@n-y-z-o Can you check?

Potential bug - dereferencing null pointer

In the following code:

https://github.com/n-y-z-o/nyzoVerifier/blob/master/src/main/java/co/nyzo/verifier/Verifier.java#L448

Block frozenEdge = BlockManager.frozenBlockForHeight(frozenEdgeHeight);
if (frozenEdge.getVerificationTimestamp() <=  System.currentTimeMillis() - Block.minimumVerificationInterval &&  frozenEdge.getBlockHeight() < BlockManager.openEdgeHeight(false)) {
    extendBlock(frozenEdge);
}

frozenEdge is not tested against null before being dereferenced, although the implementation of frozenBlockForHeight suggests it may return null in some cases. Also, the return value of this function seems to be checked everywhere else in the code (example here); adding a null check here would also improve consistency.
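A runnable sketch of the suggested guard; Block and BlockManager are minimal stubs standing in for the real classes, so the null path can actually be exercised here:

```java
public class NullGuardSketch {

    // minimal stub of the real Block class, for illustration only
    static class Block {
        static final long minimumVerificationInterval = 1500L;
        long getVerificationTimestamp() { return 0L; }
        long getBlockHeight() { return 0L; }
    }

    // minimal stub of the real BlockManager, for illustration only
    static class BlockManager {
        // the real method may return null in some cases; simulate that here
        static Block frozenBlockForHeight(long height) { return null; }
        static long openEdgeHeight(boolean includeGenesis) { return 1L; }
    }

    static void extendBlock(Block block) { /* stub */ }

    public static void main(String[] args) {
        Block frozenEdge = BlockManager.frozenBlockForHeight(42L);
        // suggested fix: test against null before dereferencing
        if (frozenEdge != null
                && frozenEdge.getVerificationTimestamp()
                        <= System.currentTimeMillis() - Block.minimumVerificationInterval
                && frozenEdge.getBlockHeight() < BlockManager.openEdgeHeight(false)) {
            extendBlock(frozenEdge);
        }
        System.out.println("no NullPointerException");
    }
}
```

The added null check short-circuits the condition, so the two getters are never invoked on a null reference.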

403 error response from https://s3-us-west-2.amazonaws.com/nyzo/000001.nyzotransaction

Hi, I'm getting this error in the logs. Is this an issue, or is it OK?

java.io.IOException: Server returned HTTP response code: 403 for URL: https://s3-us-west-2.amazonaws.com/nyzo/000001.nyzotransaction
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1900)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:268)
at java.net.URL.openStream(URL.java:1057)
at co.nyzo.verifier.SeedTransactionManager.fetchFile(SeedTransactionManager.java:133)
at co.nyzo.verifier.SeedTransactionManager$1.run(SeedTransactionManager.java:77)
at java.lang.Thread.run(Thread.java:748)

The standard logs look like this...

requesting block with votes for height 4723359, interval=6858
trying to fetch BlockWithVotesRequest37 from Voyager
freezing block [Block:v=1,height=4723359,hash=d9d7...58e3,id=80f2...0e18] with standard mechanism
$$$$$ removing vote map of size 1409
cleaning up because block 4723359 was frozen
casting late vote for height 4723359
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 4723359
top verifier is null
requesting block with votes for height 4723360, interval=7175
trying to fetch BlockWithVotesRequest37 from koottum
freezing block [Block:v=1,height=4723360,hash=de3a...1681,id=3865...5936] with standard mechanism
$$$$$ removing vote map of size 1409
cleaning up because block 4723360 was frozen
casting late vote for height 4723360
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 4723360
top verifier is null
requesting block with votes for height 4723361, interval=7143
trying to fetch BlockWithVotesRequest37 from NDream1623
freezing block [Block:v=1,height=4723361,hash=a012...38ca,id=16bc...12f0] with standard mechanism
$$$$$ removing vote map of size 1405
cleaning up because block 4723361 was frozen
casting late vote for height 4723361
^^^^^^^^^^^^^^^^^^^^^ casting vote for height 4723361
top verifier is null

can't run nyzoVerifier

I try to run nyzoVerifier with 'java -jar -Xmx3G xxx/nyzoVerifier/build/libs/nyzoVerifier-1.0.jar', but it seems that nothing happens.

I tried to analyze what happened with "jstack -l pid",

and from the result, it seems to be blocking when generating the private key.

"main" #1 prio=5 os_prio=0 tid=0x00007f528804b800 nid=0x307f runnable [0x00007f528ee05000]
java.lang.Thread.State: RUNNABLE
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:255)
at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:424)
at sun.security.provider.NativePRNG$RandomIO.getMixRandom(NativePRNG.java:404)
- locked <0x00000000c051ed08> (a java.lang.Object)
at sun.security.provider.NativePRNG$RandomIO.implNextBytes(NativePRNG.java:534)
at sun.security.provider.NativePRNG$RandomIO.access$400(NativePRNG.java:331)
at sun.security.provider.NativePRNG$Blocking.engineNextBytes(NativePRNG.java:268)
at java.security.SecureRandom.nextBytes(SecureRandom.java:468)
at net.i2p.crypto.eddsa.KeyPairGenerator.generateKeyPair(KeyPairGenerator.java:75)
at java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:697)
at co.nyzo.verifier.KeyUtil.generateSeed(KeyUtil.java:130)
at co.nyzo.verifier.Verifier.loadPrivateSeed(Verifier.java:100)
- locked <0x00000000c04d1c18> (a java.lang.Class for co.nyzo.verifier.Verifier)
at co.nyzo.verifier.Verifier.<clinit>(Verifier.java:49)
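For reference, the trace blocks inside NativePRNG while reading /dev/random, which points at entropy starvation on the VPS rather than a verifier bug. A hedged sketch of one workaround, assuming a Linux JDK that offers the NativePRNGNonBlocking algorithm; installing haveged, or starting the JVM with -Djava.security.egd=file:/dev/./urandom, are common alternatives:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class EntropyDemo {

    public static void main(String[] args) {
        SecureRandom random;
        try {
            // non-blocking pool (/dev/urandom); avoids the stall seen in the trace
            random = SecureRandom.getInstance("NativePRNGNonBlocking");
        } catch (NoSuchAlgorithmException e) {
            random = new SecureRandom(); // fallback on platforms without it
        }
        byte[] seed = new byte[32];
        random.nextBytes(seed); // returns without waiting for entropy
        System.out.println("seed bytes: " + seed.length);
    }
}
```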

error: <class 'socket.error'>

Hello,
I got an error when I try to execute the command "supervisor reload".

The entire error is :

error: <class 'socket.error'>, [Errno 2] No such file or directory: file: /usr/lib/python2.7/socket.py line: 228

I'm running this into a ubuntu docker container.

Thanks

v484 issue after VPS restart

Updating to v484 is fine, but if a VPS on v484 is restarted (for upgrade, maintenance...), the time needed for the verifier to recover from yellow state to white is over 5 hours.
The verifier is very slow to connect to the mesh, with a rate of 2-3 connections per minute.

The Nyzo (v1) protocol has failed

It has come to light that a single operator is controlling a majority of the Nyzo cycle.
With a majority vote, they can use cycle funds as they want, block and kick other verifiers out of the cycle, etc.
As a result, Nyzo and its novel Proof of Diversity model has failed and is no longer secure.

Proof is available here: https://github.com/Open-Nyzo/Nyzo-clusters

And here are additional explanations courtesy of NyZoSy:

  • The largest cluster consists of 2,638 addresses.
  • Among them, 1,526 are currently in-cycle verifiers: 55.33% of the cycle.
  • Total sent to qTrade by this cluster from currently in-cycle verifiers: 4,408,062 Nyzos.
  • Total sent to qTrade by this cluster: 8,861,490 Nyzos.
  • That's close to half of all that was ever deposited to qTrade (20,409,262 Nyzos).

Other strange behaviors have been recorded over time, like spamming of messages, invalid blocks, and recently selective, targeted blacklisting.
Now another strange thing is going on: verifiers and sentinels are unable to restart unless the blocks are cleared, every time (meaning they are not productive for several cycles).

The CE version was not a miracle cure and did not fix everything. It is just a helper, if enough users run it and report the logs and issues.

But all this is nothing compared to the current situation.
These are just mosquito bites, distracting everyone from the fact that 55% of the cycle is controlled by a single entity.
Don't trust me, verify. The data is on GitHub; see the pinned message and run your own analysis.

55% of the cycle means, for instance, that the cycle funds are theirs.
It means 55% of the mined Nyzo are theirs.
It means they can mess with the protocol, refuse to answer certain verifiers, and freeze whoever they want.
It also means they can greatly influence which new verifiers enter the queue.

With Nyzo's narrative being diversity and decentralization, relying only on the cycle and cycle rules to enforce that diversity, the current situation is complete nonsense. It is proof that the current protocol and the current cycle have failed.
No one from that 55% of the cycle has shown up yet. Not a single user has come out offering help to deal with the connectivity issues related to their nodes, or asking how to mitigate the issue and how to help Nyzo long term.
The only explanation I can see is that this is a hostile and coordinated takeover, carried out over years, while milking everyone else.

Comments from the core devs would be greatly appreciated to help guide the Nyzo community and open doors to Nyzo v2 through the use of Nyzo's "sustainability feature" as described in the last pages of the original whitepaper: https://relay0.nyzo.co/staticWeb/nyzo.pdf

Nyzo verifier stops tracking the chain: "Verifier-mainLoop" java.lang.OutOfMemoryError: unable to create new native thread

Nyzo version 611, CE edition
High-speed 2-core VPS (KVM), 1 GB DDR4 RAM + 1 GB swap, Ubuntu 20.04, openjdk version "1.8.0_312"

I tried to change the Java verifier configuration from 3 GB to 1 GB and then 768 MB to limit memory use, to no avail. I also have more VPSes with the same provider and the same configuration, and they have no issue.

A node join or some spam makes the Nyzo Java process crash. The verifier still works, but CPU load after the crash is 100% on one core and no new block is being produced:

^^^^^^^^^^^^^^^^^^^^^ casting vote for height 15672008
broadcasting message: BlockVote19 to 2623
added new out-of-cycle node to queue: 781f...49d6
removed node from new out-of-cycle queue due to size
nodejoin_from 66.70.130.114 781f...49d6 Disgusted8uWfG
storing new vote, height=15672008, hash=c4b5...f134
freezing block [Block: v=2, height=15672008, hash=c4b5...f134, id=4f93...1820] with standard mechanism
after registering block [Block: v=2, height=15672008, hash=c4b5...f134, id=4f93...1820] in BlockchainMetricsManager(), cycleTransactionSum=∩15,147.169596, cycleLength=2584
BlockVoteManager: removing vote map of size 2583
cleaning up because block 15672008 was frozen
sending verifier-removal vote to node at IP: 157.90.168.189
top verifier 9365...7aab has 2427 votes with a cycle length of 2584 (93.9%)
added new out-of-cycle node to queue: ba29...5a52
removed node from new out-of-cycle queue due to size
nodejoin_from 3.80.0.193 ba29...5a52 Siva8
maximum cycle transaction amount=∩100000.000000, balance=∩68641817.666927
maximum cycle transaction amount=∩100000.000000, balance=∩68641817.666927
transmitting block [Block: v=2, height=15672009, hash=4dd4...abe7, id=18d8...f7fd]
broadcasting message: NewBlock9 to 2623

Exception in thread "Verifier-mainLoop" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at co.nyzo.verifier.Message.fetchTcp(Message.java:258)
at co.nyzo.verifier.Message.fetch(Message.java:199)
at co.nyzo.verifier.Message.broadcast(Message.java:148)
at co.nyzo.verifier.Verifier.verifierMain(Verifier.java:419)
at co.nyzo.verifier.Verifier.access$000(Verifier.java:23)
at co.nyzo.verifier.Verifier$1.run(Verifier.java:260)
at java.lang.Thread.run(Thread.java:748)

added new out-of-cycle node to queue: 032a...651e
removed node from new out-of-cycle queue due to size
nodejoin_from 3.96.79.172 032a...651e Sam_FTX 4
maximum cycle transaction amount=∩100000.000000, balance=∩68641817.666927
nodejoin_from 49.12.40.18 61e8...2367 Tik
maximum cycle transaction amount=∩100000.000000, balance=∩68641817.666927
added new out-of-cycle node to queue: 0c7d...4245
removed node from new out-of-cycle queue due to size
nodejoin_from 142.44.252.185 0c7d...4245 Nutka286

CORS policy API

This is a proposal to add an Access-Control-Allow-Origin: * header to the API endpoints. This would allow a wider variety of applications to use the API.

While a browser extension may be an exception to the rule in that it allows such requests to take place (and it is peculiar that it does), most modern browsers actively enforce the CORS policy, and local requests can't be made to the API endpoints.

On the right, the request from the Nyzo tip extension, which doesn't get blocked by CORS policy.
On the left, the request from a local index.html file, which does get blocked by the CORS policy.

I made sure to use the same XMLHttpRequest structure, copied over from the extension, but the results do differ.
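A minimal demonstration of the proposed header, using the JDK's built-in com.sun.net.httpserver server rather than nyzoVerifier's actual web server (which is not shown here); the endpoint and response body are placeholders:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CorsDemo {

    public static void main(String[] args) throws Exception {
        // bind to an ephemeral port for the demo
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "{}".getBytes(StandardCharsets.UTF_8);
            // the proposed addition: allow any origin to read the response
            exchange.getResponseHeaders().add("Access-Control-Allow-Origin", "*");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();

        // fetch the endpoint and print the CORS header the browser would check
        URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        System.out.println(connection.getHeaderField("Access-Control-Allow-Origin"));
        server.stop(0);
    }
}
```

With that header present, the local index.html request would no longer be blocked by the browser's CORS policy.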

[two screenshots, not shown: the extension request and the blocked local request]

Broken Sentinel voting

As reported numerous times by numerous users on the Nyzo Discord, sentinel voting doesn't work at all on the latest versions.
The latest known version that works is v587.

Can you fix voting so that verifier operators are able to cast their votes for pressing issues?

Potential performance optimization on winningNodeForCycleHash

Thanks for the nice algorithm!

Maybe a slight improvement:

if (ByteUtil.arraysAreEqual(ipHash, winningHash)) {

closerHash returns a hash after scanning all 3 input hashes.
The returned hash is then only used to compare against ipHash again.
It could be faster to return a boolean hash1IsCloserOrSame(reference, hash0, hash1) and thus avoid the extra hash comparison.

Another option would be to compute distance(reference, hash) only once per candidate, and store winningDistance along with winningHash.
That way, each step needs only one distance computation instead of two, hash0 being the winningHash computed over and over.
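A standalone sketch of that second option; compare() below is a hypothetical stand-in for whatever metric closerHash uses internally, the point being a single comparison per candidate with no extra equality pass on the returned hash:

```java
import java.util.Arrays;
import java.util.List;

public class LotterySketch {

    // stand-in "distance to reference" comparison: the first differing
    // byte position decides; negative means a is closer than b
    static int compare(byte[] reference, byte[] a, byte[] b) {
        for (int i = 0; i < reference.length; i++) {
            int distanceA = Math.abs((a[i] & 0xff) - (reference[i] & 0xff));
            int distanceB = Math.abs((b[i] & 0xff) - (reference[i] & 0xff));
            if (distanceA != distanceB) {
                return Integer.compare(distanceA, distanceB);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        byte[] reference = {10, 20, 30};
        List<byte[]> candidates = Arrays.asList(
                new byte[]{9, 20, 30},
                new byte[]{10, 21, 30},
                new byte[]{50, 0, 0});

        byte[] winner = null;
        for (byte[] candidate : candidates) {
            // one comparison per candidate; no arraysAreEqual() re-check
            if (winner == null || compare(reference, candidate, winner) < 0) {
                winner = candidate;
            }
        }
        System.out.println(Arrays.toString(winner));
    }
}
```

The real implementation would keep winningDistance cached instead of recomputing it, but the loop shape is the same.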

Edit: The following is not true after a closer look. I left the comment anyway

Also, in dense regions, 15 tickets are computed 15 times each, while only the last IP can get them.
At the expense of more complex logic, several hashes and comparisons could be avoided (unsure of the global performance gain vs. code complexity).

Sorry if you already benchmarked these and there is no significant gain.
