
nomidot's Introduction

Nomidot (tentative name)

Nominating on Polkadot/Kusama can be complicated. Nomidot is a dashboard for those with DOTs or KSM looking to "set it and forget it."

We do not hold any keys, however, so it remains the user's responsibility to keep their keys in safe custody.

Potential Difficulties with Nominating

  1. Creating stash and controller keys, and understanding their respective roles.
  2. Finding the right validators to nominate to maximize rewards and minimize slashes.
  3. Mitigating the risk of getting caught in long unbonding periods.

1. Creating stash and controller keys, and understanding their respective roles.

For the casual nominator, this distinction can be intimidating enough to discourage participation in staking altogether. Because the distinction lies only in the intended use of the keys, not in the cryptography underlying them, incorrect use can expose the user to unnecessary risk.

This is an early point of friction for the uninitiated. To mitigate the confusion, we provide a guided tutorial that takes a user from having no keys at all to bonding the two together.

In the near future we will add guided tutorials on the more powerful key management offered by BIP39 key derivation, à la Parity Signer.

2. Finding the right validators to nominate to maximize rewards and minimize slashes.

As with all blockchains, it is difficult to query and analyze historical data. Nomidot runs an ETL script against Kusama in the background, which gives us a PostgreSQL database from which to query interesting information for our users. All of this information can be validated directly against a full node.

The first version will include:

  • slashing history of a particular validator or set of validators
  • rewards history of a particular validator or set of validators
  • staker percentages over time (trends in staker allocations)

3. Mitigating the risk of getting caught in long unbonding periods.

A potential point of churn, if expectations are not communicated clearly and early, is the period of illiquidity stakers face during unbonding (as well as during bonded periods, of course). While teams like Chorus One are working on liquid staking methods (https://blog.chorus.one/announcing-the-liquid-staking-working-group/), at the moment there are a series of possible decisions that can leave users entangled in long unbonding periods, during which they cannot access their funds.

Nomidot will take extra precautionary steps at the UI level to guide users away from potential pitfalls.

Contributing

Please see the Contributing Guidelines

Our design systems are based on the following components managed through Storybook: https://quirky-sammet-29ba03.netlify.com/?path=/docs/design-system-intro--page

Get Started

Run the following commands:

git clone https://github.com/paritytech/Nomidot
cd Nomidot
yarn install
yarn start

The app will be running on http://localhost:8000.

nomidot's People

Contributors

amaury1093, axelchalon, fevo1971, joepetrowski, niklabh, pmespresso, tbaut


nomidot's Issues

nodewatcher: JS out of memory

Sometime on Dec 23rd:

2019-12-23 07:02:25    NODE-WATCHER: blockIndex: 112582
2019-12-23 07:02:25    NODE-WATCHER: lastKnownBestFinalized: 332240
<--- Last few GCs --->

[26:0x560e79aab0a0] 80703180 ms: Mark-sweep 1849.1 (1897.1) -> 1849.0 (1868.1) MB, 1699.2 / 0.0 ms  (average mu = 0.555, current mu = 0.000) last resort GC in old space requested
[26:0x560e79aab0a0] 80704916 ms: Mark-sweep 1849.0 (1868.1) -> 1848.6 (1865.6) MB, 1735.4 / 0.0 ms  (average mu = 0.384, current mu = 0.000) last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x560e7502bb79]
Security context: 0x0a98eabc0921 <JSObject>
    1: new constructor(aka SafeSubscriber) [0x3c0ca2126501] [/node_watcher/node_modules/rxjs/internal/Subscriber.js:~113] [pc=0x35355f386b51](this=0x1fed97005869 <SafeSubscriber map = 0x230eb6dc8b91>,0x1fed97005809 <ConnectableSubscriber map = 0x230eb6dc7dd1>,0x1fed97004c59 <Object map = 0x2991fe2d5d41>,0x1aecec5c04b9 <undefined>,0x1aecec5c04b9 <undefined>)
...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
error Command failed with signal "SIGABRT".

bug: subscribe to heads

Currently the GraphQL subscriptions that listen for block headers react to any new CREATED mutation, but we have many jobs writing from different points of the chain. These subscriptions need to be refined so they only react to CREATED mutations whose block number is later than what is currently in state (see the sketch below).
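A minimal sketch of the guard in TypeScript; the payload shape and the lastProcessedBlockNumber tracking are illustrative assumptions, not the current subscription code:

// Ignore CREATED events for blocks we have already processed. The payload shape
// and lastProcessedBlockNumber are assumptions for illustration.
let lastProcessedBlockNumber = 0;

function onHeaderCreated(payload: { blockNumber: number }): void {
  if (payload.blockNumber <= lastProcessedBlockNumber) {
    // A job writing from an earlier point of the chain produced this header; skip it.
    return;
  }
  lastProcessedBlockNumber = payload.blockNumber;
  // ...react to the genuinely new head here
}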

Speed up Nodewatcher Sync

It seems to average around 25 writes per second, which sounds good, but that is approximately how many writes per block we have at this point (and there are only 86,400 seconds in a day)...

[Screenshots from 2020-01-27 omitted: write throughput and pod memory usage]

The good news is that memory usage seems to be growing slower as time goes on, currently hovering around 1.75GB.

While I'm tempted to upgrade our instance to speed it up, the CPU utilization is only at like 5-10%, so I guess we should figure out how to squeeze more out of the current instance as it is.

bug: nodewatcher pod restart with stateful block index

When a pod is restarted, the block index progress is not persisted, so it starts over from the beginning.

This should not happen.

Either the nodewatcher needs to be stateless, or we need to persist the block index somewhere else (a sketch follows).
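A minimal sketch of persisting the index, assuming a hypothetical key-value helper backed by the existing Postgres instance (both the helper and the key name are illustrative):

// Hypothetical key-value helper backed by the existing Postgres instance.
interface ProgressStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

const PROGRESS_KEY = 'nodewatcher:lastBlockIndex'; // illustrative key name

async function loadStartBlock(store: ProgressStore): Promise<number> {
  const saved = await store.get(PROGRESS_KEY);
  return saved === null ? 0 : parseInt(saved, 10) + 1; // resume after the last persisted block
}

async function saveProgress(store: ProgressStore, blockIndex: number): Promise<void> {
  await store.set(PROGRESS_KEY, blockIndex.toString());
}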

nomidot-app-server: nested query resolvers returning null?

The logs:

Error: Cannot return null for non-nullable field TotalIssuance.blockNumber.
    at completeValue (/server/node_modules/graphql/execution/execute.js:560:13)
    at completeValueCatchingError (/server/node_modules/graphql/execution/execute.js:495:19)
    at resolveField (/server/node_modules/graphql/execution/execute.js:435:10)
    at executeFields (/server/node_modules/graphql/execution/execute.js:275:18)
    at collectAndExecuteSubfields (/server/node_modules/graphql/execution/execute.js:713:10)
    at completeObjectValue (/server/node_modules/graphql/execution/execute.js:703:10)
    at completeValue (/server/node_modules/graphql/execution/execute.js:591:12)
    at completeValueCatchingError (/server/node_modules/graphql/execution/execute.js:495:19)
    at /server/node_modules/graphql/execution/execute.js:618:25
    at Array.forEach (<anonymous>)
Error: GraphQL Error (status 503): "The server was not able to produce a timely response to your request.
Please try again in a short while!"
    at BatchedGraphQLClient.<anonymous> (/server/node_modules/http-link-dataloader/src/BatchedGraphQLClient.ts:59:13)
    at step (/server/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:40:23)
    at Object.next (/server/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:21:53)
    at fulfilled (/server/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:12:58)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
FetchError: request to http://10.0.9.43:4466/ failed, reason: read ECONNRESET
    at ClientRequest.<anonymous> (/server/node_modules/cross-fetch/node_modules/node-fetch/lib/index.js:1393:11)
    at ClientRequest.emit (events.js:210:5)
    at ClientRequest.EventEmitter.emit (domain.js:478:20)
    at Socket.socketErrorListener (_http_client.js:407:9)
    at Socket.emit (events.js:210:5)
    at Socket.EventEmitter.emit (domain.js:478:20)
    at emitErrorNT (internal/streams/destroy.js:84:8)
    at processTicksAndRejections (internal/process/task_queues.js:80:21)

UnhandledPromiseRejectionWarning: Error: Option: unwrapping a None value

The last 50k job halted at:

2020-02-13 22:10:59    NODE-WATCHER: Task --- createEra
2020-02-13 22:10:59       TASK: ERA: NomidotEra: {"idx":411,"points":{"total":53440,"individual":[340,260,500,240,380,300,520,460,300,300,220,380,260,460,420,400,520,240,400,320,360,520,380,500,340,80,400,500,220,140,340,460,320,460,380,300,380,400,300,460,400,460,240,500,420,440,200,380,320,340,340,360,400,80,320,260,160,520,380,120,280,420,280,380,340,360,360,460,320,500,360,360,300,360,340,400,460,0,400,420,260,440,120,380,420,240,540,400,360,420,560,340,0,360,280,400,300,340,440,140,420,240,320,320,140,300,140,540,420,320,160,260,260,280,340,460,360,260,400,380,200,300,440,240,400,300,260,260,20,300,540,440,60,340,360,440,320,260,440,480,300,520,240,80,300,340,300,240,0,300,320,340,400,320,360,380,200,380,180,280]},"startSessionIndex":1803}
2020-02-13 22:10:59    NODE-WATCHER: Writing: {"idx":411,"points":{"total":53440,"individual":[340,260,500,240,380,300,520,460,300,300,220,380,260,460,420,400,520,240,400,320,360,520,380,500,340,80,400,500,220,140,340,460,320,460,380,300,380,400,300,460,400,460,240,500,420,440,200,380,320,340,340,360,400,80,320,260,160,520,380,120,280,420,280,380,340,360,360,460,320,500,360,360,300,360,340,400,460,0,400,420,260,440,120,380,420,240,540,400,360,420,560,340,0,360,280,400,300,340,440,140,420,240,320,320,140,300,140,540,420,320,160,260,260,280,340,460,360,260,400,380,200,300,440,240,400,300,260,260,20,300,540,440,60,340,360,440,320,260,440,480,300,520,240,80,300,340,300,240,0,300,320,340,400,320,360,380,200,380,180,280]},"startSessionIndex":1803}
2020-02-13 22:10:59    NODE-WATCHER: Task --- createSlashing
2020-02-13 22:10:59  TASK: SLASHING: Nomidot Slashing: []
2020-02-13 22:10:59    NODE-WATCHER: Writing: []
2020-02-13 22:10:59    NODE-WATCHER: Task --- createTotalIssuance
2020-02-13 22:10:59 TASK: TOTALISSUANCE: Total Issuance: {"amount":"0x000000000000000074e4cd14df30fe3d"}
2020-02-13 22:10:59    NODE-WATCHER: Writing: {"amount":"0x000000000000000074e4cd14df30fe3d"}
2020-02-13 22:10:59    NODE-WATCHER: Task --- createNominationAndValidators
(node:26) UnhandledPromiseRejectionWarning: Error: Option: unwrapping a None value
    at Option.unwrap (/node_watcher/node_modules/@polkadot/types/codec/Option.js:170:13)
    at /node_watcher/src/tasks/createNominationAndValidators.ts:111:26
    at step (/node_watcher/src/tasks/createNominationAndValidators.ts:36:23)
    at Object.next (/node_watcher/src/tasks/createNominationAndValidators.ts:17:53)
    at fulfilled (/node_watcher/src/tasks/createNominationAndValidators.ts:8:58)
    at processTicksAndRejections (internal/process/task_queues.js:88:5)
(node:26) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:26) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Separate node-watcher into 2 packages

Depends on: #20

The idea would be to have 2 packages:

  • @substrate/node-watcher: This one will be published to npm; it will only hold what's currently in back/node-watcher/src/nodeWatcher.ts
  • @substrate/nomidot-watcher: This one will hold the Prisma server and all the tasks that are relevant to Nomidot. It will use @substrate/node-watcher internally.

The idea is to make @substrate/node-watcher reusable by other people: they would just install @substrate/node-watcher and create their own tasks (a rough sketch of that split follows).

I just talked to Thibaut; there's a high probability that the governance platform will just read from the same DB we generate here with Nomidot (assuming we add 1-2 more tasks about governance). If that's the case, this issue is low priority.
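A rough sketch of what the split might look like from a consumer's point of view; the nodeWatcher entry point and the Task shape below are assumptions, not the current exports:

import { ApiPromise } from '@polkadot/api';

// The generic task shape a downstream project would implement.
interface Task<T> {
  name: string;
  read(blockHash: string, api: ApiPromise): Promise<T>;
  write(blockNumber: number, value: T): Promise<void>;
}

// @substrate/node-watcher would export something like this; @substrate/nomidot-watcher
// would bundle the Prisma server plus Nomidot's own tasks and call it internally.
declare function nodeWatcher(tasks: Task<unknown>[], api: ApiPromise): Promise<void>;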

Fix nonfatal but bad practice nits creeping in

  • in console DRR: TypeError: Cannot read property 'creator' of undefined
  • in console validateDOMNesting(...): <p> cannot appear as a descendant of <p> in BalanceDisplay
  • It looks like there are several instances of 'styled-components' initialized in this application. This may cause dynamic styles not rendering properly, errors happening during rehydration process and makes your application bigger without a good reason.

nodewatcher: Motion $where input type

blockIndex: 190,800
job-name: nomidotwatcher-160366
pod-name: nomidotwatcher-160366-8h4n2

2020-02-13 22:06:45    NODE-WATCHER: Task --- createMotionStatusUpdate
(node:26) UnhandledPromiseRejectionWarning: Error: Variable '$where' cannot be non input type 'MotionWhereInput'. (line 1, column 16):
query ($where: MotionWhereInput) {
               ^
    at BatchedGraphQLClient.<anonymous> (/node_watcher/node_modules/http-link-dataloader/src/BatchedGraphQLClient.ts:74:13)
    at step (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:40:23)
    at Object.next (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:21:53)
    at fulfilled (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:12:58)
    at processTicksAndRejections (internal/process/task_queues.js:88:5)
(node:26) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:26) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
YJs-MacBook-Pro:~ yj$ kubectl logs nomidotwatcher-160366-8h4n2 --tail=10
(node:26) UnhandledPromiseRejectionWarning: Error: Variable '$where' cannot be non input type 'MotionWhereInput'. (line 1, column 16):
query ($where: MotionWhereInput) {
               ^
    at BatchedGraphQLClient.<anonymous> (/node_watcher/node_modules/http-link-dataloader/src/BatchedGraphQLClient.ts:74:13)
    at step (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:40:23)
    at Object.next (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:21:53)
    at fulfilled (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:12:58)
    at processTicksAndRejections (internal/process/task_queues.js:88:5)

Onboarding with extension

  • List all accounts from the extension
  • Ask the user to select one as stash and one as controller
  • Check whether the account selected as stash has already been tagged as a stash; if there is a conflict, notify the user, otherwise save it to our backend profiles table
  • Once stash and controller have been established, redirect to the @Validators page.
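A sketch of the first two steps using @polkadot/extension-dapp; the stash/controller tagging and the profiles-table write are not shown:

import { web3Accounts, web3Enable } from '@polkadot/extension-dapp';

async function listExtensionAccounts(): Promise<{ address: string; name?: string }[]> {
  const extensions = await web3Enable('Nomidot');
  if (extensions.length === 0) {
    throw new Error('No extension found, or access was denied.');
  }
  // All injected accounts; the user then picks one as stash and one as controller.
  const accounts = await web3Accounts();
  return accounts.map(({ address, meta }) => ({ address, name: meta.name }));
}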

DB Schema spitballing

Some super useful features that actually justify having a DB:

  1. Allow users to "Favourite" validators.
  2. Allow validators to have something like an "About" page, describing and selling their case for why nominators should nominate them.
  3. Add some personalisation for a particular public key, so we can save whether a key is a stash or a controller; users can opt to "Level Up" their profile by connecting their social accounts and emails, which will let us build up a mailing list.
  4. A forum-type space for discussing an upcoming session, grievances from the previous session, etc.

nodewatcher WS: more robust retry system (e.g. with exponential delay)

When the remote node drops the connection, the api retries until it reconnects, at least in the browser; but from a pod, "something" happens and the connection is never re-established. The current workaround (read: hack) is to drop the socket entirely and create a new one. A better solution would be to persist the current block number somewhere and then retry the connection until it works (see the sketch below).
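A sketch of the exponential-delay retry, assuming a connect() helper that resolves once the socket is usable again:

async function reconnectWithBackoff(connect: () => Promise<void>, maxDelayMs = 60_000): Promise<void> {
  let delayMs = 1_000;
  for (;;) {
    try {
      await connect();
      return;
    } catch (error) {
      console.error(`Reconnect failed, retrying in ${delayMs}ms`, error);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs = Math.min(delayMs * 2, maxDelayMs); // exponential delay, capped
    }
  }
}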

Create a node watcher: Archive node -> PgSQL

The idea is to have a long-living script that will do the following (a rough sketch follows the list):

  1. point to an archive node
  2. take as input a config file which describes which part of the state we want to store in the DB
    • actual API of this config file to be discussed, but the basic idea for now is e.g.: api.rpc.chain.bestBlock: "pgsql.table_blocks" or api.derive.staking.overview: "pgsql.table_staking"
  3. Set N=1, and with the PolkadotJS api query everything the config file dictates from the archive node at block N, and save it to the DB.
  4. Do N := N+1 and repeat step 3. Stop when the script reaches the tip of the chain (the finalized head for now, to avoid reorgs).
  5. On each new block of the archive node, repeat step 3.

cc @Tbaut @niklabh
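A minimal sketch of that loop with @polkadot/api; the runTasksAt callback stands in for "query everything the config file dictates and save it to the DB" and is an assumption:

import { ApiPromise, WsProvider } from '@polkadot/api';

async function watch(endpoint: string, runTasksAt: (blockHash: string) => Promise<void>): Promise<void> {
  const api = await ApiPromise.create({ provider: new WsProvider(endpoint) });

  // Steps 3-4: walk from block 1 up to the currently finalized head.
  const finalizedHash = await api.rpc.chain.getFinalizedHead();
  const finalizedNumber = (await api.rpc.chain.getHeader(finalizedHash)).number.toNumber();
  for (let n = 1; n <= finalizedNumber; n++) {
    const blockHash = await api.rpc.chain.getBlockHash(n);
    await runTasksAt(blockHash.toString());
  }

  // Step 5: repeat for every new finalized head.
  await api.rpc.chain.subscribeFinalizedHeads(async (header) => {
    const blockHash = await api.rpc.chain.getBlockHash(header.number.toNumber());
    await runTasksAt(blockHash.toString());
  });
}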

enhancement: minimize queries

Two places off the top of my head where we don't need to be querying as much as we are.

  1. system.events.at(blockHash) -> currently queried per task. We can easily hoist that into the main nodewatcher loop.
  2. Things that happen once per session -> currently queried every block anyway; we can easily check whether a new session happened (or whether that session already exists in the DB) before getting on with it.

Edit: in fact, for 2 we are even querying the session index in each task; this can be hoisted up in the same way as 1 (see the sketch below).
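A sketch of the hoisting, with an illustrative task/context shape:

import { ApiPromise } from '@polkadot/api';

interface TaskContext {
  blockHash: string;
  events: unknown;
  sessionIndex: unknown;
}

async function processBlock(
  api: ApiPromise,
  blockHash: string,
  tasks: Array<(ctx: TaskContext) => Promise<void>>
): Promise<void> {
  // Query once per block instead of once per task.
  const events = await api.query.system.events.at(blockHash);
  const sessionIndex = await api.query.session.currentIndex.at(blockHash);

  const ctx: TaskContext = { blockHash, events, sessionIndex };
  for (const task of tasks) {
    await task(ctx); // each task reads from ctx rather than hitting the node again
  }
}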

Percent - "Unknown types found" - node watcher on Kusama 0.7.17

The node watcher throws an error when run against a polkadot 0.7.17 node. I have used v0.7.10 so far without problems.

How to reproduce:

use the following endpoint in node-watcher:

const ARCHIVE_NODE_ENDPOINT = 'wss://dev-node.substrate.dev:9944';

it's running polkadot 0.7.17-c68121e0-x86_64-linux-gnu with the following flags:
--dev --ws-port 5566 --rpc-port 6677 --ws-external --rpc-external --ws-max-connections 100 --pruning=archive

When launching a fresh node-watcher on it, it throws:

Unknown types found, no types for OpenTip,Percent
2020-01-15 14:11:04 API/DECORATOR: Error: FATAL: Unable to initialize the API: createType(Percent):: Unable to find plain type for {"info":6,"type":"Percent"}
at EventEmitter.Init._onProviderConnect (/home/thib/github/paritytech/Nomidot/node_modules/@polkadot/api/base/Init.js:55:23)
at process._tickCallback (internal/process/next_tick.js:68:7)
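A possible workaround sketch (not verified against 0.7.17): register the missing types when creating the API. Percent is a u8-backed percentage in Substrate; the OpenTip entry is left as a placeholder because it needs the real runtime struct (or an @polkadot/api upgrade):

import { ApiPromise, WsProvider } from '@polkadot/api';

async function createApiWithTypes(endpoint: string): Promise<ApiPromise> {
  return ApiPromise.create({
    provider: new WsProvider(endpoint),
    types: {
      Percent: 'u8',
      // OpenTip: { ... } -- needs the actual runtime struct, or an @polkadot/api upgrade
    },
  });
}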

Discussion: Nomidot Naming

After the mini retreat it's pretty clear that the path forward is not for this UI to be a generalized, dynamic UI for arbitrary Substrate chains. The purpose of this UI is to be a simple wallet for Polkadot DOT holders to stake their tokens as nominators, transfer balances, and participate in governance. So it follows that the name cannot remain as it is.

So here's a braindump of potential new names:

  • Stakeadot
  • Volkadot
  • Volkenstake
  • Interstake
  • Volksnom
  • Nomin8
  • Nom
  • Minator
  • Minotaur
  • Yesminator

Can't resolve '@substrate/context/src'

In light-ui:

../accounts-app/node_modules/@substrate/ui-components/lib/stateful/Balance.js
Module not found: Can't resolve '@substrate/context/src' in '/Users/amaurymartiny/Workspaces/polkadot/light-ui/packages/accounts-app/node_modules/@substrate/ui-components/lib/stateful'

We knew it was going to bite us sooner or later.

Graphql Service

#41 should talk to this service to read, and make RPCs to the Kusama archive node to write.

This service should expose an API for reading from PgSQL.

[Node-watcher] Kubernetes deployment fail - Unable to compile TypeScript

kubectl logs nomidotwatcher-1548a46-964699-dvt85 -n nodewatcher-staging --tail=30 --follow
yarn run v1.15.2
$ ts-node ./src/index.ts

/node_watcher/node_modules/ts-node/src/index.ts:421
    return new TSError(diagnosticText, diagnosticCodes)
           ^
TSError: ⨯ Unable to compile TypeScript:
src/nodeWatcher.ts(24,18): error TS2339: Property 'toNumber' does not exist on type 'BlockNumber'.
src/nodeWatcher.ts(25,24): error TS2339: Property 'toNumber' does not exist on type 'BlockNumber'.
src/nodeWatcher.ts(72,24): error TS2339: Property 'gt' does not exist on type 'U32'.

    at createTSError (/node_watcher/node_modules/ts-node/src/index.ts:421:12)
    at reportTSError (/node_watcher/node_modules/ts-node/src/index.ts:425:19)
    at getOutput (/node_watcher/node_modules/ts-node/src/index.ts:530:36)
    at Object.compile (/node_watcher/node_modules/ts-node/src/index.ts:735:32)
    at Module.m._compile (/node_watcher/node_modules/ts-node/src/index.ts:814:43)
    at Module._extensions..js (internal/modules/cjs/loader.js:770:10)
    at Object.require.extensions.<computed> [as .ts] (/node_watcher/node_modules/ts-node/src/index.ts:817:12)
    at Module.load (internal/modules/cjs/loader.js:628:32)
    at Function.Module._load (internal/modules/cjs/loader.js:555:12)
    at Module.require (internal/modules/cjs/loader.js:666:19)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

nodewatcher: rolling update

So after doing more research and getting more familiar with the tools, I think a good way to maintain this nodewatcher with rolling updates is as follows:

Case 1. We want to track more things (no mutation of existing entries)

We have up to 2 nodewatcher deployments: 1 curr, 1 next (ephemeral).

curr is the one running at the moment, closest to, or at, the tip of the chain.
next only runs when there is an update that needs to happen, to fill in the blanks left by curr.

When we want to track more things, i.e. add a table or column, we currently
stop the deployment and restart it from block 0, just to fill in the diff. The problem is that we end up no longer tracking the tip.

Instead, we can have a next deployment which does the above, leaving curr to do what it does, with next filling in the blanks behind it.

next can be promoted to curr when they're both at the tip, and then next can be shut down until there is another update that needs to happen.

The next/curr stuff in practice would look like:

  1. kubectl create -f nomidot-watcher-curr.yaml (e.g. v0.1)
  2. docker tag new-deployment-image && docker push new-deployment-image
  3. kubectl create -f nomidot_watcher-next.yaml (e.g. v0.2)
  4. (after curr/new are synced): kubectl rolling-update nomidotwatcher-curr --image=next-image

Case 2. We want to change how we track existing entries

In this case, currently we'd need to just purge the db and start over with the new image.

The better way would be similar to the above. We have a curr db, and a next db.

We just sync the next db and leave any services pointing to the curr db until it's synced, then switch over the DB_NAME in nodewatcher-deployment.yaml so next becomes curr, and curr can either be purged or kept.

Build Polkassembly front-server

Not sure how to do this with Prisma, but I guess we can have something like an admin password and use it in node-watcher.

Right now anyone can change anything in the DB just by knowing the IP (edit: by accessing it from the same cluster).

Kube DNS

Handle task => watcher deployment communication.

bug: sorting hex

We are currently storing BigNumbers as hexadecimal because Prisma can only handle integers up to 53 bits, but the problem is that we cannot sort hexadecimals (I don't think?).

Maybe there is a way, but we certainly won't get a reliable 'order_by' on these.

Not sure yet how to fix...
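One possible angle, offered as an observation rather than something decided in this issue: if every value is stored zero-padded to the same fixed width, plain string comparison matches numeric order, so order_by on the column would behave. A tiny sketch:

// If every value is stored as a fixed-width, zero-padded hex string, plain string
// comparison (and therefore order_by on the column) matches numeric order.
function toSortableHex(value: bigint, widthBytes = 16): string {
  return '0x' + value.toString(16).padStart(widthBytes * 2, '0');
}

console.log(toSortableHex(10n) < toSortableHex(100n)); // true, same as 10 < 100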

bug: nodewatcher create proposal status

A small bug crashed the node watcher just now:

NODE-WATCHER: Task --- createProposalStatusUpdate
2020-01-31 01:08:20            MAIN: Error: Variable '$where' cannot be non input type 'ReferendumWhereUniqueInput!'. (line 1, column 16):
query ($where: ReferendumWhereUniqueInput!) {
               ^
    at BatchedGraphQLClient.<anonymous> (/node_watcher/node_modules/http-link-dataloader/src/BatchedGraphQLClient.ts:74:13)
    at step (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:40:23)
    at Object.next (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:21:53)
    at fulfilled (/node_watcher/node_modules/http-link-dataloader/dist/src/BatchedGraphQLClient.js:12:58)
    at processTicksAndRejections (internal/process/task_queues.js:88:5) {
  result: { data: null, errors: [ [Object], [Object], [Object] ], status: 200 }
}
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

nodewatcher crash (block 109595): createType(Vec<EventRecord>)

Job nomidotwatcher-68407 crashed at block 109595.

2020-02-04 07:00:39    NODE-WATCHER: Task --- createSession
Unable to decode storage system.events: createType(Vec<EventRecord>):: Vec length 904613550 exceeds 32768
2020-02-04 07:00:39        RPC-CORE: getStorage(key: StorageKey, at?: BlockHash): StorageData:: createType(Vec<EventRecord>):: Vec length 904613550 exceeds 32768
(node:26) UnhandledPromiseRejectionWarning: Error: getStorage(key: StorageKey, at?: BlockHash): StorageData:: createType(Vec<EventRecord>):: Vec length 904613550 exceeds 32768
    at CatchSubscriber.selector (/node_watcher/node_modules/@polkadot/rpc-core/index.js:195:38)
    at CatchSubscriber.error (/node_watcher/node_modules/rxjs/src/internal/operators/catchError.ts:132:23)
    at MapSubscriber._next (/node_watcher/node_modules/rxjs/src/internal/operators/map.ts:86:24)
    at MapSubscriber.Subscriber.next (/node_watcher/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at SwitchMapSubscriber.notifyNext (/node_watcher/node_modules/rxjs/src/internal/operators/switchMap.ts:172:24)
    at InnerSubscriber._next (/node_watcher/node_modules/rxjs/src/internal/InnerSubscriber.ts:17:17)
    at InnerSubscriber.Subscriber.next (/node_watcher/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
    at CombineLatestSubscriber.notifyNext (/node_watcher/node_modules/rxjs/src/internal/observable/combineLatest.ts:313:26)
    at InnerSubscriber._next (/node_watcher/node_modules/rxjs/src/internal/InnerSubscriber.ts:17:17)
    at InnerSubscriber.Subscriber.next (/node_watcher/node_modules/rxjs/src/internal/Subscriber.ts:99:12)
(node:26) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)

The last-50-k job is still running though.

nodewatcher needs a recovery mechanism when the connection is dropped

Currently we need to update the container to start syncing from the block where the connection was dropped, then restart the pods manually.

Ideally we'd have a way for this to happen automatically (see the sketch below the error).

The error:
API-WS: disconnected from wss://kusama-rpc.polkadot.io/ code: '1006' reason: 'Connection dropped by remote peer.'
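A sketch of wiring the provider's connection events to an automatic resume, assuming a persisted block index and a startSyncFrom entry point (both illustrative):

import { WsProvider } from '@polkadot/api';

function watchConnection(
  provider: WsProvider,
  getLastBlockIndex: () => number,
  startSyncFrom: (blockNumber: number) => Promise<void>
): void {
  provider.on('disconnected', () => {
    console.error('API-WS disconnected; will resume from the last persisted block on reconnect');
  });
  provider.on('connected', () => {
    // Resume where we left off instead of rebuilding the container manually.
    void startSyncFrom(getLastBlockIndex() + 1);
  });
}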

Make linting work again on all files

Linting fails on ./back/server, so for now:

  • .eslintignore has a ./back/server/**/*
  • .tsconfig has ["back/server/**/*"]

Make sure linting works everywhere and CI is green.
