p2panda / aquadoggo

Node for the p2panda network handling validation, storage, aggregation and replication

License: GNU Affero General Public License v3.0

Rust 99.97% Dockerfile 0.03%
graphql local-first p2p libp2p node

aquadoggo's Introduction

p2panda

All the things a panda needs


This library provides all tools required to write a client, node or even your own protocol implementation for the p2panda network. It is shipped both as a Rust crate p2panda-rs with WebAssembly bindings and as an NPM package p2panda-js with TypeScript definitions, running in NodeJS or any modern web browser.

The core p2panda specification is fully functional but still under review, so please be prepared for breaking API changes until we reach v1.0. Currently no p2panda implementation has received a security audit.

Features

  • Generate Ed25519 key pairs.
  • Create and encode Bamboo entries.
  • Publish schemas and validate data.
  • Create, update and delete data collaboratively.
  • Encrypt data with OpenMLS.
  • Materialise documents from data changes.
  • Prepare data for node servers.

Usage

// TypeScript (p2panda-js)
import { KeyPair } from "p2panda-js";
const keyPair = new KeyPair();
console.log(keyPair.publicKey());

// Rust (p2panda-rs)
use p2panda_rs::identity::KeyPair;
let key_pair = KeyPair::new();
println!("{}", key_pair.public_key());

See the demo application and its source code. More examples can be found in the p2panda-rs and p2panda-js directories.

Installation

If you are using p2panda in web browsers or NodeJS applications run:

$ npm i p2panda-js

For Rust environments run:

$ cargo add p2panda-rs

Documentation

Visit the corresponding p2panda-rs and p2panda-js folders for development instructions and documentation.

Benchmarks

Performance benchmarks can be found in benches. You can run them using cargo-criterion:

$ cargo install cargo-criterion
$ cargo criterion
# An HTML report with plots is generated automatically
$ open target/criterion/reports/index.html

These benchmarks can be used to compare performance across branches by running them first on a base branch and then on the comparison branch. The HTML reports will include a comparison of the two results.

License

GNU Affero General Public License v3.0 AGPL-3.0-or-later

Supported by



This project has received funding from the European Union’s Horizon 2020 research and innovation programme within the framework of the NGI-POINTER project funded under grant agreement No 871528 and NGI-ASSURE under grant agreement No 957073.

aquadoggo's People

Contributors

adzialocha, cafca, jmanm, mycognosist, pietgeursen, sandreae


aquadoggo's Issues

Implement materialisation for hard-coded "bookmark" schema

We have entries that are being written to the database as they are being published. This PR is to resolve the documents described by those entries/operations and write them to the database.

  • Define the bookmark schema that we materialise
  • Create tables for that schema
  • Update publish_entry to materialise a document when it receives a new operation
  • Write materialisation function (sketched below)
    • Load all operations for the document to be materialised
    • Resolve the document
    • Write that document to the db
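
A hypothetical sketch of this materialisation function; `Pool`, `load_operations`, `resolve_document` and `write_document` are placeholder names, not actual aquadoggo APIs:

async fn materialise_document(pool: &Pool, document_id: &Hash) -> Result<(), Error> {
    // Load all operations for the document to be materialised
    let operations = load_operations(pool, document_id).await?;

    // Resolve the document by applying its operations in order
    let document = resolve_document(&operations)?;

    // Write the resolved document into the table for its schema
    write_document(pool, &document).await?;

    Ok(())
}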

Schemas

Bookmark

  url: string
  title: string
  created: string

Tag

  bookmark: relation(bookmark)
  name: string
  description: string

Votes

  bookmark: relation(bookmark)

RPC method to get next entry arguments

Write a basic RPC method to get next entry arguments, panda_nextEntryArguments:

Included tasks:

  • Introduce RPC API Service to database connection pool
  • Write SQL migrations for Entries and Logs tables
    • Logs table contains: author, schema hash, log id, sequence number?
    • Entries table contains ... all the things Bamboo entries + payload?

Process for handling the next entry arguments command (sketched below):

  1. Get log id for author and schema
  2. Get last sequence number for the above
  3. Resolve backlinks
    3.1. Get last entry for that schema/log if it exists
    3.2. Generate skiplink entry and retrieve that from the db (via bamboo_core::lipmaa)
  4. Return nextEntryParams
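
A hypothetical sketch of these four steps; `Store` and its methods are placeholders for the actual storage layer, and `lipmaa` stands in for bamboo_core's skiplink calculation:

async fn next_entry_arguments(
    store: &Store,
    author: &Author,
    schema: &Hash,
) -> Result<EntryArgs, Error> {
    // 1. Get log id for author and schema
    let log_id = store.find_log_id(author, schema).await?;

    // 2. Get last sequence number for the above
    let backlink = store.latest_entry(author, &log_id).await?;
    let next_seq_num = backlink.as_ref().map_or(1, |entry| entry.seq_num + 1);

    // 3. Resolve the skiplink for the entry about to be created
    let skiplink = store.entry_at(author, &log_id, lipmaa(next_seq_num)).await?;

    // 4. Return nextEntryParams
    Ok(EntryArgs {
        log_id,
        seq_num: next_seq_num,
        backlink: backlink.map(|entry| entry.hash),
        skiplink: skiplink.map(|entry| entry.hash),
    })
}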

RPC method to publish bamboo entry

Implement panda_publishEntry RPC method.

Steps (sketched below):

  • Decode encodedEntry and encodedMessage hex strings to bytes
  • Verify schema of CBOR payload via CDDL
  • Check if schema hash and log id are correct
  • Decode Bamboo entry
  • Get backlink and skiplink from database
  • Verify bamboo entry integrity with backlink and skiplinks
  • Insert entry in entries table
  • Upsert used log id in logs table
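
A hypothetical sketch of these steps; aside from `hex::decode`, the helpers and the `Store` type are placeholders for p2panda-rs / bamboo APIs, not real signatures:

async fn publish_entry(store: &Store, params: PublishEntryParams) -> Result<Response, Error> {
    // Decode encodedEntry and encodedMessage hex strings to bytes
    let entry_bytes = hex::decode(&params.encoded_entry)?;
    let payload_bytes = hex::decode(&params.encoded_message)?;

    // Verify the CBOR payload against the schema's CDDL definition
    validate_cddl(&payload_bytes, &params.schema)?;

    // Decode the Bamboo entry, check schema hash and log id
    let entry = decode_entry(&entry_bytes)?;
    store.check_log_id(&entry.author, &params.schema, &entry.log_id).await?;

    // Get backlink and skiplink from the database (the first entry has no backlink)
    let backlink = match entry.seq_num {
        1 => None,
        n => store.entry(&entry.author, &entry.log_id, n - 1).await?,
    };
    let skiplink = store.entry(&entry.author, &entry.log_id, lipmaa(entry.seq_num)).await?;

    // Verify Bamboo entry integrity with backlink and skiplink
    verify_entry(&entry, backlink.as_ref(), skiplink.as_ref())?;

    // Insert entry in the entries table, upsert the used log id in the logs table
    store.insert_entry(&entry, &payload_bytes).await?;
    store.upsert_log(&entry.author, &params.schema, &entry.log_id).await?;

    Ok(Response::ok())
}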

Related:

Better panic handling in all async task and services

System-critical panics inside any task workers or async services should be handled in a controlled manner, meaning they are unwound outside of the async task and propagated up to the service manager / main thread, which can then react by either crashing outright or attempting a "controlled" shutdown (see the sketch below).

Todo:

  • Crash when something goes wrong in ServiceManager #92
  • Crash when something goes wrong in worker Factory
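
A minimal sketch of one way to do this, assuming a tokio runtime: the JoinHandle surfaces the panic to the supervising task instead of letting it disappear inside the worker:

use tokio::task;

#[tokio::main]
async fn main() {
    let handle = task::spawn(async {
        panic!("something went wrong inside a service");
    });

    match handle.await {
        Ok(_) => println!("service finished normally"),
        Err(err) if err.is_panic() => {
            // React in a controlled manner: log, shut down the other
            // services, then crash or exit gracefully
            eprintln!("service panicked, starting controlled shutdown");
            std::process::exit(1);
        }
        Err(_) => eprintln!("service was cancelled"),
    }
}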

Construct `Schema` from schema def & schema field def documents

  • check for dependencies if this is a system schema with relations (schema_definition_v1)
    • if dependencies are met, move on
    • if dependencies aren't met... ??? send to our task queue (which doesn't exist yet)
  • construct Schema from its definition and fields, store in the DB

Simple configuration service

  • a simple way to configure an aquadoggo
  • precursor to configuration via a configuration schema
  • should it be file-based configuration? (see the sketch below)
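
A minimal sketch of the file-based option, assuming serde and the toml crate; the field names are illustrative, not actual aquadoggo options:

use serde::Deserialize;
use std::path::Path;

#[derive(Deserialize)]
struct Configuration {
    // Illustrative fields only
    database_url: String,
    http_port: u16,
}

fn load_config(path: &Path) -> Result<Configuration, Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string(path)?;
    // Parse the TOML file into the typed configuration struct
    Ok(toml::from_str(&raw)?)
}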

Handle storing `SchemaId` in db

We need to handle storing SchemaIds in the db; they come in two forms (sketched below):

  • an array of hashes (application schemas)
  • a string id (system schemas)
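
A minimal sketch of how the two forms could be modelled and flattened into a single db column; the names are illustrative, not aquadoggo's actual types:

enum SchemaId {
    // Application schemas: an array of operation hashes
    Application(Vec<String>),
    // System schemas: a fixed string id
    System(String),
}

impl SchemaId {
    // One possible flat string encoding for storage
    fn to_db_string(&self) -> String {
        match self {
            SchemaId::Application(hashes) => hashes.join(","),
            SchemaId::System(id) => id.clone(),
        }
    }
}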

Schema provider

A service living in memory (for performance) that returns schema definitions for validation etc.

It should load the "registered" application schemas from the database during node startup.
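
A minimal sketch of the shape such a provider could take, a shared read-mostly map keyed by schema id; `Schema` is a placeholder here:

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

#[derive(Clone)]
struct Schema; // placeholder

#[derive(Clone)]
struct SchemaProvider {
    schemas: Arc<RwLock<HashMap<String, Schema>>>,
}

impl SchemaProvider {
    // Called during node startup with the "registered" application
    // schemas loaded from the database
    fn new(registered: HashMap<String, Schema>) -> Self {
        Self {
            schemas: Arc::new(RwLock::new(registered)),
        }
    }

    // Fast in-memory lookup, e.g. for validating incoming operations
    fn get(&self, schema_id: &str) -> Option<Schema> {
        self.schemas.read().unwrap().get(schema_id).cloned()
    }
}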

Consider dedicated channels instead of broadcast

Currently all services talk to all other services in ServiceManager and all worker pools get informed about all incoming tasks in Factory. This should not be a problem for now but later we might want to use more dedicated channels (similar to an Actor pattern) to avoid pollution.

Look at `previous_operations` data to determine document hash

Currently we look at the last used entry in a log via the backlink to determine what document hash has been used. In the future we want to use previous_operations, since in a multi-writer setting there might be no backlink for UPDATE operations.

[document x]
log 1 of author A: [x] <- [y] <- [z]
log 1 of author B:           \-- [p]             (Entry `p` does not have a backlink yet ..)

Depends on: p2panda/p2panda#163

Mistake in logic for calculating skiplink in get_entry_args

After some brain bending in the chat, @cafca and I think there may be an error in the logic calculating the skiplink for the return value in get_entry_args here:

&entry_backlink.seq_num.skiplink_seq_num().unwrap(),

We suggest it should be something more like this:

&entry_backlink.next_seq_num.skiplink_seq_num().unwrap()

So that the returned value is the skiplink for the entry we are about to create (instead of for the backlink/latest_entry).

Message bus for cross-service communication

All services should be able to broadcast messages to each other on a common messaging bus. For now this would help to implement #86 but later it can also be used to send data across services, for example incoming entries for materialization etc.
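
One possible shape for such a bus, sketched with tokio's broadcast channel (the runtime choice is an assumption, see the tokio issue below); `ServiceMessage` is a placeholder:

use tokio::sync::broadcast;

#[derive(Clone, Debug)]
enum ServiceMessage {
    EntryReceived(String),
    Shutdown,
}

#[tokio::main]
async fn main() {
    // Every service holds a sender clone and subscribes its own receiver
    let (tx, _) = broadcast::channel::<ServiceMessage>(64);
    let mut rx = tx.subscribe();

    let service = tokio::spawn(async move {
        while let Ok(message) = rx.recv().await {
            println!("service received: {:?}", message);
        }
    });

    tx.send(ServiceMessage::EntryReceived("entry-hash".into())).unwrap();
    tx.send(ServiceMessage::Shutdown).unwrap();

    // Dropping all senders closes the channel and ends the service loop
    drop(tx);
    service.await.unwrap();
}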

Wake exit signal future more efficiently

Currently the waker of the Signal future in ServiceManager gets woken on every poll, which is unnecessary and inefficient. An optimisation would be to only wake when the boolean flag has flipped.
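
A sketch of the suggested fix: store the waker and only wake it from the place that flips the flag, with a re-check to avoid a lost wakeup:

use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

struct Shared {
    triggered: AtomicBool,
    waker: Mutex<Option<Waker>>,
}

struct Signal(Arc<Shared>);

impl Future for Signal {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0.triggered.load(Ordering::Acquire) {
            return Poll::Ready(());
        }
        // Register interest without waking
        *self.0.waker.lock().unwrap() = Some(cx.waker().clone());
        // Re-check in case `trigger` ran between the load and the store
        if self.0.triggered.load(Ordering::Acquire) {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

// Called once when the exit signal fires; this is the only wake-up
fn trigger(shared: &Shared) {
    shared.triggered.store(true, Ordering::Release);
    if let Some(waker) = shared.waker.lock().unwrap().take() {
        waker.wake();
    }
}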

New linter flag `rustdoc::missing_doc_code_examples` breaks Docker build

#12 409.0 error[E0710]: an unknown tool name found in scoped lint: `rustdoc::missing_doc_code_examples`
#12 409.0   --> /home/rust/.cargo/registry/src/github.com-1ecc6299db9ec823/***-rs-0.2.0/src/lib.rs:49:5
#12 409.0    |
#12 409.0 49 |     rustdoc::missing_doc_code_examples,
#12 409.0    |     ^^^^^^^
#12 409.0 
#12 409.5 error: aborting due to previous error

https://github.com/p2panda/aquadoggo/runs/4002091673?check_suite_focus=true

One could rebuild ekidd/rust-musl-builder with a later Rust version to support the new linter flag, but this seems too involved. A simpler solution is to revert the change in p2panda-rs (https://github.com/p2panda/p2panda/blob/main/p2panda-rs/src/lib.rs#L49) and use the deprecated flag again until ekidd/rust-musl-builder supports newer Rust versions.

Related: emk/rust-musl-builder#125

Dynamically generate GraphQL API from application schemas

Development

Stage 1

  • Using the StorageProvider
  • No cute filters and queries

Stage 2

  • Use SQL queries in order to provide ordering and filtering by fields

Stage 3

  • Bring the advanced filtering and SQL queries back into the storage provider

Outline for GraphQL API:

events_0020...38830
    parameter
      id
      view_id
      
    returns an object representing a document view
        fields
            ...depending on the schema
        meta
            id: String!
            view_id: String!
            authors: String[]
            deleted: Boolean
            edited: Boolean
            operations
        reverse_relations [tbd]
            [reverse_field_name]
        
all_events_0020...38830
    parameter
        orderBy
        orderDirection
        first
        skip  
        
    returns an array of objects

Server hangs on correct requests when deployed on a machine with only 2 CPUs

Description of problem

The JSON RPC server aquadoggo (hosted on https://welle.liebechaos.org) hangs and no longer responds after receiving a well-formed and correct request. Requests time out with a 504 status code (Gateway Timeout). The server returns correct error responses to malformed requests (for example invalid payload, missing JSON RPC method etc.), but once it "hangs" it no longer responds to malformed requests either.

Important: This issue doesn't appear in local environments (macOS & Linux) but only when deployed on remote servers (vServers deployed via Docker or installed directly).

After multiple debugging sessions we identified the problem as an unbounded async channel whose receiver is never notified even though a message was sent through it. Upgrading the server to 4 CPUs resolves the issue.

This is the part in the code where the channel receiver never gets notified (causing the server to hang):
https://github.com/p2panda/aquadoggo/blob/debug-doggo/aquadoggo/src/rpc/api.rs#L58-L69

This is where the request begins (start reading from here to follow request):
https://github.com/p2panda/aquadoggo/blob/debug-doggo/aquadoggo/src/rpc/api.rs#L121

For debugging we created a branch debug-doggo with additional logging (compare logs below) and simplified logic (removing database handling etc.).

Steps to Reproduce

  1. Build the binary for aquadoggo_cli locally and test that it is working by making a curl request to localhost
  2. Copy the binary to a suitable remote host, run the process and test once again with curl request
  3. The curl request can be made locally or remotely

OR

  1. Install rust and cargo and any other required dependencies on a remote host
  2. Build aquadoggo with cargo build
  3. Run it with cargo run
  4. Test with curl requests

OR

  1. Use docker image: https://hub.docker.com/r/p2panda/aquadoggo
  2. Start aquadoggo on local and remote machine via Docker
  3. Test with curl requests

Example for an "invalid" JSON RPC request via curl:

Request:

curl -H "Content-Type: application/json" -X POST -d '{"jsonrpc":"2.0","id":"1","method":"panda_getEntryArguments","params":{"author":"bla"}}' https://welle.liebechaos.org

Response:

{
  "jsonrpc": "2.0",
  "error": {
    "code": -32602,
    "message": "Invalid params: missing field `schema`."
  },
  "id": "0"
}

Example for an "valid" JSON RPC request via curl which makes the server hang:

Request:

curl -H "Content-Type: application/json" -X POST -d '{"jsonrpc":"2.0","id":"2","method":"panda_getEntryArguments","params":{"author":"fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8","schema":"0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702"}}' https://welle.liebechaos.org

Response:

No response, request times out (504) after a few seconds

Environment Information

============FAILING SYSTEM===============

Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.2 LTS
Release:	20.04
Codename:	focal

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   40 bits physical, 48 bits virtual
CPU(s):                          2
On-line CPU(s) list:             0,1
Thread(s) per core:              1
Core(s) per socket:              2
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           63
Model name:                      DO-Regular
Stepping:                        2
CPU MHz:                         2494.138
BogoMIPS:                        4988.27
Virtualization:                  VT-x
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       64 KiB
L1i cache:                       64 KiB
L2 cache:                        8 MiB

===========WORKING SYSTEM===============

Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.2 LTS
Release:	20.04
Codename:	focal

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   36 bits physical, 48 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              2
Core(s) per socket:              2
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           42
Model name:                      Intel(R) Core(TM) i5-2540M CPU @ 2.60GHz
Stepping:                        7
CPU MHz:                         1844.795
CPU max MHz:                     3300.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5183.44
Virtualisation:                  VT-x
L1d cache:                       64 KiB
L1i cache:                       64 KiB
L2 cache:                        512 KiB
L3 cache:                        3 MiB
NUMA node0 CPU(s):               0-3

Debug information

Logs with trace flag running on local machine (and working without problems / not hanging):

[2021-04-14T08:55:28Z TRACE polling::epoll] wait: epoll_fd=3, timeout=Some(569.999073677s)
[2021-04-14T08:55:28Z TRACE polling::epoll] modify: epoll_fd=3, fd=5, ev=Event { key: 18446744073709551615, readable: true, writable: false }
[2021-04-14T08:55:28Z TRACE mio::poll] registering with poller
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] Conn::read_head
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] Conn::read_head
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::io] read 424 bytes
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] parse_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] -> parse_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] Request.parse([Header; 100], [u8; 424])
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] Request.parse Complete(132)
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] <- parse_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] -- parse_headers
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::io] parsed 5 headers
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::conn] incoming body is content-length (292 bytes)
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::decode] decode; state=Length(292)
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::conn] incoming body completed
[2021-04-14T08:55:28Z TRACE jsonrpc_core::io] Request: {"jsonrpc":"2.0","id":"2","method":"panda_getEntryArguments","params":{"author":"fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8","schema":"0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702"}}.
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] get_entry_args
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] validated! EntryArgsRequest { author: Author("fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8"), schema: Hash("0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702") }
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] create back channel
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] sent to backend
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
[2021-04-14T08:55:28Z TRACE async_io::driver] block_on: sleep until notification
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] backend: inner task message=GetEntryArgs(EntryArgsRequest { author: Author("fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8"), schema: Hash("0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702") }, Sender { inner: Inner { complete: false, data: Lock { locked: false, data: UnsafeCell }, rx_task: Lock { locked: false, data: UnsafeCell }, tx_task: Lock { locked: false, data: UnsafeCell } } })
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] backend: GetEntryArgs request
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::methods::entry_args] method: start
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] backend: GetEntryArgs request done
[2021-04-14T08:55:28Z TRACE async_io::driver] block_on: sleep until notification
[2021-04-14T08:55:28Z DEBUG aquadoggo::rpc::api] received to backend
[2021-04-14T08:55:28Z DEBUG jsonrpc_core::io] Response: {"jsonrpc":"2.0","result":{"entryHashBacklink":null,"entryHashSkiplink":null,"lastSeqNum":null,"logId":12345},"id":"2"}.
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] encode_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] -> encode_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] Server::encode status=200, body=Some(Known(120)), req_method=Some(POST)
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] <- encode_headers
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::role] -- encode_headers
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::io] flushed 244 bytes
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Idle }
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] Conn::read_head
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::io] read 0 bytes
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::io] parse eof
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] State::close_read()
[2021-04-14T08:55:28Z DEBUG hyper::proto::h1::conn] read eof
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] State::close_write()
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] State::close_read()
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] State::close_write()
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
[2021-04-14T08:55:28Z TRACE hyper::proto::h1::conn] shut down IO complete
[2021-04-14T08:55:28Z TRACE mio::poll] deregistering handle with poller

Logs with trace flag running on remote machine (not working / hanging):

[2021-04-14T09:01:40Z TRACE polling::epoll] wait: epoll_fd=3, timeout=Some(599.948964038s)
[2021-04-14T09:01:40Z TRACE polling::epoll] modify: epoll_fd=3, fd=5, ev=Event { key: 18446744073709551615, readable: true, writable: false }
[2021-04-14T09:01:52Z TRACE mio::poll] registering with poller
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::conn] Conn::read_head
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::conn] Conn::read_head
[2021-04-14T09:01:52Z DEBUG hyper::proto::h1::io] read 424 bytes
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] parse_headers
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] -> parse_headers
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] Request.parse([Header; 100], [u8; 424])
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] Request.parse Complete(132)
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] <- parse_headers
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::role] -- parse_headers
[2021-04-14T09:01:52Z DEBUG hyper::proto::h1::io] parsed 5 headers
[2021-04-14T09:01:52Z DEBUG hyper::proto::h1::conn] incoming body is content-length (292 bytes)
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::decode] decode; state=Length(292)
[2021-04-14T09:01:52Z DEBUG hyper::proto::h1::conn] incoming body completed
[2021-04-14T09:01:52Z TRACE jsonrpc_core::io] Request: {"jsonrpc":"2.0","id":"2","method":"panda_getEntryArguments","params":{"author":"fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8","schema":"0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702"}}.
[2021-04-14T09:01:52Z DEBUG aquadoggo::rpc::api] get_entry_args
[2021-04-14T09:01:52Z DEBUG aquadoggo::rpc::api] validated! EntryArgsRequest { author: Author("fddf32769f21f1cc9b32d91370aa8cdb598b51fb40732353cd3b8fe6cdad6ae8"), schema: Hash("0040cf94f6d605657e90c543b0c919070cdaaf7209c5e1ea58acb8f3568fa2114268dc9ac3bafe12af277d286fce7dc59b7c0c348973c4e9dacbe79485e56ac2a702") }
[2021-04-14T09:01:52Z DEBUG aquadoggo::rpc::api] create back channel
[2021-04-14T09:01:52Z DEBUG aquadoggo::rpc::api] sent to backend
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
[2021-04-14T09:01:52Z TRACE hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
// from here on the server hangs ..

Replace schema logs with instance logs

Language

This issue uses new language:

  • an operation (formerly message) is a data change encoded in a Bamboo entry. It is identified by the hash of its encoding.
  • a document is the entirety of operations that are directed at a shared root CREATE operation. A document is identified by the hash of the root CREATE operation.
  • an instance is the result of applying any series of operations from a document. An instance is identified by the hash of the last operation applied.

Background

Bamboo lets us assign many logs to any public key. Up until now we have used this to organise p2panda entries by the schema they use. The reasoning was that nodes in the p2panda network would select data to replicate based on the schemas they are configured for, so we wanted to ease replication by storing data in the way it is needed later during replication.

During further specification of p2panda we discovered the impact this decision has on our ability to delete entry payloads between skiplinks. Even when a document is tombstoned, we cannot delete entry payloads between skiplinks in a log that is organised by schema, as long as at least one of those entries carries information required for a document that has not been tombstoned.

Organising operations by the document they are part of allows us to delete all entry payloads in those logs once the document author has tombstoned it.

Requirements

This issue asks to stop organising operations into logs by their association with a schema and instead organise them into logs by their association with a document (illustrated below).

  • replace schema logs with document logs
  • introduce a new way to retrieve all entries for a given schema
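
A minimal illustration of the storage change, using plain maps as stand-ins for the actual tables:

use std::collections::HashMap;

type Author = String;
type SchemaId = String;
type DocumentId = String;
type LogId = u64;

struct Logs {
    // After this change: one log per (author, document) pair
    by_document: HashMap<(Author, DocumentId), LogId>,
    // The new lookup this requires: which documents use a schema,
    // so that all entries for a schema can still be retrieved
    documents_by_schema: HashMap<SchemaId, Vec<DocumentId>>,
}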

Use different JSON RPC crate

The currently used JSON RPC crate jsonrpc is being deprecated; paritytech's development is slowly moving over to https://github.com/paritytech/jsonrpsee, which we should adopt at some point.

.. or this one looks cool, the developer has a panda profile picture: https://github.com/koushiro/async-jsonrpc - and it seems to use async-std as a runtime which is also cool.

Sadly both crates are under heavy development, so we might need to wait a little bit longer.

How about a combination of tide and jsonrpc-v2? https://github.com/kardeiz/jsonrpc-v2

Validate and resolve `AbstractQuery` with the help of `Schema` struct

When a node receives a GraphQL query, the query string is parsed into an AbstractQuery (see more here: https://github.com/p2panda/aquadoggo/pull/63/files), which is then used for validation and finally translated into SQL.

We need something which (sketched below):

  1. Validates the fields from the AbstractQuery matching them against the Schema
  2. Resolves relations as we don't know the schema hash from just looking at the query, but Schema knows about it
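
A sketch of those two steps with placeholder types loosely based on the linked PR; none of these are aquadoggo's actual shapes:

use std::collections::HashMap;

struct Schema {
    fields: Vec<String>,
    // relation field name -> hash of the target schema
    relations: HashMap<String, String>,
}

struct AbstractQuery {
    fields: Vec<String>,
    relations: Vec<String>,
}

#[derive(Debug)]
enum QueryError {
    UnknownField(String),
}

fn validate_and_resolve(query: &AbstractQuery, schema: &Schema) -> Result<Vec<String>, QueryError> {
    // 1. Every selected field must exist on the schema
    for field in &query.fields {
        if !schema.fields.contains(field) {
            return Err(QueryError::UnknownField(field.clone()));
        }
    }

    // 2. The query carries no schema hashes itself; the Schema resolves
    //    each relation field to its target schema hash
    query
        .relations
        .iter()
        .map(|name| {
            schema
                .relations
                .get(name)
                .cloned()
                .ok_or_else(|| QueryError::UnknownField(name.clone()))
        })
        .collect()
}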

Remove `document_id` from `DocumentView` (and associated types)

Wondering if we actually need document_id on the DocumentView struct. When working with it in the context of creating SchemaView and SchemaFieldView for Schema, it feels quite redundant and isn't validated in any meaningful way. Arbitrary ids could be passed when constructing Schema and we wouldn't know, as we reference SchemaView by PinnedRelation (DocumentViewId).

If it's not needed anywhere else I suggest removing it and just keeping document_id on Document.

Consider using `tokio` runtime

This will make it easier for us to use crates like qp2p or QUIC related things in the future. There are currently no frameworks using async_std runtime for QUIC.

Broken next entry args implementation

Next entry args selects log 2 as the next log as soon as 10 documents have been published by one author. This is because log ids are now stored as strings instead of integers, which causes SQL's ORDER BY clause to return lexicographic rather than numeric order when querying sorted log ids.
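
A self-contained demonstration of the underlying problem, lexicographic versus numeric ordering:

fn main() {
    // Log ids stored as strings sort lexicographically: "10" < "2"
    let mut as_strings = vec!["1", "2", "10", "11", "3"];
    as_strings.sort();
    println!("{:?}", as_strings); // ["1", "10", "11", "2", "3"] -- wrong

    // Stored (or cast) as integers, they sort numerically
    let mut as_numbers: Vec<u64> = as_strings.iter().map(|s| s.parse().unwrap()).collect();
    as_numbers.sort();
    println!("{:?}", as_numbers); // [1, 2, 3, 10, 11] -- correct
}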
