interledger / interledger-rs

199 stars · 25 watchers · 70 forks · 6.39 MB

An easy-to-use, high-performance Interledger implementation written in Rust

Home Page: http://interledger.rs

License: Other

Rust 96.89% Dockerfile 0.34% Shell 1.60% JavaScript 0.62% Lua 0.54%
interledger ilp payment rust streaming micropayments interoperability blockchain nanopayments ethereum

interledger-rs's People

Contributors

0xask, aphelionz, bstrie, calvinlauyh, chawlapalak, darentuzi, dependabot-preview[bot], dora-gt, emschwartz, gakonst, kevinwmatthews, khoslaventures, kincaidoneil, koivunej, kristianwahlroos, ljedrz, mirko-von-leipzig, niklaslong, omertoast, pgarg-ripple, pradovic, sappenin, sentientwaffle, whalelephant


interledger-rs's Issues

Change order of streams

  • Replace IlpOrBtpPacket with an enum whose variants are a tuple of (request_id, IlpPacket) or Settlement(u64)
  • Make sure claim parser comes before ILP packet parser
  • Implement dummy settlement parser for BTP

Rename Interledger.rs?

This project could use a catchier name and one that rolls off the tongue better than "Interledger dot r s". Anyone have suggestions?

Change Plugin abstraction to use tower-service

ILP is a request/response protocol so it makes more sense for the core abstraction to be based around that flow, rather than tokio's Stream + Sink. Passing the request IDs around everywhere is a clunky way to match requests and responses, and most things that use the plugin will need to know what the request was to process the response. tower-service is built for this use case.

This will also make implementing the HTTP ledger plugin protocol more straightforward.
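To make the request/response flow concrete, here is a simplified, synchronous, std-only stand-in for the tower-style abstraction described above (the real tower `Service` trait is asynchronous and returns a `Future`; the `Prepare`, `Fulfill`, and `Reject` types here are minimal placeholders, not the crate's actual packet types):

```rust
// Simplified stand-in for a tower-style service: one Prepare in, one
// Fulfill-or-Reject out. No request IDs are needed, because the call
// itself ties the response back to the request.
pub struct Prepare {
    pub destination: String,
    pub amount: u64,
}

pub struct Fulfill {
    pub fulfillment: [u8; 32],
}

pub struct Reject {
    pub code: String,
}

pub trait IlpService {
    fn call(&mut self, request: Prepare) -> Result<Fulfill, Reject>;
}

// Trivial service used only to show the call shape.
pub struct AlwaysFulfill;

impl IlpService for AlwaysFulfill {
    fn call(&mut self, _request: Prepare) -> Result<Fulfill, Reject> {
        Ok(Fulfill { fulfillment: [0; 32] })
    }
}
```

Middleware (validators, balance logic, the router) would then wrap an inner `IlpService` rather than juggling request IDs on a Stream + Sink pair.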

SPSP Receiver Rejects final STREAM packet

When making a STREAM payment to the SPSP receiver, it appears the SPSP receiver is rejecting the final STREAM packet (which is probably trying to close the STREAM).

For example, if an SPSP sender sends 1 unit (in a 1-packet STREAM), then the first packet (a Prepare with amount=1) works and fulfills. But then there is a 2nd Prepare (with amount=0), and the SPSP receiver always rejects this packet with an F99 error code.

Note also that the triggeredBy in this reject packet is empty, but should be the ILP address of the SPSP receiver.

Create IlpAsset type

Similar to the IlpAddress type from #69, it might be a good idea to define a new type, IlpAsset, which contains asset_code: str (or, even better, an enum of well-defined assets) as well as asset_scale: u8, which is well defined per asset.
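A hypothetical sketch of what such a type could look like (none of these names exist in the codebase yet; the enum variants and the `to_scaled` helper are illustrative):

```rust
// Well-known asset codes plus a catch-all, so the scale can stay tied
// to the asset as suggested above.
#[derive(Debug, Clone, PartialEq)]
pub enum AssetCode {
    Xrp,
    Eth,
    Other(String),
}

#[derive(Debug, Clone, PartialEq)]
pub struct IlpAsset {
    pub code: AssetCode,
    pub scale: u8,
}

impl IlpAsset {
    pub fn new(code: AssetCode, scale: u8) -> Self {
        IlpAsset { code, scale }
    }

    /// Convert a standard-unit amount into this asset's smallest unit.
    pub fn to_scaled(&self, amount: u64) -> u64 {
        amount * 10u64.pow(self.scale as u32)
    }
}
```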

Stream Client doesn't properly honor F08_AMOUNT_TOO_LARGE

In the general case, the Stream client doesn't seem to be properly honoring a Connector response of F08_AMOUNT_TOO_LARGE.

Even if the StreamClient gets a few Fulfillments, as soon as it exceeds the Connector's max-packet amount (i.e., an F08 is encountered), the StreamClient never reduces its packet size, which simply creates an infinite loop as the StreamClient keeps repeating packets with an amount too high.

For example, here is some output from the SPSP pay command (spsp pay --receiver=... --amount=1000) where the Connector has a max_packet amount of 10.

[2019-03-15T15:42:44Z INFO  interledger_stream::client] Sending packet 1 with amount: 5 and encrypted STREAM packet: StreamPacket { sequence: 1, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 }, ConnectionNewAddressFrame { source_account: test.alice.sender } ] }
[2019-03-15T15:42:45Z INFO  interledger_stream::client] Sending packet 2 with amount: 10 and encrypted STREAM packet: StreamPacket { sequence: 2, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z INFO  interledger_stream::client] Sending packet 3 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 3, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN  interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO  interledger_stream::client] Sending packet 4 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 4, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN  interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO  interledger_stream::client] Sending packet 5 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 5, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN  interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO  interledger_stream::client] Sending packet 6 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 6, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN  interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached

...

[2019-03-15T15:42:59Z WARN  interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:59Z INFO  interledger_stream::client] Sending packet 713 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 713, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }

The repetition above will continue forever.
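A minimal std-only sketch of the missing behavior: on an F08, the congestion controller should shrink its packet amount instead of retrying the same value forever. The method name `on_f08` and the fallback of halving when no max-packet details are attached are assumptions for illustration, not the actual `interledger_stream::congestion` API:

```rust
// Tracks the packet amount the sender should use next.
pub struct CongestionController {
    packet_amount: u64,
}

impl CongestionController {
    pub fn new(start: u64) -> Self {
        CongestionController { packet_amount: start }
    }

    pub fn packet_amount(&self) -> u64 {
        self.packet_amount
    }

    /// Called for every F08 reject. `max_packet_amount` is the limit from
    /// the F08 data, when the connector attaches it.
    pub fn on_f08(&mut self, max_packet_amount: Option<u64>) {
        self.packet_amount = match max_packet_amount {
            // Details present: clamp directly to the connector's limit.
            Some(max) => self.packet_amount.min(max),
            // No details attached: back off multiplicatively.
            None => (self.packet_amount / 2).max(1),
        };
    }
}
```

With this, the run above would drop from 20 back down below the connector's limit instead of looping at amount 20 forever.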

Multi-path STREAM

Create an alternate version of the STREAM client that takes an OutgoingService and a list of Accounts, and decides in real-time which outgoing account to send packets through. It should try sending through all of them and then base the decision for subsequent packets on the latency, error rate, and exchange rate for each of them.

This should take inspiration from Multi-Path TCP.

Design for payment channel-based settlement engine

Current thinking:

  • Payment channel claims are sent in ILP packets addressed to peer.settle.<currency> (credit to @adrianhopebailie for this idea)
  • To support backwards compatibility with the JS plugins, the BTP service can convert claims sent in ILP packets to/from claims sent in BTP packets
  • Incoming claims are processed by an IncomingService written in Rust. Using an atomic Lua script, the latest claim is set and the balance updated accordingly in Redis
  • Opening and closing payment channels, as well as topping up and withdrawing funds, is handled by a settlement engine written in JavaScript (to take advantage of the ripple-lib SDK)
  • The settlement engine polls the account balances at a regular interval (could be as short as every 100-200ms) to check when outgoing claims should be sent out
  • The settlement engine signs payment channel claims and puts them in ILP packets to send to the relevant peer

Open questions:

  • If the BTP service is converting claims put in ILP packets to/from claims in BTP packets, how does it know whether it should do that conversion for a particular peer? Should that be a configuration option or based on something in the BTP handshake?
  • When using ILP-Over-HTTP, should the settlement engine make outgoing HTTP calls directly to the peer or should it always forward packets through a connector instance?
  • If the connector is supposed to forward outgoing payment channel claims to a peer, how does it know whether a packet addressed to peer.settle.<currency> is an outgoing or incoming claim? Should the router look at the from account to figure out whether the from is "internal" or not?
  • Should the same settlement engine do both payment channel and on-ledger settlement? If so, would the preference be configured on the settlement engine, per-account, or based on some factors like how often the engine has to do on-ledger operations with a particular account?

Error Handling Improvements

Hi everybody, I started some work on improving error handling in another WIP branch here.

Motivation

I began working on some tests, but found it hard to test some error cases. In an attempt to improve the creation of tests, I took it upon myself to improve error handling in the process.

Implementation

Due to the layered nature of ILP, I thought it would be useful to formally define the types of errors that could occur within each layer for each crate. The error handling and propagation scheme looks roughly like this:

(error_handling diagram)
edit: I meant InvalidPacket and UnexpectedPacket in the diagram above...

Furthermore, each layer of the protocol is restricted to only emitting errors to the layer directly below it. Since no ILP packet handling is done explicitly at the BTP layer, I thought this made sense. Does anybody have different opinions on this?

Since the protocol stack is roughly ranked from bottom to top as: BTP, ILP, STREAM, SPSP, I thought it was appropriate to only allow the layers to communicate errors to the layer immediately beneath. (e.g. STREAM can propagate errors to ILP, which can be propagated to BTP, and handled at that layer.)

For now, I am purposefully omitting BTP Error packets and ILP Error packets, because the builder interface seems more appropriate for naming purposes, but it does seem to lead to some slight inconsistency. Should these cases be handled in the module-level error handling case described above?

Higher-level (sub)protocols and interfaces are still able to utilize error types found in these crates by importing the appropriate error. (e.g. use ilp::Error as IlpError)

Notes

This is also intended to reduce error handling dependencies. I think this is a reasonable approach: since each layer of the protocol is split into its own crate, the error types for each layer should be explicitly defined.


Let me know if you have any thoughts, concerns, or ideas, before I proceed.

Speed up Router

Warning: this is definitely a premature optimization.

Here are some possibilities to consider:

  • Change the routing table to use a tree structure for faster lookup, potentially along the lines of this link that @tarcieri shared
  • Cache the most recent destination -> next hop mappings to avoid looking them up for every packet if we expect to have many packets coming through for the same destination
  • Use a parallel iterator to iterate through the routing table
  • Add a Prefix type, along similar lines to the Address type, to avoid copying address prefixes
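For reference, here is the longest-prefix-match semantics any faster structure has to preserve, sketched std-only over a flat `HashMap` (the tree-based version would replace this linear scan; `next_hop` is an illustrative name, not the router's actual API):

```rust
use std::collections::HashMap;

/// Find the account for the longest route prefix matching `destination`.
/// A prefix tree would answer this without scanning every entry.
pub fn next_hop<'a>(
    table: &'a HashMap<String, String>,
    destination: &str,
) -> Option<&'a String> {
    table
        .iter()
        .filter(|(prefix, _)| destination.starts_with(prefix.as_str()))
        .max_by_key(|(prefix, _)| prefix.len())
        .map(|(_, account)| account)
}
```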

Interledger Node

Create a bundle of all of the components that runs a combined sender, receiver, and connector, and exposes an HTTP API to interact with it.

API

  • Send an SPSP payment (POST /spsp)
  • Create account (POST /accounts)
  • Query account details (GET /accounts/:id)
  • List accounts (GET /accounts)

+ more for managing settlement methods

Export PluginBTP

Helper module that combines the various streams that make up a PluginBtp

Add settlement

  • On-ledger XRP settlement engine (done in a7ed19b)
  • Modify API to include optional xrp_address, settle_threshold, settle_to in account record (make sure xrp_address isn't already linked to another account)
  • Docker container that includes the Interledger node and settlement engines
  • SE should reduce balance before sending settlement (and roll it back if the settlement fails)
  • Reconnect SE to ledger if it disconnects
  • Add support for XRP payment channels
  • On-ledger ETH
  • Lightning
  • ETH payment channels
  • Daemon for updating exchange rates

Fetch exchange rates from an external API

Right now the exchange rates have to be set via the API. It would be useful to have a component that fetches rates from external APIs like the default rate backend from the ilp-connector. That way, when you start up the node bundle, you wouldn't have to manually set the exchange rates if you have accounts denominated in different currencies.

  • Fetch exchange rates from external APIs
  • Create a service like the CCP routing manager that spawns a task that triggers an update to the rates on the configured interval
  • Include this service in the node bundle by default but include a configuration option to disable it

Node.js or wasm module

Bindings using neon or wasm-pack (which one is more performant / easier to get working?)

Create an IlpAddress type

This type should:

  • Wrap a Bytes
  • Check that the address conforms to the spec
  • Implement Debug, Display, FromStr
  • Implement Deref for the targets &str and &[u8]
  • Implement serde's Serialize, Deserialize

This will let us efficiently use the addresses and avoid copying them but also enable printing them without constantly re-parsing the bytes as utf8.

Credit to @sentientwaffle for this idea.
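A dependency-free sketch of the proposed type (the real version would wrap `Bytes` to avoid copies; `String` is used here so the example stays self-contained, and the validation below only checks a few of the spec's rules):

```rust
use std::fmt;
use std::ops::Deref;
use std::str::FromStr;

#[derive(Debug, Clone, PartialEq)]
pub struct IlpAddress(String);

impl FromStr for IlpAddress {
    type Err = &'static str;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Partial check only: non-empty, ASCII, no trailing separator.
        if s.is_empty() || !s.is_ascii() || s.ends_with('.') {
            return Err("invalid ILP address");
        }
        Ok(IlpAddress(s.to_string()))
    }
}

// Deref to str gives callers starts_with, split, etc. for free.
impl Deref for IlpAddress {
    type Target = str;
    fn deref(&self) -> &str {
        &self.0
    }
}

impl fmt::Display for IlpAddress {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str(&self.0)
    }
}
```

Because parsing happens once at construction, printing and prefix checks never have to re-validate the bytes as UTF-8.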

Change code coverage provider?

Right now we're using kcov / cargo-kcov (a C++ program) for collecting code coverage statistics. The advantage of kcov is that it seems to be fairly widely used for compiled languages. The disadvantage is that it's running into compilation issues right now and it's not designed specifically for rust so it sometimes tracks the coverage incorrectly.

As an alternative, we could switch to using tarpaulin instead. It is written in Rust and designed for Rust projects. The disadvantage is that it's not as widely used as kcov (as far as I can tell).

Thoughts?

Should the Service trait have methods for adding/removing accounts?

When the BTP client service starts up, it loads all of the accounts that are configured with outgoing btp_uris from the store (database) and tries to connect to them. Similarly, the CCP route broadcaster service looks in the store every 30 seconds to check what accounts it should send broadcasts to.

Right now, if an account is added via the API or CLI after the node has started up, nothing will be triggered on the services. The BTP client won't try to connect to the btp_uri if the account has one and the account will only get route broadcasts (if send_routes is set to true) the next time the broadcast interval hits.

Should the chain of services be notified when a new account is added? This could be done by having add_account and remove_account methods on the Service trait. The default implementation would be a no-op, but services that do want to add logic for this could do so.

(Note that adding methods for when accounts connect and disconnect would make the service interface nearly equivalent to the Ledger Plugin Interface, minus sendMoney)
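The default-no-op idea can be sketched std-only like this (`ServiceHooks`, the `Account` fields, and `BtpClient` are placeholders for illustration, not the crate's real types):

```rust
// Placeholder for the crate's account type.
pub struct Account {
    pub id: u64,
    pub btp_uri: Option<String>,
}

pub trait ServiceHooks {
    // Defaults are no-ops, so existing services compile unchanged.
    fn add_account(&mut self, _account: &Account) {}
    fn remove_account(&mut self, _account: &Account) {}
}

// A service that reacts to new accounts, e.g. by connecting to their btp_uri
// immediately instead of waiting for the next broadcast interval.
pub struct BtpClient {
    pub connected: Vec<u64>,
}

impl ServiceHooks for BtpClient {
    fn add_account(&mut self, account: &Account) {
        if account.btp_uri.is_some() {
            self.connected.push(account.id);
        }
    }
}
```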

STREAM Memory Leak

After a brief discussion with Evan, there appears to be a small memory leak in the current implementation.

When the connection closes, it leaves one future hanging on the client side, and at least one future hanging on the server side. It might be the result of splitting the plugin into channels.

Tasks

  • Reproduce the issue.
  • Confirm/deny the issue is the result of channel usage.

Rename generic parameters for consistency

(All my comments are based on commit 500577.)

Since the codebase is heavy on generics, I suggest we rename the following (and even add some guidelines to a CONTRIBUTING.md file):

  • I for IncomingService
  • O for OutgoingService
  • IO in the struct declaration when it can be either (e.g. check the ValidatorService)
  • S for AccountStore and the store variants
  • A for Account
  • F for functions

Also try to be consistent with the Outgoing and IncomingService definitions.

Let's target:

    pub struct Service<S, I, A> {
        ilp_address: Address,
        store: S,
        next: I, // or O
        account_type: PhantomData<A>, // optional
    }

Integration tests don't show up in coverage statistics

It seems like the coverage statistics only show the unit test coverage, rather than the integration tests. It's especially hard to unit test the redis-related functionality so it would be nice if the integration tests that do capture that showed up in the stats.

Add echo protocol handler

Implement the Echo protocol as an IncomingService. It should inspect the request to determine whether the packet corresponds to the Echo protocol and, if so, swap out the Prepare packet for one addressed to the destination address from the original packet data. As long as this service is inserted before the Router, nothing else should need to change to handle this protocol. The fulfillment will be returned as normal to the previous hop.
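A std-only sketch of the swap described above. The `ECHOECHOECHOECHO` magic prefix matches the echo protocol RFC, but the minimal `Prepare` type is a placeholder, and treating the bytes after the prefix as a raw UTF-8 address is a simplification (the real data is OER-encoded):

```rust
#[derive(Debug, Clone, PartialEq)]
pub struct Prepare {
    pub destination: String,
    pub data: Vec<u8>,
}

// Echo packets are addressed to the node itself and carry this prefix.
const ECHO_PREFIX: &[u8] = b"ECHOECHOECHOECHO";

pub fn is_echo_request(prepare: &Prepare, own_address: &str) -> bool {
    prepare.destination == own_address && prepare.data.starts_with(ECHO_PREFIX)
}

/// If this is an echo request, re-address the Prepare to the source address
/// carried in the data; otherwise pass the packet through untouched so the
/// Router handles it as usual.
pub fn handle(prepare: Prepare, own_address: &str) -> Prepare {
    if is_echo_request(&prepare, own_address) {
        let source =
            String::from_utf8_lossy(&prepare.data[ECHO_PREFIX.len()..]).into_owned();
        Prepare { destination: source, data: Vec::new() }
    } else {
        prepare
    }
}
```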

Add an SQL-backed Store

Possibly using diesel so we can support SQLite, Postgres, and MySQL.

Also create ways for the settlement engine(s) to interact with that store.

Create Route type

Currently the routes inside ilp-router are Bytes. We were earlier talking with @emschwartz about replacing them with the new Address type. However, this doesn't make much sense, as we want prefix matching. Some routes might even be empty.

We could create a new Route type to be used as the key of the hashmap alongside helper methods for it.

Is this worth putting effort into, or should we stick with the Bytes type here? Note that after merging #75, variables such as destination will be Address, and as such we'll have to convert them to Bytes for the starts_with etc. comparisons.
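A sketch of what such a newtype could look like (purely illustrative; it keeps the raw bytes for cheap prefix matching, including the empty catch-all route, while giving the comparison a name):

```rust
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct Route(Vec<u8>);

impl Route {
    pub fn new(prefix: impl Into<Vec<u8>>) -> Self {
        Route(prefix.into())
    }

    /// Does this route match the given destination address bytes?
    /// The empty route matches everything, acting as a default route.
    pub fn matches(&self, destination: &[u8]) -> bool {
        destination.starts_with(&self.0)
    }
}
```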

Update Documentation

I am currently going through the existing codebase and trying to add documentation wherever possible, using this as my reference.

  • ILP Packet
  • BTP Packet
  • STREAM Packet
  • Errors

Right now, I am currently focused on updating the lower-level primitives that will not be exposed to the bindings. If there is a specific area that you would like to see more documentation for, please let me know or contribute by submitting a PR.

Avoid repeatedly loading account details from Redis

Even though Redis is a fast store, we should avoid loading and parsing the Account details every time we handle a request. Especially if we need a copy of all of them in-memory for the routing table, we should use an in-memory cache of account details and clone the Account object instead of re-loading the details from Redis every time we need them.

Add required HTTP Response headers to SPSP Endpoint

Querying the SPSP endpoint returns a response that is missing the headers required by IL-RFC-0009, specifically: Content-Type and Cache-Control.

Here is the current output:

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /.well-known/pay HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.54.0
> Referer: rbose
> Accept: application/spsp4+json, application/spsp+json
> 
< HTTP/1.1 200 OK
< content-length: 206
< date: Sat, 09 Mar 2019 18:04:33 GMT
< 
{"destination_account":"private.local....","shared_secret":"..."}
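For reference, the headers the response should carry can be sketched as plain data (the header values follow my reading of IL-RFC-0009; in the actual server they would be set on the HTTP response builder):

```rust
/// The response headers IL-RFC-0009 expects on a successful SPSP query,
/// alongside the JSON body shown above.
pub fn spsp_response_headers() -> Vec<(&'static str, &'static str)> {
    vec![
        // Result of content negotiation for an SPSP4 query.
        ("Content-Type", "application/spsp4+json"),
        // Receivers rotate shared secrets, so responses must not be cached.
        ("Cache-Control", "no-cache"),
    ]
}
```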

Use redis::Script to cache lua scripts in Redis

For the Redis store, using EVAL with the full lua script causes that text to be sent to Redis each time. Instead of doing this, we can use the Script struct to handle automatically caching the script (and calling it by its hash instead of the full text).

The main blocker to doing this before was the lack of support for asynchronously invoking Scripts, but that was added in redis-rs/redis-rs#206 and included in versions >= 0.11.0-beta.1.

Add new settlement integration

Integrate the API described in Settlement Architecture thread conclusion.

  • Decide on the exact API calls
  • Send settlement (from Rust to the Settlement Engine)
  • Receive messages sent from peer Settlement Engine (from Rust to the Settlement Engine)
  • Send outgoing messages (from the Settlement Engine to Rust)
  • Receive settlement (from the Settlement Engine to Rust)

Kincaid from Kava started modifying the XRP paychan plugin to expose an HTTP API, though we may want to change the exact API it uses.

Fix sending from STREAM listener to client

This test fails right now:

    #[test]
    fn listener_pushes_money() {
        let _ = env_logger::init();
        let (plugin_a, plugin_b) = create_mock_plugins();
        let got_money = Arc::new(AtomicBool::new(false));
        let got_money_clone = Arc::clone(&got_money);

        let run = StreamListener::bind(plugin_a, crypto::random_condition())
            .map_err(|err| panic!(err))
            .and_then(|(listener, conn_generator)| {
                let (destination_account, shared_secret) =
                    conn_generator.generate_address_and_secret("test");
                let client = connect_async(plugin_b, destination_account, shared_secret)
                    .map_err(|err| panic!(err))
                    .and_then(|conn| {
                        let handle_streams = conn.for_each(move |stream| {
                            let got_money_clone = Arc::clone(&got_money_clone);
                            let handle_money =
                                stream.money.clone().collect().and_then(move |amounts| {
                                    assert_eq!(amounts, vec![100, 500]);
                                    got_money_clone.swap(true, Ordering::SeqCst);
                                    println!("got money");
                                    Ok(())
                                });
                            tokio::spawn(handle_money);

                            let data: Vec<u8> = Vec::new();
                            let handle_data = read_to_end(stream.data, data)
                                .map_err(|err| panic!(err))
                                .and_then(|(_, data)| {
                                    assert_eq!(data, b"here is some test data");
                                    Ok(())
                                });
                            tokio::spawn(handle_data);

                            Ok(())
                        });
                        tokio::spawn(handle_streams);
                        Ok(())
                    });
                tokio::spawn(client);

                listener
                    .into_future()
                    .map_err(|e| panic!(e))
                    .and_then(|(next, _listener)| {
                        let (_conn_id, conn) = next.unwrap();
                        let stream = conn.create_stream();

                        stream
                            .money
                            .clone()
                            .send(100)
                            .map_err(|err| panic!(err))
                            .and_then(|_| {
                                println!("sent money");
                                stream
                                    .money
                                    .clone()
                                    .send(500)
                                    .map_err(|err| panic!(err))
                                    .and_then(|_| {
                                        write_all(stream.data, b"here is some test data")
                                            .map_err(|err| panic!(err))
                                            .map(|_| ())
                                    })
                            })
                            .and_then(move |_| conn.close())
                    })
            });
        let mut runtime = Runtime::new().unwrap();
        runtime.block_on(run).unwrap();
        assert_eq!(got_money.load(Ordering::SeqCst), true);
    }

SPSP / STREAM Server Lambda Function

It would be cool to make a deconstructed SPSP / STREAM receiver designed for AWS Lambda (using the new Rust runtime).

To build this we would need:

  • An HTTP-based ledger plugin protocol (instead of BTP over WebSockets)
  • A standalone SPSP Server Lambda function that handles just the HTTPS requests
  • A STREAM receiver function that processes a single incoming ILP Prepare with a STREAM packet attached and returns the fulfillment + response packet (implemented in #29)

Other notes:

  • The STREAM receiver should probably write details of the money received into a database like DynamoDB (probably using rusoto)
  • The server secret used to generate the STREAM shared secret would probably be passed in via environment variable

HTTP-based ledger plugin

There are two ways to design this protocol. Either:

  1. One side is the HTTP client and the other is the HTTP server
  2. Both sides are HTTP clients and servers

The problem with 1 is how to handle incoming Prepare packets on the client side. It could poll the server, but (long) polling is kind of clunky. We would also need IDs to correlate request and response packets. If we wanted to have multiple instances of the plugin running concurrently, they would all need to be feeding the packets they get into the same database, because we couldn't guarantee that the same instance gets the response to its Prepare. Finally, WebSockets were specifically designed for bidirectional communication between a client and server, so BTP probably serves this use case better than a purely HTTP-based protocol would.

Option 2 is much easier. To send outgoing Prepare packets, the sender would POST /ilp with the packet in the HTTP request body and the HTTP response would contain the Fulfill or Reject packet (if we want idempotency, we could make it PUT /ilp/:uuid). This would work the same way in both directions. Every packet would carry the auth details (using the normal HTTP Authorization header), and we could optionally upgrade to HTTP/2 to speed the whole thing up. If there are other "sub protocols" like payment channel claims now sent over BTP, those could just use a different URL (POST /xrp-paychan for example).
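The request shape for option 2 can be sketched std-only like this (the `IlpHttpRequest` struct and `outgoing_prepare` helper are hypothetical; a real implementation would use an HTTP client such as hyper and a proper UUID type for the idempotency key):

```rust
pub struct IlpHttpRequest {
    pub method: &'static str,
    pub path: String,
    pub content_type: &'static str,
    pub body: Vec<u8>,
}

/// Build the outgoing request for a Prepare packet. With an idempotency
/// key, retries of the same packet hit the same URL (PUT /ilp/:uuid);
/// without one, it is a plain POST /ilp.
pub fn outgoing_prepare(packet: Vec<u8>, idempotency_key: Option<&str>) -> IlpHttpRequest {
    let (method, path) = match idempotency_key {
        Some(key) => ("PUT", format!("/ilp/{}", key)),
        None => ("POST", "/ilp".to_string()),
    };
    IlpHttpRequest {
        method,
        path,
        content_type: "application/octet-stream",
        body: packet,
    }
}
```

The Fulfill or Reject packet comes back in the HTTP response body, so the request/response correlation falls out of HTTP itself.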

Stream: Limit # of packets in flight

@sappenin ran into an issue where he was testing with a large total send amount and a small max packet amount. Right now, STREAM only limits the amount of money it puts in flight if it gets T04 errors. It doesn't limit the number of packets it sends.

The congestion controller API should return an array of packet amounts the sender should send (instead of the sender looping and just asking the controller how much it can put in flight). This would let the controller easily limit the number of packets in flight and let it try different values, for example, when it's trying to figure out the path's max packet amount.
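The proposed API can be sketched std-only like this (names and limits are illustrative, not the actual `interledger_stream::congestion` interface):

```rust
pub struct InFlightLimiter {
    pub max_packet_amount: u64,
    pub max_packets_in_flight: usize,
}

impl InFlightLimiter {
    /// Split `amount_left` into the packet amounts the sender should put
    /// in flight right now. Caps both the per-packet amount and the number
    /// of packets, so a large total with a small max packet amount no
    /// longer floods the path.
    pub fn next_packet_amounts(&self, amount_left: u64) -> Vec<u64> {
        let mut amounts = Vec::new();
        let mut remaining = amount_left;
        while remaining > 0 && amounts.len() < self.max_packets_in_flight {
            let amount = remaining.min(self.max_packet_amount);
            amounts.push(amount);
            remaining -= amount;
        }
        amounts
    }
}
```

The sender just sends exactly what it is handed, and the controller can vary the amounts between calls while it probes for the path's max packet amount.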
