interledger / interledger-rs
An easy-to-use, high-performance Interledger implementation written in Rust
Home Page: http://interledger.rs
License: Other
IlpOrBtpPacket: an enum with a tuple variant of (request_id, IlpPacket) or a Settlement(u64) variant.
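A minimal sketch of what such an enum might look like; IlpPacket is just a placeholder type here, and the variant names are assumptions:

```rust
// Placeholder for the real ILP packet type.
#[derive(Debug)]
pub struct IlpPacket;

// Hypothetical sketch of the proposed wrapper enum.
#[derive(Debug)]
pub enum IlpOrBtpPacket {
    // A BTP message carrying an ILP packet, tagged with its request ID.
    Ilp(u32, IlpPacket),
    // A settlement notification for the given amount.
    Settlement(u64),
}
```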
This project could use a catchier name and one that rolls off the tongue better than "Interledger dot r s". Anyone have suggestions?
ILP is a request/response protocol, so it makes more sense for the core abstraction to be based around that flow rather than tokio's Stream + Sink. Passing request IDs around everywhere is a clunky way to match requests and responses, and most things that use the plugin need to know what the request was in order to process the response. tower-service is built for exactly this use case.
This will also make implementing the HTTP ledger plugin protocol more straightforward.
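To illustrate the request/response shape being proposed, here is a deliberately simplified sketch; the real tower-service `Service` trait returns a Future, whereas this one is synchronous, and the packet types are placeholders:

```rust
// Placeholder packet types for illustration only.
pub struct Prepare {
    pub amount: u64,
}
pub struct Fulfill;
pub struct Reject;

// Hypothetical request/response abstraction: one call per request,
// so no request IDs are needed to correlate responses.
pub trait IlpService {
    fn call(&mut self, prepare: Prepare) -> Result<Fulfill, Reject>;
}

// A trivial service that fulfills every Prepare it receives.
pub struct AlwaysFulfill;

impl IlpService for AlwaysFulfill {
    fn call(&mut self, _prepare: Prepare) -> Result<Fulfill, Reject> {
        Ok(Fulfill)
    }
}
```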
When making a STREAM payment to the SPSP receiver, the receiver appears to reject the final STREAM packet (which is probably trying to close the stream).
For example, if an SPSP sender sends 1 unit (in a 1-packet STREAM), the first packet (a Prepare with 1 unit) works and fulfills. But then there is a second Prepare (with amount=0), and the SPSP receiver always rejects this packet with an F99 error code.
Note also that the triggeredBy in this Reject packet is empty, but it should be the ILP address of the SPSP receiver.
Similar to the IlpAddress type from #69, it might be a good idea to define a new type, IlpAsset, which contains an asset_code: str (or, even better, an enum of defined assets) as well as an asset_scale: u8, which is well defined per asset.
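A minimal sketch of what this type might look like; the field names follow the wording above, but the exact representation is an assumption:

```rust
// Hypothetical sketch of the proposed IlpAsset type.
#[derive(Debug, Clone, PartialEq)]
pub struct IlpAsset {
    // e.g. "XRP"; could instead be an enum of known assets.
    pub asset_code: String,
    // Well defined per asset, e.g. 9 for XRP drops.
    pub asset_scale: u8,
}
```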
In the general case, the STREAM client doesn't seem to properly honor a connector response of F08_AMOUNT_TOO_LARGE.
Even if the STREAM client gets a few fulfillments, as soon as it exceeds the connector's max-packet amount (i.e., an F08 is encountered), it never reduces its packet size, which creates an infinite loop as the client keeps resending packets with an amount that is too high.
For example, here is some output from the SPSP pay command (spsp pay --receiver=... --amount=1000) where the connector has a max-packet amount of 10.
[2019-03-15T15:42:44Z INFO interledger_stream::client] Sending packet 1 with amount: 5 and encrypted STREAM packet: StreamPacket { sequence: 1, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 }, ConnectionNewAddressFrame { source_account: test.alice.sender } ] }
[2019-03-15T15:42:45Z INFO interledger_stream::client] Sending packet 2 with amount: 10 and encrypted STREAM packet: StreamPacket { sequence: 2, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z INFO interledger_stream::client] Sending packet 3 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 3, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO interledger_stream::client] Sending packet 4 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 4, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO interledger_stream::client] Sending packet 5 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 5, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:45Z INFO interledger_stream::client] Sending packet 6 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 6, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
[2019-03-15T15:42:45Z WARN interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
...
[2019-03-15T15:42:59Z WARN interledger_stream::congestion] Got F08: Amount Too Large Error without max packet amount details attached
[2019-03-15T15:42:59Z INFO interledger_stream::client] Sending packet 713 with amount: 20 and encrypted STREAM packet: StreamPacket { sequence: 713, ilp_packet_type: Prepare, prepare_amount: 0, frames: [ StreamMoneyFrame { stream_id: 1, shares: 1 } ] }
The repetition above will continue forever.
The BTP Server should store a Vec of connections per account instead of just one
Create an alternate version of the STREAM client that takes an OutgoingService and a list of Accounts, and decides in real time which outgoing account to send packets through. It should try sending through all of them and then base the decision for subsequent packets on the latency, error rate, and exchange rate of each one.
This should take inspiration from Multi-Path TCP.
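A sketch of one possible selection heuristic, assuming per-account stats are tracked; the scoring formula (favoring high exchange rate, penalizing latency and error rate) is an assumption, not a specified design:

```rust
// Hypothetical per-path statistics tracked by the multi-path client.
pub struct PathStats {
    pub latency_ms: f64,
    // Fraction of packets rejected, in 0.0..=1.0.
    pub error_rate: f64,
    pub exchange_rate: f64,
}

// Pick the account ID with the best score: higher exchange rate is better,
// higher latency and error rate are worse.
pub fn best_account(stats: &[(u64, PathStats)]) -> Option<u64> {
    stats
        .iter()
        .map(|(id, s)| (*id, s.exchange_rate / (s.latency_ms * (1.0 + s.error_rate))))
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(id, _)| id)
}
```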
Current thinking:
- Use a peer.settle.<currency> address prefix (credit to @adrianhopebailie for this idea)
- An IncomingService written in Rust. Using an atomic Lua script, the latest claim is set and the balance is updated accordingly in Redis

Open questions:
- How do we know whether a packet addressed to peer.settle.<currency> is an outgoing or incoming claim? Should the router look at the from account to figure out whether the from is "internal" or not?

Hi everybody, I started some work on improving error handling in another WIP branch here.
I began working on some tests but found it hard to test some error cases, so I took it upon myself to improve error handling in the process.
Due to the layered nature of ILP, I thought it would be useful to formally define the types of errors that could occur within each layer for each crate. The error handling and propagation scheme looks roughly like this:
Edit: I meant InvalidPacket and UnexpectedPacket in the diagram above.
Furthermore, each layer of the protocol is restricted to emitting errors only to the layer directly below it. Since no ILP packet handling is done explicitly at the BTP layer, I thought this made sense. Does anybody have different opinions on this?
Since the protocol stack is roughly ranked from bottom to top as BTP, ILP, STREAM, SPSP, it seemed appropriate to only allow each layer to communicate errors to the layer immediately beneath it (e.g. STREAM can propagate errors to ILP, which can be propagated to BTP and handled at that layer).
For now, I am purposefully omitting BTP Error packets and ILP Error packets, because the builder interface seems more appropriate for naming purposes, but it does seem to lead to some slight inconsistency. Should these cases be handled in the module-level error handling scheme described above?
Higher-level (sub)protocols and interfaces can still use the error types found in these crates by importing the appropriate error (e.g. use ilp::Error as IlpError).
This is also intended to reduce error handling dependencies. I think this is a more reasonable approach since each layer of the protocol is split into crates, therefore error types for each layer should be explicitly defined.
Let me know if you have any thoughts, concerns, or ideas, before I proceed.
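To make the downward-propagation scheme concrete, here is a minimal sketch under my own assumptions; the error type names are placeholders (only InvalidPacket and UnexpectedPacket come from the discussion above), and the real crates would define their own variants:

```rust
// Hypothetical STREAM-layer error type.
#[derive(Debug)]
pub enum StreamError {
    InvalidPacket(String),
    UnexpectedPacket,
}

// Hypothetical ILP-layer error type: one layer down, so it can wrap
// errors propagated from the STREAM layer above it.
#[derive(Debug)]
pub enum IlpError {
    Stream(StreamError),
    InvalidPacket(String),
}

// Downward conversion: STREAM errors propagate to ILP via From.
impl From<StreamError> for IlpError {
    fn from(err: StreamError) -> Self {
        IlpError::Stream(err)
    }
}
```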
Warning: this is definitely a premature optimization.
Here are some possibilities to consider:
- A Prefix type, along similar lines to the Address type, to avoid copying address prefixes

Create a bundle of all of the components that runs a combined sender, receiver, and connector, and expose an HTTP API to interact with it.
- API + more for managing settlement methods
- Helper module that combines the various streams that make up a PluginBtp
- xrp_address, settle_threshold, and settle_to in the account record (make sure the xrp_address isn't already linked to another account)

Right now the exchange rates have to be set via the API. It would be useful to have a component that fetches rates from external APIs, like the default rate backend in ilp-connector. That way, when you start up the node bundle, you wouldn't have to manually set exchange rates if you have accounts denominated in different currencies.
Replace them with something smaller, since they are only used for SPSP
Bindings using neon or wasm-pack (which one is more performant / easier to get working?)
This type should:
- be backed by Bytes
- implement Debug, Display, and FromStr
- implement Deref for the targets &str and &[u8]
- implement Serialize and Deserialize

This will let us use addresses efficiently and avoid copying them, while also enabling printing without constantly re-parsing the bytes as UTF-8.
Credit to @sentientwaffle for this idea.
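A rough sketch of the shape of such a type. Vec<u8> stands in for bytes::Bytes so the example is self-contained, and since Deref can only target one type in Rust, the &[u8] view goes through AsRef here; the real type would also implement Display, FromStr, Serialize, and Deserialize:

```rust
use std::ops::Deref;

// Hypothetical sketch of the proposed Address type.
pub struct Address(Vec<u8>);

impl Address {
    // The constructor validates UTF-8 once, so Deref never re-parses.
    pub fn new(s: &str) -> Address {
        Address(s.as_bytes().to_vec())
    }
}

impl Deref for Address {
    type Target = str;
    fn deref(&self) -> &str {
        // Safe: the constructor only accepts valid UTF-8.
        std::str::from_utf8(&self.0).expect("Address bytes are valid UTF-8")
    }
}

// Byte-level access without copying.
impl AsRef<[u8]> for Address {
    fn as_ref(&self) -> &[u8] {
        &self.0
    }
}
```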
Right now we're using kcov / cargo-kcov (a C++ program) for collecting code coverage statistics. The advantage of kcov is that it is fairly widely used for compiled languages. The disadvantages are that it's running into compilation issues right now, and it's not designed specifically for Rust, so it sometimes tracks coverage incorrectly.
As an alternative, we could switch to tarpaulin. It is written in Rust and designed for Rust projects. The disadvantage is that it's not as widely used as kcov (as far as I can tell).
Thoughts?
When the BTP client service starts up, it loads all of the accounts configured with outgoing btp_uris from the store (database) and tries to connect to them. Similarly, the CCP route broadcaster service checks the store every 30 seconds to see which accounts it should send broadcasts to.
Right now, if an account is added via the API or CLI after the node has started up, nothing is triggered on the services. The BTP client won't try to connect to the btp_uri if the account has one, and the account will only get route broadcasts (if send_routes is set to true) the next time the broadcast interval hits.
Should the chain of services be notified when a new account is added? This could be done by adding add_account and remove_account methods to the Service trait. The default implementation would be a no-op, but services that want to add logic for this could do so.
(Note that adding methods for when accounts connect and disconnect would make the service interface nearly equivalent to the Ledger Plugin Interface, minus sendMoney.)
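A sketch of the no-op-default idea; the trait and type names here are placeholders, not the crate's real definitions:

```rust
// Placeholder account record.
#[derive(Debug, Clone)]
pub struct Account {
    pub id: u64,
}

// Hypothetical trait with default no-op hooks: services that don't care
// about account changes can simply ignore them.
pub trait AccountAware {
    fn add_account(&mut self, _account: &Account) {}
    fn remove_account(&mut self, _account: &Account) {}
}

// A service that does react, e.g. a BTP client connecting to new accounts.
pub struct ConnectingService {
    pub connected: Vec<u64>,
}

impl AccountAware for ConnectingService {
    fn add_account(&mut self, account: &Account) {
        self.connected.push(account.id);
    }
    // remove_account keeps the default no-op.
}
```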
After a brief discussion with Evan, there appears to be a small memory leak in the current implementation.
When the connection closes, it leaves one future hanging on the client side and at least one future hanging on the server side. It might be the result of splitting the plugin into channels.
Tasks
(all my comments are based on commit 500577)
Since the codebase is heavy on generics, I suggest we rename the type parameters (and even add some guidelines to a CONTRIBUTING.md file) as follows:
- I for IncomingService
- O for OutgoingService
- IO in the struct declaration when it can be either (e.g. see the ValidatorService)
- S for AccountStore and the store variants
- A for Account
- F for functions

Also try to be consistent with the OutgoingService and IncomingService definitions. Let's aim for:
pub struct Service<S, I, A> {
    ilp_address: Address,
    store: S,
    next: I, // or O
    account_type: PhantomData<A>, // optional
}
It seems like the coverage statistics only show the unit test coverage, rather than the integration tests. It's especially hard to unit test the redis-related functionality so it would be nice if the integration tests that do capture that showed up in the stats.
Implement the Echo protocol as an IncomingService
. It should inspect the request to determine whether the packet corresponds to the Echo protocol and, if so, swap out the Prepare packet for one addressed to the destination address from the original packet data. As long as this service is inserted before the Router, nothing else should need to change to handle this protocol. The fulfillment will be returned as normal to the previous hop.
Replace the ()
error types with a proper error type
https://boats.gitlab.io/blog/post/await-decision-ii/
I'm working actively on drafting a stabilization report to propose stabilizing a minimum viable version of async/await in the 1.37 release of Rust. 1.37 will be released in mid-August, and branches on July 4.
Once that's out, we should get a PR that replaces all the futures handling with async/await calls.
Use optional features like the main tokio crate.
This also has the benefit of making all of the submodule types and functions available and searchable in the top-level crate documentation.
Possibly using diesel so we can support SQLite, Postgres, and MySQL.
Also create ways for the settlement engine(s) to interact with that store.
Currently the routes inside ilp-router are Bytes. We were talking earlier with @emschwartz about replacing them with the new Address type. However, that doesn't make much sense, since we want prefix matching, and some routes might even be empty.
We could create a new Route type to be used as the key of the hashmap, along with helper methods for it.
Is this worth putting effort into, or should we stick with the Bytes type here? Note that after #75 is merged, variables such as destination will be Address, so we'll have to convert them to Bytes for the starts_with etc. comparisons.
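A sketch of what a Route newtype could look like; Vec<u8> stands in for Bytes so the example is self-contained, and the method names are assumptions:

```rust
// Hypothetical Route newtype used as the routing-table key.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct Route(Vec<u8>);

impl Route {
    pub fn new(prefix: &[u8]) -> Route {
        Route(prefix.to_vec())
    }

    // Prefix matching; an empty route matches every destination.
    pub fn matches(&self, destination: &[u8]) -> bool {
        destination.starts_with(&self.0)
    }
}
```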
See https://github.com/codecov/example-rust/blob/master/.travis.yml for an example
I am going through the existing codebase and trying to add documentation wherever possible, using this as my reference.
Right now I am focused on updating the lower-level primitives that will not be exposed to the bindings. If there is a specific area where you would like to see more documentation, please let me know or contribute by submitting a PR.
Even though Redis is a fast store, we should avoid loading and parsing the Account details every time we handle a request. Especially if we need a copy of all of them in memory for the routing table, we should use an in-memory cache of account details and clone the Account object instead of re-loading the details from Redis every time we need them.
Use bytes to avoid copying data when (de)serializing packets.
The BTP and HTTP servers both accept incoming packets/connections from some TCP listener. It should be possible to share the listener and expose one port for all incoming connections.
Querying the SPSP endpoint returns a response that is missing the headers required by IL-RFC-0009, specifically Content-Type and Cache-Control.
Here is the current output:
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET /.well-known/pay HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.54.0
> Referer: rbose
> Accept: application/spsp4+json, application/spsp+json
>
< HTTP/1.1 200 OK
< content-length: 206
< date: Sat, 09 Mar 2019 18:04:33 GMT
<
{"destination_account":"private.local....","shared_secret":"..."}
It probably shouldn't be possible to create two different accounts that have the same ILP address, but the admin API currently allows this.
For the Redis store, using EVAL with the full Lua script causes that text to be sent to Redis each time. Instead, we can use the Script struct to cache the script automatically (and call it by its hash instead of sending the full text).
The main blocker before was the lack of support for asynchronously invoking Scripts, but that was added in redis-rs/redis-rs#206 and is included in versions >= 0.11.0-beta.1.
It should probably error out if it gets an Fxx error (other than F08), and retry (maybe with some backoff) if it gets a Txx error.
Based on comment from @sentientwaffle in #45 (comment)
futures::{Stream,Sink}
The packet parsing module should not accept packets whose data or address lengths exceed the max values
Integrate the API described in Settlement Architecture thread conclusion.
Kincaid from Kava started modifying the XRP paychan plugin to expose an HTTP API, though we may want to change the exact API it uses.
This test fails right now:
// std imports shown for context; the remaining types come from the stream
// and plugin modules and the futures/tokio crates
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

#[test]
fn listener_pushes_money() {
    let _ = env_logger::init();
let (plugin_a, plugin_b) = create_mock_plugins();
let got_money = Arc::new(AtomicBool::new(false));
let got_money_clone = Arc::clone(&got_money);
let run = StreamListener::bind(plugin_a, crypto::random_condition())
.map_err(|err| panic!(err))
.and_then(|(listener, conn_generator)| {
let (destination_account, shared_secret) =
conn_generator.generate_address_and_secret("test");
let client = connect_async(plugin_b, destination_account, shared_secret)
.map_err(|err| panic!(err))
.and_then(|conn| {
let handle_streams = conn.for_each(move |stream| {
let got_money_clone = Arc::clone(&got_money_clone);
let handle_money =
stream.money.clone().collect().and_then(move |amounts| {
assert_eq!(amounts, vec![100, 500]);
got_money_clone.swap(true, Ordering::SeqCst);
println!("got money");
Ok(())
});
tokio::spawn(handle_money);
let data: Vec<u8> = Vec::new();
let handle_data = read_to_end(stream.data, data)
.map_err(|err| panic!(err))
.and_then(|(_, data)| {
assert_eq!(data, b"here is some test data");
Ok(())
});
tokio::spawn(handle_data);
Ok(())
});
tokio::spawn(handle_streams);
Ok(())
});
tokio::spawn(client);
listener
.into_future()
.map_err(|e| panic!(e))
.and_then(|(next, _listener)| {
let (_conn_id, conn) = next.unwrap();
let stream = conn.create_stream();
stream
.money
.clone()
.send(100)
.map_err(|err| panic!(err))
.and_then(|_| {
println!("sent money");
stream
.money
.clone()
.send(500)
.map_err(|err| panic!(err))
.and_then(|_| {
write_all(stream.data, b"here is some test data")
.map_err(|err| panic!(err))
.map(|_| ())
})
})
.and_then(move |_| conn.close())
})
});
let mut runtime = Runtime::new().unwrap();
runtime.block_on(run).unwrap();
assert_eq!(got_money.load(Ordering::SeqCst), true);
}
It would be cool to make a deconstructed SPSP / STREAM receiver designed for AWS Lambda (using the new Rust runtime).
To build this we would need:
Other notes:
There are two ways to design this protocol. Either:
1. Only one party runs an HTTP server and the other connects as a client, or
2. Both peers run HTTP servers and send packets to each other's endpoints.
The problem with 1 is how to handle incoming Prepare packets on the client side. It could poll the server, but (long) polling is kind of clunky. We would also need IDs to correlate request and response packets. If we wanted to have multiple instances of the plugin running concurrently, they would all need to be feeding the packets they get into the same database, because we couldn't guarantee that the same instance gets the response to its Prepare. Finally, WebSockets were specifically designed for bidirectional communication between a client and server, so BTP probably serves this use case better than a purely HTTP-based protocol would.
Option 2 is much easier. To send outgoing Prepare packets, the sender would POST /ilp with the packet in the HTTP request body, and the HTTP response would contain the Fulfill or Reject packet (if we want idempotency, we could make it PUT /ilp/:uuid). This would work the same way in both directions. Every packet would carry the auth details (using the normal HTTP Authorization header), and we could optionally upgrade to HTTP/2 to speed the whole thing up. If there are other "sub-protocols", like payment channel claims now sent over BTP, those could just use a different URL (POST /xrp-paychan, for example).
@sappenin ran into an issue where he was testing with a large total send amount and a small max packet amount. Right now, STREAM only limits the amount of money it puts in flight if it gets T04 errors. It doesn't limit the number of packets it sends.
The congestion controller API should return an array of packet amounts for the sender to send (instead of the sender looping and repeatedly asking the controller how much it can put in flight). This would let the controller easily limit the number of packets in flight and let it try different values, for example when it's trying to figure out the path's max packet amount.
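A sketch of the batch-returning API under my own assumptions; the struct fields and limits are placeholders, and a real controller would also adjust max_packet_amount downward on F08:

```rust
// Hypothetical congestion controller that hands back the batch of packet
// amounts to send, capping both the per-packet amount and the packet count.
pub struct CongestionController {
    pub max_packet_amount: u64,
    pub max_packets_in_flight: usize,
    pub packets_in_flight: usize,
}

impl CongestionController {
    pub fn next_packet_amounts(&mut self, mut amount_left: u64) -> Vec<u64> {
        let mut amounts = Vec::new();
        // Stop once the amount is exhausted or the in-flight cap is reached.
        while amount_left > 0 && self.packets_in_flight < self.max_packets_in_flight {
            let amount = amount_left.min(self.max_packet_amount);
            amounts.push(amount);
            self.packets_in_flight += 1;
            amount_left -= amount;
        }
        amounts
    }
}
```

Because the controller owns the whole batch decision, the sender loop becomes a simple "send whatever the controller returns," and the in-flight packet count can never exceed the cap.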