awalanetwork / specs
Awala Protocol Suite Specifications

Home Page: https://specs.awala.network

License: Creative Commons Attribution Share Alike 4.0 International

Ruby 67.88% HTML 32.12%
delay-tolerant-network sneakernet messaging pki relaynet awala

specs's Introduction

Awala Protocol Suite Specifications

This is a Jekyll site for all the official Awala specifications. A live version of the site is available at specs.awala.network.

Run locally

If this is the first time you're generating the documentation, start by checking that Ruby and Bundler (bundle) are installed and available in your $PATH. Then install the Ruby dependencies for this project, including Jekyll:

bundle install --path vendor/bundle

You're now ready to serve the site locally:

  1. Start Jekyll:
    bundle exec jekyll serve
  2. Go to http://localhost:4000.

specs's People

Contributors

dependabot[bot], gnarea


Forkers

junming88

specs's Issues

Draft Service Message Multicast protocol

Executive summary

This protocol will allow a given endpoint to send the same parcel to multiple recipients, in such a way that (1) the sender only delivers a single message and its gateway will propagate it and (2) each recipient is able to decrypt the message with their corresponding key.

Technical details

We're already using CMS EnvelopedData values to serialise the ciphertext and we could just add recipients with minimal changes. However, it'd be good to consider other approaches just in case.
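For illustration, the multi-recipient pattern EnvelopedData uses -- encrypt the content once under a content-encryption key (CEK), then wrap that CEK separately for each recipient -- can be sketched as follows. This is a toy: the XOR keystream "cipher" is NOT secure and stands in for real CMS content encryption and key wrapping, and all names are hypothetical.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher built from SHA-256. Illustrative only, NOT secure.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(plaintext: bytes, recipient_keys: dict) -> dict:
    """Encrypt once, wrap the CEK per recipient (as in EnvelopedData's RecipientInfos)."""
    cek = os.urandom(32)  # one content-encryption key shared by all recipients
    return {
        "ciphertext": _keystream_xor(cek, plaintext),
        "recipients": {
            rid: _keystream_xor(kek, cek) for rid, kek in recipient_keys.items()
        },
    }

def open_envelope(envelope: dict, rid: str, kek: bytes) -> bytes:
    """Each recipient unwraps the CEK with their own key, then decrypts."""
    cek = _keystream_xor(kek, envelope["recipients"][rid])
    return _keystream_xor(cek, envelope["ciphertext"])
```

The point is that the sender produces a single ciphertext regardless of the number of recipients; only the small per-recipient key wraps grow with the recipient list.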

I've had a very superficial look at the Signal Private Group System but it seems very promising and I think we could use it to implement this functionality.

See also: https://blog.phnx.im/rfc-9420-mls/

See #43 for the broadcast counterpart.

Private gateways are unable to authorise public gateways to send them cargoes

Which means that, for example, the Android Gateway will refuse cargo from the Relaynet-Internet Gateway, even though cargo can still be sent in the opposite direction.

The solution should be to issue short-lived, single-use certificates to public gateways and attach them to the CCA. Public gateways won't have to store them: they can simply extract the certificate from the CCA and use it to sign outgoing cargo.

In other words, we'll be using delivery authorisations like the ones we currently use to deliver parcels.

Draft protocol to allow endpoints to stream data via a public gateway

To support applications like VoIP, for example.

The public gateway will effectively act as a proxy. All the usual authentication and authorisation from Awala will apply, so only pre-authorised endpoints will be able to initiate the stream. Likewise, the stream will be end-to-end encrypted, although this would have to be implemented in a totally different way using stream ciphers (we can't use parcels or CMS EnvelopedData).

Unlike #47, peers won't be able to see each other's IP addresses.

Draft "CoEmail" binding to relay cargo via email

This would come in handy when a courier reaches a place where the Internet is censored, but people can access email.

There are two ways to achieve this:

  • My preferred approach would be to add support for "alternative public addresses" for public gateways, so they can optionally specify an email address.
  • Alternatively, the courier could send all cargoes to the same email address and the recipient there would be responsible for routing them to their corresponding public gateways.

Note that the device sending the cargo will need a unique email address at which to receive incoming cargo.

We may want to use steganography (#44). See also #40.

Provide method to retrieve public node certificate and initial session key certificate

To make it easy/scalable to distribute certificates for gateways and endpoints.

In PoHTTP, it could be a GET request with an Accept header like application/vnd.relaynet.node-certificate-set and its successful response could be a JSON document with the following properties:

  • node (required): The PEM-encoded serialization of the endpoint certificate.
  • session (optional): The PEM-encoded serialization of the endpoint's initial key certificate per the Channel Session Protocol.
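A minimal sketch of how a server might assemble that response body. Only the node/session property names come from the proposal above; the function name and placeholder PEM value are hypothetical.

```python
import json
from typing import Optional

# Hypothetical placeholder; a real value would be a PEM-serialised certificate.
NODE_CERT_PEM = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

def build_certificate_set(node_pem: str, session_pem: Optional[str] = None) -> str:
    """Build the body for a response to a GET with
    Accept: application/vnd.relaynet.node-certificate-set."""
    body = {"node": node_pem}          # required: the node's certificate
    if session_pem is not None:
        body["session"] = session_pem  # optional: initial Channel Session key cert
    return json.dumps(body)
```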

Note that in Internet-based PDCs like PoHTTP it wouldn't make sense to retrieve the certificate of the gateway. It'd only apply to endpoints.

On the other hand, in internal PDCs like PoWebSockets, Internet gateways could use this mechanism to expose their certificate.

Cargo Relay Binding is unnecessarily complicated, rigid and fragile

The cargo relay binding protocol per RS-000 and its corresponding specialisation in RS-004 are overly complicated due to the requirement for the gateway/relayer to authenticate its peer so that cargo can be deleted once the receipt has been acknowledged. In hindsight, that requirement is actually undesirable, as explained below.

The requirement is undesirable because even if the peer can be trusted, there's no guarantee the cargo will reach its destination. So the cargo -- or more precisely, the parcels in it -- must be retained regardless of whether the peer acknowledges having safely stored the cargo. Instead, the parcels must only be removed from the origin gateway when the target gateway acknowledges receipt of the cargo.

Dropping this requirement would drastically simplify the cargo relay binding, which will in turn make it easier to implement and more secure:

  • Gateways won't have to authenticate the relayers. Gateways would provide any relayer with the cargo they request, as long as they have a valid Cargo Collection Authorization (which remains unchanged).
  • Relayers won't have to authenticate the gateways they connect to. Any gateway connected to the relayer could deliver and collect any cargo (but any collected cargo will no longer be deleted); in other words, gateways can now add and read cargo from a relayer, but not update or delete.
    • This is a big privacy bonus: In the old world, an attacker could compromise a relayer app and log where/when each gateway collected/delivered their cargo, which could be combined with outside data to help identify the real user. By making these operations anonymous, we can take things to the next level and get gateways to collect/deliver random cargo that doesn't belong to them.
    • Relayers still have the means to prevent abuse: They can throttle/block abusive gateways.
  • There would be no need to do a handshake between a relayer and a gateway.
  • With no handshake, there's no need to generate and manage certificates and corresponding private keys for relayers. This alone is a huge simplification.
  • By not having to have a custom L7 protocol (CoSocket), we can leverage a standard L7 protocol like WebSocket or gRPC, which means we can now place load balancers like GCP's or Cloudflare's in front of the service -- improving its security and availability.

However, there will be some unintended side-effects -- although the advantages still outweigh them:

  • Relayers won't be able to free up disk space en route while they're in the region disconnected from the Internet; in this case, cargo will only be deleted at the end of the route. Cargo going in the opposite direction will continue to be deleted once it's been delivered to the relaying gateway over the Internet, but only because that's the end of the route anyway.
  • The network will no longer be store-and-forward strictly speaking, in the sense that the two paired gateways will have to keep in-transit parcels for (much) longer -- until the acknowledgement is received.

Define binding-level error handling

There are two sources of errors:

  • Issues detected by a relay in the middle of a PDC or CRC. For example:
    • A relaying gateway received a parcel from an unauthorised sender.
    • A local gateway received a malformed cargo (i.e., an invalid RAMF serialisation).
  • Issues detected after the PDC/CRC ends. For example:
    • A relaying gateway failed to deliver a parcel to a public endpoint because the server was never up or its TLS certificate was invalid.
    • Cargo/parcel expired by the time it reached its final destination.

I'll call the former "relay errors" and the latter "post-relay errors" in this issue.

We should:

  • Identify which relay errors can/should be detected and reported, vs which should be deferred even if they could be detected live.
  • Augment parcel/cargo acknowledgements so they include a field for the "rejection reason" in the case of relay errors.
  • Document how to handle post-relay errors.

Note that the PoHTTP interfaces in the Pong server and the relaying gateways already report relay errors informally, not as part of the PoHTTP spec.

This topic is known as dead letter channel in async messaging.

See also #38.

Draft Quality of Service X.509 extension for Delivery Authorization certificates

Executive summary

Since relays like couriers and public gateways have a limited amount of disk space they can allocate to each private gateway, they'll be forced to stop accepting messages for those gateways once their quota is reached. With a Quality of Service mechanism in place, messages could be prioritised so that relays can make room for messages with a higher priority by deleting less critical messages when necessary.

Describe the solution you'd like

I propose that we attach the priority to the sender's certificate in the form of a non-critical X.509 extension. When calculating the priority of a message, we'd take the lowest priority in the sender's certificate chain.

The priority value would be an integer in the range [-2, 2] and would default to 0.

We'll probably have to get the desktop/Android gateway to check with the user if an app requests to use the highest priority. At least when the Internet is unavailable.
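A sketch of the proposed resolution logic, assuming the priority is read from each certificate in the chain (or defaults to 0 when the extension is absent), with the lowest value winning. The function name is hypothetical.

```python
def effective_priority(chain_priorities: list) -> int:
    """Compute a message's QoS priority from its sender's certificate chain.

    Each element is the priority declared in one certificate's non-critical
    extension, or None when the certificate omits it (defaulting to 0).
    Values must be integers in [-2, 2]; the lowest value in the chain wins.
    """
    priorities = [0 if p is None else p for p in chain_priorities]
    for p in priorities:
        if not -2 <= p <= 2:
            raise ValueError(f"Priority {p} outside the allowed range [-2, 2]")
    return min(priorities, default=0)
```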

Describe any alternatives you've considered

Generally speaking, the priority field could be attached to the RAMF message (as a new field), the sender or both. However, it'd be problematic to attach it to RAMF messages because it'd only be applicable to cargo and parcel messages, but it wouldn't apply to messages like a Parcel Collection Acknowledgement, for example.

Make Ed25519/X25519 support required and Ed448/X448 recommended

Awala uses RSA-PSS keys for digital signatures and ECDH with NIST curves (e.g., P-256) for encryption, but we want to migrate to 25519/448.

Digital signatures (RSA-PSS to EdDSA)

We want to drastically reduce the size of the certificates attached to RAMF messages. One drawback, however, is that verifying a signature takes a lot longer with Ed25519 vs RSA-2048 (signature production is faster, but it doesn't matter in Awala -- signatures are checked a lot more often).

RSA-PSS would still be supported but no longer recommended.

Encryption (ECDH with NIST curves to X25519/X448)

NIST curves are controversial, and I don't want that FUD to extend to Awala.

NIST curves would still be supported but no longer recommended.

Why we can't do it yet

These curves are natively supported across all the platforms we support today. However, we also need our third-party cryptographic libraries to support them in their CMS SignedData and EnvelopedData implementations:

  • BouncyCastle, used by the Awala Kotlin library on the JVM and Android, appears to support them.
  • PKI.js, used on Node.js, does not support them.

For the record, I've had public and private conversations with the Peculiar Ventures team about adding support for these curves in PKI.js, and I've also asked Google to support EdDSA and X25519/X448 on their Cloud KMS.

Draft protocol to allow two endpoints to stream data bidirectionally without servers

To support applications like VoIP, for example.

The private and public gateways would simply be responsible for helping establish that direct connection securely. This will most likely require using TCP simultaneous open.

Note that because the two peers would be connected directly, they'd be able to see each other's IP addresses, which may not be desirable.

See #48.

Support processing of large payloads in RAMF messages

Executive summary

Due to limitations in a 3rd-party library and to minimise complexity in the initial implementation of the protocol suite, messages are limited to payloads of up to 8 MiB to prevent larger messages from exhausting the memory available in couriers and gateways processing such messages. Until this is fixed, Relaynet service providers wishing to send larger messages will have to chunk the data and assemble it at the receiving end.

The objective of this issue is to identify and implement a solution that makes it very easy to send and receive large messages, without requiring the service provider to do any data chunking -- it'd be done for them behind the scenes.

Description

This 8 MiB limit is partly arbitrary and could be slightly increased as a stopgap, but RAM and swap memory are limited resources, so no system can hold arbitrarily large values in memory. In addition to supporting payloads small enough that can be held in memory, we should support large payloads by streaming them.

At the moment, the message payload is contained in a CMS EnvelopedData value, which is in turn contained in a CMS SignedData value. We could do serialisation and deserialisation of CMS values using streams, which BouncyCastle supports but PKI.js doesn't, so generally speaking we'd have to either add streaming support to PKI.js (and other CMS libraries in platforms to be supported in the future) or simply detach the payload from CMS values.

Potential solutions

The underlying implementation will be irrelevant to service providers and couriers: They'll still get to produce, consume and transport potentially large messages. The options below explore how, at the network level, we could achieve that.

Option A: Process RAMF messages as streams

The payload size limit could be very large (like the original 4 GiB).

This would require the following changes to the RAMF spec:

  1. Detach the ciphertext (encryptedContentInfo.encryptedContent) from the CMS EnvelopedData value.

  2. Place the CMS EnvelopedData value before the ciphertext so the symmetric key can be available before reading the ciphertext.

  3. Reinstate the signature hashing algorithm as a RAMF message field so that a consumer can start computing the digest as the message is being received:

    The algorithm MUST be valid per RS-018. This value MUST be DER-encoded as an ASN.1 Object Identifier; for example, SHA-256 (OID 2.16.840.1.101.3.4.2.1) would be encoded as 06 09 60 86 48 01 65 03 04 02 01. It MUST also have a fixed length of 16 octets, right padded with 0x00.

    This will involve partially reverting d3bc4fa
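For illustration, the DER encoding and 16-octet padding described in step 3 can be sketched as follows (the function name is hypothetical; the encoding rules are the standard ASN.1 OID DER rules):

```python
def encode_hashing_algorithm(oid: str) -> bytes:
    """DER-encode an ASN.1 Object Identifier and right-pad with 0x00 to 16 octets."""
    arcs = [int(a) for a in oid.split(".")]
    # The first two arcs are packed into a single octet: 40 * arc1 + arc2.
    body = bytearray([40 * arcs[0] + arcs[1]])
    for arc in arcs[2:]:
        # Base-128 encoding, most significant group first,
        # high bit set on every octet except the last.
        groups = []
        while True:
            groups.insert(0, arc & 0x7F)
            arc >>= 7
            if arc == 0:
                break
        body += bytes(g | 0x80 for g in groups[:-1]) + bytes([groups[-1]])
    der = bytes([0x06, len(body)]) + bytes(body)  # tag 0x06 = OBJECT IDENTIFIER
    assert len(der) <= 16, "OID too long for the fixed-length field"
    return der.ljust(16, b"\x00")
```

For SHA-256 (OID 2.16.840.1.101.3.4.2.1) this yields 06 09 60 86 48 01 65 03 04 02 01 followed by five 0x00 padding octets.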

We should also consider the implications of supporting large payloads when producing and verifying the digital signature. Algorithms like Ed25519/Ed448 can't work with streams, so we'll have to use their pre-hash variant instead.

This is the ideal solution in my opinion because I think it'd be easier to implement, but I know it's frowned upon in the context of asynchronous messaging, where messages are supposed to be small.

Option B: Chunk the RAMF messages

Endpoints and gateways would be responsible for splitting large messages into small chunks, in a way that's seamless to the service provider.

This is "better" or "more idiomatic" than Option A from an asynchronous messaging perspective, but I think it will make implementation harder due to the complexity of putting the pieces together at the receiving end.
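A minimal sketch of the chunking and reassembly this option implies. The field names are hypothetical; in practice the receiver would also need to handle missing chunks and expiry.

```python
CHUNK_SIZE = 8 * 1024 * 1024  # keep each chunk under the current 8 MiB limit

def chunk_message(message_id: str, payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a large payload into self-describing chunks."""
    total = (len(payload) + chunk_size - 1) // chunk_size or 1
    for index in range(total):
        yield {
            "message_id": message_id,  # lets the receiver group related chunks
            "index": index,
            "total": total,
            "data": payload[index * chunk_size:(index + 1) * chunk_size],
        }

def reassemble(chunks) -> bytes:
    """Put the chunks back together; they may arrive out of order."""
    chunks = sorted(chunks, key=lambda c: c["index"])
    assert len(chunks) == chunks[0]["total"], "missing chunks"
    return b"".join(c["data"] for c in chunks)
```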

Option C: Optionally Detach the Payload

When the payload is too big, the RAMF payload would be just a reference to an external value and its (SHA-256) digest. If the payload is to be encrypted, it'd be encrypted with a symmetric key that the recipient could decrypt from their RecipientInfo (as is always the case with EnvelopedData values).

This is "the idiomatic approach" from an asynchronous messaging perspective, and should be easier to implement than Option B because there are only two pieces to put together: The RAMF message and its detached payload.
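The reference-plus-digest idea can be sketched as follows. The structure and names are hypothetical, and encryption of the detached payload (via the symmetric key in the RecipientInfo) is omitted.

```python
import hashlib

def detach_payload(payload: bytes, location: str) -> dict:
    """Replace a large payload with a reference to an external value plus its
    SHA-256 digest; this reference becomes the RAMF payload."""
    return {"location": location, "sha256": hashlib.sha256(payload).hexdigest()}

def verify_detached_payload(reference: dict, fetched: bytes) -> bytes:
    """Check that the externally-fetched value matches the digest in the reference."""
    if hashlib.sha256(fetched).hexdigest() != reference["sha256"]:
        raise ValueError("Detached payload digest mismatch")
    return fetched
```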

Other considerations

  • PKI.js (used by relaynet-core-js) doesn't support stream-based plaintexts (see: PeculiarVentures/PKI.js#237). Bouncy Castle (used by awala-jvm) does support streams.
  • Google Pub/Sub, which we use extensively at Relaycorp for our own deployment of Awala-related infrastructure, has a message limit of 10 MB.
  • We may want to introduce an extension to the delivery authorization to specify the maximum size of the payload. Likewise, we could introduce another extension to control the bandwidth the sender is allowed to use in a given period of time (analogous to the rate limiting extension).
  • Bindings will generally have to do chunking to process large messages. That's definitely true with generic L7 protocols like plain old HTTP (PoHTTP), gRPC (CogRPC) and WebSockets (PoWebSockets), but shouldn't be a problem if we introduce purpose-built L7 protocols like PoSocket and CoSocket.
  • Gateways and couriers are also likely to have to do chunking behind the scenes. Especially gateways, as they use brokers like NATS to wrap/unwrap cargos.
  • Whilst gateways must distribute their payloads in as many cargoes as necessary anyway, they may benefit from this solution.

Require Encrypted SNI in locations where it isn't blocked

48f9bf5 introduced a recommendation to use ESNI, but I think we should take this further and require it in places where it isn't blocked (it's blocked in China and India, for example).

This should be straightforward, but we need to check support across all supported platforms and I'm also debating whether to hold off doing this until the ESNI spec is finalised/stable.

Define messaging-level error handling

Or more specifically, define error handling in endpoint and gateway messaging channels. The service provider should always be fully responsible for the error handling in its channel.

There are a number of things that could go wrong at this level. For example:

  • The sender might've used a channel session key that isn't recognised (it could've already expired).
  • The recipient may not recognise the type of service message encapsulated in a parcel.

This is known as invalid message channel in async messaging.

See also #37.

Including private gateway certificate in parcels could lead to fingerprinting

Including the full certificate chain in a parcel could allow someone to cross-link a private gateway across two or more endpoints served by that gateway.

This represents a privacy issue as it could be used for fingerprinting. It wouldn't be easy, as the attacker would have to have access to 2+ different services used by the end user, but it'd be increasingly likely as Relaynet gains traction.

One solution could be not to include the private gateway certificate in the sender certificate chain of the parcel, which has the negative effect of requiring the public gateway to keep a mapping of (private) endpoints to their corresponding private gateways -- which wouldn't be necessary if we could fully rely on the PKI. On the plus side, fewer certificates in the chain should make most parcels significantly smaller.


Use Subject Key Identifier to reference the recipient's DH public key in EnvelopedData values

Instead of the issuerAndSerialNumber, which is more fiddly to use because:

  • You have to keep track of the issuer's DN.
  • The serial number is an integer, which makes it slightly more fiddly when you want the id to be the crypto digest of the public key.

The subjectKeyIdentifier choice solves those issues, but PKI.js doesn't support it even though RFC 3370 allows it. This is what we did originally but I had to move away from this in relaycorp/relaynet-core-js#32 due to this limitation in PKI.js.

See also 9ec73c7, where the originator's key id was changed to use the serial number instead of the SKI as a consequence of this limitation. We should probably switch to use the SKI there too when/if we implement this change.

Draft "PoScatternet" binding to relay parcels via scatternets (Bluetooth meshnets)

This would allow private gateways connected to the same scatternet to exchange parcels.

The main use case would be to support protesters, as they could use instant messaging apps -- amongst other apps -- during a protest. This would be even more powerful when combined with the broadcast functionality in #43.

The main risk I see is an attacker infiltrating the scatternet to identify the origin of each parcel and/or refuse to relay parcels to other gateways, so this is going to involve a trade-off between privacy, performance and availability. Obviously, privacy is going to have to take precedence, so I'm inclined to use a flooding algorithm.

See also #45.

Consider whether use of CMS (or EnvelopedData specifically) is necessary

As discussed on PeculiarVentures/PKI.js#255, the maintainers of PKI.js think that the channel session protocol shouldn't use CMS EnvelopedData:

it seems the use of CMS in this protocol is unneeded and wasteful bytes wise.

We didn't get into any specifics but I'm creating this issue to track this feedback, in case we find issues related to this or we get further feedback that this was a bad idea. For now, I remain convinced that the extra bytes to transfer/store/process are worthwhile.

See also #14.

Draft informational spec to offer guidelines on the use of steganography

Steganography could allow users and couriers to conceal Awala traffic (parcels and cargoes) in files that appear to be allowed by the local regime -- like a video of a cat that contains an E2E encrypted parcel, which in turn contains a tweet to be posted to twitter.com.

Pros:

  • In countries with national intranets, couriers -- or indeed gateways -- could leverage that intranet to transmit Awala traffic via email (#40, #41) or a local social network, for example.
  • This would make it safer to transport cargo in (unencrypted) external harddrives.
  • Even if the gateway or courier apps are uninstalled -- by hand or by a duress code mechanism -- the parcels/cargo could survive in the phone's gallery.

Cons:

  • The signal-to-noise ratio would be pretty bad.

We should ideally use encoding algorithms that take a series of parameters that are kept secret (only known to the sender and the recipient, like a shared key). This would allow us to use one generic algorithm with open source implementations, instead of many proprietary algorithms with closed source implementations that have to be replaced as censors uncover them.
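As a toy illustration of "one generic algorithm parameterised by a shared secret", the sketch below derives the embedding positions from a shared key and flips the least significant bit of the cover bytes at those positions. This is NOT a real steganographic scheme (naive LSB embedding is easily detectable); it only shows how secret parameters could replace proprietary algorithms.

```python
import hashlib
import random

def _positions(key: bytes, cover_len: int, n_bits: int) -> list:
    # The embedding positions are the secret parameter, derived from the shared key.
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    return random.Random(seed).sample(range(cover_len), n_bits)

def embed(cover: bytes, payload: bytes, key: bytes) -> bytes:
    """Hide the payload's bits in the LSBs of key-derived positions of the cover."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = bytearray(cover)
    for pos, bit in zip(_positions(key, len(cover), len(bits)), bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite the least significant bit
    return bytes(out)

def extract(cover: bytes, payload_len: int, key: bytes) -> bytes:
    """Recover the payload; only someone with the key knows where to look."""
    positions = _positions(key, len(cover), payload_len * 8)
    bits = [cover[pos] & 1 for pos in positions]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(payload_len)
    )
```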

Processing of Cargo Collection Authorizations (CCAs) is insufficiently defined

Summary

The core spec defines the structure of a CCA and the CogRPC spec defines how a courier should send it to a public gateway when collecting cargoes, but the process to send the CCA from a private gateway to a courier is not defined.

Proposed solution

Per the current specs, the client (e.g., a private gateway, a courier) initiates a request to collect cargo from the server (e.g., a courier, a public gateway) by including the CCA in the request headers. The server will then send zero or more cargoes to the client, each of which the client should acknowledge.

To solve this issue, we could change the cargo collection process to do the following instead:

The client should send the CCA as a gRPC message and the server should return zero or more cargoes in return. And instead of having the server send an acknowledgement, it'd send a "collection complete" message when no further cargoes are to be sent for a given CCA.

In other words, the CollectCargo RPC will support the following messages:

  • CargoCollectionAuthorization (1 or more per call), sent by the client.
  • CargoDelivery (0 or more per CargoCollectionAuthorization), sent by the server. This message encapsulates one cargo message.
  • CargoDeliveryAck (0 or 1 per CargoDelivery), sent by the client.
  • CargoDeliveryComplete (exactly 1 per CargoCollectionAuthorization), sent by the server.

Note that the cargo delivery RPC remains unchanged.
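The proposed server-side message flow could be sketched as a generator. The message names come from the list above; CargoDeliveryAck handling is omitted for brevity, and the cargo store is a hypothetical stand-in for the real storage backend.

```python
def collect_cargo(ccas, cargo_store: dict):
    """Server-side sketch of the proposed CollectCargo RPC.

    For each CargoCollectionAuthorization received from the client, stream
    zero or more CargoDelivery messages, then exactly one
    CargoDeliveryComplete message for that CCA.
    """
    for cca in ccas:
        for cargo in cargo_store.get(cca, []):
            yield ("CargoDelivery", cargo)
        # Signal that no further cargoes will be sent for this CCA.
        yield ("CargoDeliveryComplete", cca)
```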

Positive side-effects

  • The implementation will be simpler, since there are now two RPCs per connection: one to deliver cargo and the other to collect cargo. Previously, we had one to deliver cargo and N to collect cargo (one for each CCA).
  • This will make it easier for courier apps to exchange cargo amongst them in the future: A courier app could connect to another courier as a private gateway.

Negative side-effects

  • The CollectCargo gRPC method will become slightly more complicated, since we'll have to mimic polymorphism to allow the client and the server to send different types of messages -- CargoCollectionAuthorization/CargoDeliveryAck and CargoDelivery/CargoDeliveryComplete, respectively.

Draft informational spec to offer guidelines for Relaynet service providers

Executive summary

Relaynet apps will look and feel like regular Internet apps when the Internet is available, but building them in such a way that they tolerate delays will have profound implications on the UX and, to a lesser extent, the implementation. For this reason, we should write an informational spec to guide Relaynet service providers -- or, alternatively, document such guidelines on the website.

Is your feature request related to a problem?

Yes. Relaynet service providers can't rely on apps being able to communicate in near real-time, so they'll need guidance in addition to easy-to-use endpoint libraries.

Describe the solution you'd like

The following should be part of the guidelines:

  • All Relaynet apps are inherently offline first, so UX designers should be mindful of that and employ offline first best practices.
  • Developers should embrace asynchronous messaging and not try to shoehorn RPCs on top of Relaynet (that can still be done but should be avoided unless integrating a legacy system). Quoting an excerpt from the Enterprise Integration Patterns book (page 54):

    (...) thinking about the communication in an asynchronous manner forces developers to recognize that working with a remote application is slower, which encourages design of components with high cohesion (lots of work locally) and low adhesion (selective work remotely)

  • Authentication:
    • First-party applications can rely on Relaynet's built-in authentication, without the need for bearer tokens, etc.
    • MFA should be compatible with DTNs. Consequently, one-time passwords mustn't be time-based (TOTP) or challenge-response-based, but they can be sequence-based (HOTP).
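For reference, a sequence-based OTP (HOTP, per RFC 4226) is short enough to sketch with the standard library alone, which illustrates why it works in a DTN: only a shared secret and a counter are needed, no synchronised clock.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Sequence-based one-time password per RFC 4226 (DTN-friendly, unlike TOTP)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The values below match the RFC 4226 Appendix D test vectors.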

Record technical introduction to Relaynet

This will be a recording aimed at people who want to contribute to the specs and/or implement some specs.

Basically, a screencast with:

  • Diagrams to show how things work.
  • A demo of the PoC.
  • An analysis of the different components that were implemented for the PoC.
  • How the protocol suite documentation is structured.

The recording shouldn't be longer than 30 minutes.

Document threat of compromised NTP server

An attacker could operate or compromise an NTP server to send invalid times, thus manipulating the validity of certificates and RAMF messages.

This should be reflected in RS-019.

Make clock drift resistance configurable

The RAMF spec requires a 5-minute grace period when verifying the dates and TTLs in messages in order to cope with clock drift:

https://github.com/relaynet/specs/blob/2b85a08643dd6fbc9d86e3e222a87df2bb9772db/rs001-ramf.md#L56-L57
https://github.com/relaynet/specs/blob/2b85a08643dd6fbc9d86e3e222a87df2bb9772db/rs001-ramf.md#L63

I was thinking of people living in a place with a prolonged or permanent Internet blackout, where a (reliable) NTP server may not be available. That'd essentially extend a message's expiry by 10 minutes.

However, this shouldn't be necessary in most cases, or at least not with such a long grace period. In fact, this 10-minute expiry extension could be problematic in time-sensitive applications.

Consequently, the grace period should be configurable within a given range (e.g., from 0 to 10 minutes) -- possibly by the user's gateway, with the option for endpoints to override it (in which case the lowest value should be taken).
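A sketch of that resolution rule, assuming a range of 0 to 10 minutes and taking the lowest of the gateway's and the endpoint's values. The function name is hypothetical.

```python
from typing import Optional

MAX_GRACE_SECONDS = 10 * 60  # upper bound of the proposed configurable range

def effective_grace_period(gateway_grace: int, endpoint_grace: Optional[int] = None) -> int:
    """Resolve the clock-drift grace period (in seconds): the gateway configures
    it, an endpoint may override it, and the lowest value wins. Values are
    clamped to the allowed range [0, 10 minutes]."""
    values = [gateway_grace] if endpoint_grace is None else [gateway_grace, endpoint_grace]
    return min(max(0, min(MAX_GRACE_SECONDS, v)) for v in values)
```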

See also #16.

Draft protocol to compensate couriers

I see this as a blocker to get Relaynet deployed on a large scale as I don't think it'd be realistic (or fair) to rely on unpaid volunteers and charities to act as Relaynet couriers for an entire region/country. And whilst I've been thinking about this from the very beginning, I decided to keep it outside the scope of the initial version of the protocol suite for three reasons:

  • The early drafts of the protocol (called "postage") were getting pretty complex, to the point it felt like a project in its own right.
  • I realised that the time wasn't right and I should defer it until we've run at least a few pilots. Otherwise, there would be far too many design assumptions that would most likely be far too detached from reality.
  • I didn't (and still don't) have access to someone with the legal expertise required to take this forward. I'm particularly concerned about taxation, AML compliance and US sanctions.

I won't be actively working on this in the short term but I want to create this issue to collate my thoughts and those of others. Here are some high-level parameters:

  • We should invite CSOs like Access Now to join this process, and we should do so as soon as we start working on this.
  • It's OK if version 1 of the protocol is a steppingstone as opposed to the endgame. I prefer something acceptable that works sooner over something theoretically perfect that will take longer to design/build.
  • Consequently, I don't mind using centralised payment systems as long as couriers (just about) anywhere can get paid. In other words, cryptocurrency would be ideal but not required.
  • This goes without saying, but I'll say it because it might throw a spanner in the works: the solution has to be legal. So:
    • Anyone involved with its design, implementation and/or deployment (e.g., Relaycorp) can't violate US sanctions. So we have to seek legal counsel early on.
    • We can't enable couriers to evade taxes. But how do we prevent that without disclosing to the repressive regime they live under that they are Relaynet couriers? Maybe we can help them pay tax in a different jurisdiction.
  • Where's the money coming from? Options are: People may pay for the delivery of each cargo (like postage), they may pay a flat fee, people/organisations abroad may contribute towards an "Internet blackout fund", etc.
    • If users have the option to pay the courier directly, there must be a mechanism in place to prevent people who can't afford to pay from being left out. It's OK for people to pay the courier for more capacity or an expedited delivery, but everybody should be able to have their cargo relayed. I think this is where an "Internet blackout fund" can help.
  • If feasible, part of the proceeds should go to legally challenge the blackout and future ones.
  • Incentivising couriers might be a double-edged sword: A small minority may be tempted to disrupt the Internet service so they can be compensated. How do we penalise that?
  • Couriers should not be allowed to put vulnerable people (e.g., children) or animals at risk. This may require partnerships with local charities so they can audit the methods employed by couriers -- at least the largest networks of couriers.
  • Couriers should also be compensated for periodic drills. This is very important to make sure they're actually prepared for a blackout.

To be clear: Relaycorp will not be taking a cut on these funds. If anything, we'll contribute to the Internet Blackout Fund once we start generating a profit from other sources.

Consider whether to restrict SignerIdentifier in RAMF signatures to be of type SubjectKeyIdentifier

The RAMF spec doesn't currently restrict the choice of SignerIdentifier in the SignerInfo, which means implementations must accept either of the two options defined in the CMS spec: IssuerAndSerialNumber or SubjectKeyIdentifier.

However, it might be better to restrict this to one of the two options, to simplify implementations and guarantee interoperability.

And it probably makes sense to stick to the SKI because it can be easily derived from the public key (e.g., as its digest), which in turn negates the need for the CA to track the ids of previously-issued certificates. The PKI spec already requires the AKI and SKI extensions anyway.
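To illustrate the point above, here's a minimal sketch of deriving an SKI as a digest of the public key, as RFC 5280 section 4.2.1.2 suggests (method 1: the SHA-1 of the subjectPublicKey bits). The input bytes here are a stand-in, not a real key:

```python
import hashlib

def subject_key_identifier(public_key_bits: bytes) -> bytes:
    """Derive an SKI from the raw subjectPublicKey bits, per RFC 5280
    section 4.2.1.2 (method 1): the SHA-1 digest of the key."""
    return hashlib.sha1(public_key_bits).digest()

# The CA can recompute the identifier from the key on demand,
# instead of keeping a database of issued certificate ids:
ski = subject_key_identifier(b"stand-in public key bytes")
assert len(ski) == 20  # SHA-1 digests are 20 bytes
```

Because the SKI is a pure function of the key, any party holding the certificate chain can recompute and cross-check it, which is harder to do with IssuerAndSerialNumber.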

Draft "PoEmail" binding to relay parcels via email

This would come in handy in a scenario where the user's Internet connection is available but censored, yet email is allowed. So the user wouldn't be able to reach a public gateway, but they're able to connect to an SMTP/POP3/IMAP server.

From an architectural standpoint, the change should be minimal as we're just placing an email server between the private gateway and its corresponding public gateway.

We may want to use steganography (#44). See also #41.

(Inspired by SOAP bindings; Delta Chat does something similar)

Prevent a compromised courier from tracking cargo to/from private gateways

Executive summary

Even though cargoes have no geolocation metadata and the addresses of desktop/Android gateways are opaque, a compromised courier could alter the app to log additional metadata which -- if combined with external data sources -- could help reveal the identity of the user, their location and/or the services being used.

Consider this concrete example: Say Twitter supports Relaynet and a political dissident at large uses it via Relaynet to post a tweet. A repressive regime that managed to compromise the courier app used to send the cargo containing the tweet could, theoretically, use the creation time of the tweet to narrow down the private gateway where the cargo originated. This would be possible by having the courier app log the time and location of each private gateway it found in the route, and the time when each cargo was delivered to the public gateway (and to make the correlation more accurate, the courier could space out the delivery of each cargo).

Describe the solution you'd like

I don't think there can be a single change that solves this problem, but rather a series of changes. At the very least, the following should be done:

  • Public gateways should be required to batch the delivery of parcels that arrived in cargoes. Better yet, each parcel should be assigned to a batch at random. We'd just need to make sure that we're not letting parcels expire due to this measure.
  • Gateways should be required to rotate their keys relatively often (the private address of a gateway is derived from its public key, much like a Bitcoin address). But not too often, as that'd make routing harder.
  • Courier apps should also be able to act as private gateways when connected to another courier, so they can deliver and collect cargo for other gateways. This would be addressed by relaycorp/relaynet-courier-android#495 and relaycorp/relayverse#12.
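The first measure above — randomised batching with an expiry guard — could be sketched as follows. This is an illustration, not part of any spec; the one-hour window is an arbitrary assumption:

```python
import random
from datetime import datetime, timedelta

def assign_delivery_slot(arrival: datetime, expiry: datetime,
                         max_delay: timedelta = timedelta(hours=1)) -> datetime:
    """Pick a random delivery time between the parcel's arrival and
    min(arrival + max_delay, expiry), so that delivery times can't be
    correlated with the arrival of a specific cargo."""
    window_end = min(arrival + max_delay, expiry)
    window_seconds = max((window_end - arrival).total_seconds(), 0)
    return arrival + timedelta(seconds=random.uniform(0, window_seconds))
```

The `min()` against the expiry is what keeps the measure from causing parcels to expire in the gateway's custody.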

I'd previously considered requiring private gateways to deliver and collect cargo that doesn't belong to them whilst connected to a courier. However, this may not be a workable solution:

  • A repressive regime that compromises the courier app could also distribute "honeypot cargo" so they can identify private gateways collecting cargo besides their own. To counter this, we'd have to get public gateways to output some kind of "manifest" file where they list the private addresses of the gateways for which the courier is collecting cargo, but then this would cause routing issues in large-scale deployments where "sorting facilities" are used.
  • This could be problematic with #34, depending on its solution.


Draft protocol to compensate vendors of decentralised services

Say a group of people create a Relaynet service meant to compete with email. Thanks to Relaynet, that service will be decentralised just like email, but unlike email it'd also be: Spam-free, phishing-free, end-to-end encrypted and serverless. Now, how can this group of people (collectively known as the Relaynet service vendor) be compensated?

I think Relaynet should offer a built-in mechanism to monetise decentralised services. Centralised services can easily achieve that by having a paywall, like they do today on the Internet, so we should help decentralised service vendors to monetise their work without resorting to ads or donations.

I've got some thoughts and some notes on this, but I don't have the time to write them up right now, so I'll be updating this issue in the future. In the meantime, comments from other people are welcome.

The example above is basically describing Letro, which will be built by Relaycorp, but the principle applies to any decentralised service built by someone other than Relaycorp. Especially when we implement message broadcasting, which can enable a new generation of social networks.

Use SRV records to resolve public node hosts at the binding level

This will allow private gateways (like the Android Gateway) to offer users a "unified" address like frankfurt.relaycorp.cloud, instead of having to manage three separate domains like poweb-frankfurt.relaycorp.cloud, pohttp-frankfurt.relaycorp.cloud and cogrpc-frankfurt.relaycorp.cloud.

Replace TTL with expiry date in RAMF messages

RAMF originally used a TTL (an integer) because it was a custom binary format and I wanted to make sure the expiry date could not be before the creation date. However, this is making implementations more complex -- more so than simply using two dates and checking that one isn't greater than the other.
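With two absolute dates, the validation collapses to a single comparison (a sketch of the proposed check, not spec text):

```python
from datetime import datetime, timedelta, timezone

def validate_expiry(creation: datetime, expiry: datetime) -> None:
    """With an explicit expiry date, no TTL arithmetic is needed:
    just check that the expiry comes after the creation date."""
    if expiry <= creation:
        raise ValueError("Expiry date must be after creation date")

now = datetime.now(timezone.utc)
validate_expiry(now, now + timedelta(days=1))  # valid message
```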

Draft Service Message Broadcast protocol

Overview

Offer the ability to broadcast signed, unencrypted messages to anyone interested in such messages, without the sender knowing who's interested. Potential applications include:

  • Letro newsletters.
  • Distributing the latest articles from news organisations.
  • Software update notifications (just the notification, not the actual binary).
  • Announcing PKI certificate revocations.
  • Decentralised social networks.
  • Enabling protest organisers to make announcements within a meshnet (see #46).

Technical design

This specification will extend Awala to support the Publish-Subscribe pattern, where:

  • The publisher is an Awala endpoint, identified by their Awala private address (better privacy) or Vera id (better UX).
  • The subscribers are zero or more Awala endpoints.
  • The topic is determined by the publisher and must be tied to a specific Awala service.
  • Each message is an unencrypted Awala parcel, with the sender being the publisher and the recipient being the topic.
  • Each subscription filters messages produced by a specific publisher and/or matching a specific topic. If only the publisher is specified, the subscription must be limited to a specific service.

If we imagine a world where Twitter is an Awala service using this protocol, then every Twitter user is a potential publisher and subscriber. Following a Twitter user means subscribing to a publisher. Each tweet is a message. And there'd only be one topic in the whole service.
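The subscription semantics above could be sketched as a simple filter. The `Subscription` type and its fields are assumptions for illustration, not part of the spec:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subscription:
    service: str  # Subscriptions are always scoped to one Awala service
    publisher: Optional[str] = None
    topic: Optional[str] = None

    def __post_init__(self):
        if self.publisher is None and self.topic is None:
            raise ValueError("Subscription must specify a publisher and/or a topic")

    def matches(self, service: str, publisher: str, topic: str) -> bool:
        """True if a broadcast from `publisher` on `topic` within
        `service` should be delivered under this subscription."""
        if service != self.service:
            return False
        if self.publisher is not None and publisher != self.publisher:
            return False
        if self.topic is not None and topic != self.topic:
            return False
        return True

# Following a Twitter user amounts to subscribing to a publisher:
follow_alice = Subscription(service="twitter", publisher="alice")
```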

Awala Internet gateways will form a P2P network to propagate the messages. Private gateways will subscribe to topics on behalf of their endpoints via their respective Internet gateways, without disclosing which endpoint(s) require the subscription.

Avoiding hate speech and misinformation

We'll use a new PKI to avoid hate speech and misinformation at scale, in a decentralised manner.

The root certificates will belong to an "Oversight Board" (OB) that will set the policies that participants must adhere to. Each member of the OB will own a root certificate, and each of these will be included in the trusted key store of Awala-Internet Gateway providers like Relaycorp.

For an end user to broadcast messages, they'll have to be sponsored by an intermediary. The sponsor is a charity or company vetted by the Oversight Board; sponsors get an intermediate certificate issued by a board member when they're accepted.

Sponsors get a lot of autonomy in determining their criteria to let people in, since the only requirement is that their sponsorees don't spread hate speech or misinformation. Some may require the real identity of their sponsorees, and others may accept anonymous users. Some may require a payment for the sponsorship.

Generally, the certificates issued by sponsors to end users should be short-lived when users are new, and subsequent certificate renewals will produce certificates that last longer. For example, a brand new user may get an initial certificate that lasts 3 days, and when it's automatically renewed, if they haven't broken any rules, the second certificate will last a week -- and so on.

Any member of the OB can revoke certificates issued to any user or sponsor, but sponsors can only revoke their own users' certificates. Gateway providers will monitor such revocations to drop messages signed with a revoked certificate, similar to certificate revocation lists (CRLs) in traditional PKIs.
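The progressive renewal scheme above could look like the following. The ladder of validity periods is illustrative (only the 3-day and one-week steps come from the text above), not normative:

```python
from datetime import timedelta

# Illustrative renewal ladder: each clean renewal bumps the user
# to the next, longer validity period (first two steps per the text;
# the rest are assumptions).
VALIDITY_LADDER = [timedelta(days=3), timedelta(days=7),
                   timedelta(days=30), timedelta(days=90)]

def next_certificate_ttl(clean_renewals: int) -> timedelta:
    """Return the validity period for a user's next certificate, given
    how many times they've renewed without breaking any rules."""
    index = min(clean_renewals, len(VALIDITY_LADDER) - 1)
    return VALIDITY_LADDER[index]
```

Capping the ladder bounds the damage a compromised or abusive account can do before its certificate lapses or is revoked.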

Sneakernet bundles

Awala courier networks may optionally carry a curated collection of broadcast parcels, to distribute information that may be generally relevant to Awala users in the region served. For privacy and safety reasons, private gateways would download the entire collection (or specific shards if we use sharding), to avoid disclosing to the courier what the user is interested in.

Such collections may include broadcasts from news organisations, humanitarian organisations, political dissidents, etc.

P2P network amongst Awala Internet gateways

It'd only be used to propagate messages. Messages will only be persisted for 5 minutes to help peers that get disconnected.

Internet gateways will identify themselves with their Vera ids, so that abusive peers can be blocked by domain name -- thus making it expensive to attack the network. Attackers mustn't be able to bypass this by creating subdomains.

We're likely to need sharding as popularity grows, although it'd be ideal to make the number of shards a function of throughput.

Changes to the existing protocol suite

Messaging Protocols

  • Broadcast parcels could be unencrypted (CMS type "data").
  • The Cargo Collection Authorization MUST include zero or more topic subscriptions. The resulting cargo will remain end-to-end encrypted, to prevent leaking subscription-related information to couriers.

Alternatives considered

  • IPFS PubSub/Gossipsub. It looks overly complicated given the many different use cases they want to support.


Draft spec to establish a Courier PKI

Executive summary

Courier apps have to use self-issued, server-side certificates because they aren't Internet hosts and are therefore ineligible to get TLS certificates issued by a widely-trusted CA. This forces private gateways to trust any TLS certificate. Here we consider a "Relaynet Courier Certification Programme" to solve this issue.

(Note that Relaynet still guarantees properties like authentication, integrity and confidentiality -- TLS is only used to achieve confidentiality from other devices on the WiFi network)

Description

The lack of courier authentication poses some (minor) threats:

  • An unauthorised user might try to pass themselves off as a courier to track the addresses of the private gateway and its corresponding public gateway. See #54. The impact would be low, though: even if the unauthorised user got hold of those addresses, the address of the private gateway is just a digest of its public key, and the address of the public gateway will typically be widely used anyway.
  • An attacker might try to damage the reputation of Relaynet amongst end users by passing themselves off as a courier but never actually relaying any cargo.

Potential solution

We could establish a "Relaynet Courier PKI" by creating a registry of root CAs that are allowed to certify individual and/or organisational couriers -- the latter would essentially be intermediate CAs. This registry would be analogous to the Mozilla CA Certificate Program's list.

In addition to the registry, we should define the policy around it, including the criteria to add or remove CAs from the registry. The policy should also require end-entity certificates to be valid for at most a week, which means couriers would have to renew them frequently -- so we'd need to build a back-office system to support this.
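The one-week cap could be enforced by private gateways with a check along these lines (a sketch over bare validity dates; a real implementation would validate the full chain against the registry's roots):

```python
from datetime import datetime, timedelta, timezone

MAX_COURIER_CERT_VALIDITY = timedelta(days=7)

def check_courier_certificate(not_before: datetime, not_after: datetime) -> None:
    """Enforce the proposed policy: the courier's end-entity certificate
    must be currently valid and valid for at most a week in total."""
    now = datetime.now(timezone.utc)
    if not (not_before <= now <= not_after):
        raise ValueError("Certificate is not currently valid")
    if not_after - not_before > MAX_COURIER_CERT_VALIDITY:
        raise ValueError("Courier certificates must be valid for at most a week")
```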

Once this is in place, private gateways will be able to tell end users when they connect to a certified courier. Over time, we could make it more prominent to the user when they connect to a non-certified courier.

The registry should be (eventually) managed by a non-profit consortium.

Draft "CoScatternet" binding to relay cargo via scatternets (Bluetooth meshnets)

Executive summary

This Bluetooth-based protocol will allow end users to synchronise cargo with their courier or public gateway without having a direct connection to them. This is better for end users and couriers because the process to relay cargo will be more scalable and discreet.

For example, consider a scenario where a courier stops by a building to synchronise cargo with people living there. If enough people are connected at that time, the courier could synchronise cargo with everybody from the ground floor (depending, of course, on the building and the Bluetooth adapters). The courier would then go away, taking with them the cargo that should eventually be sent to the Internet (via Relaynet Internet Gateways).

Technical description

For privacy reasons, we may have to use a flooding algorithm with appropriate Quality of Service measures to prevent abuse. Couriers and end users' gateways will exchange cargo by broadcasting the cargo they're sending to the whole meshnet, with a semaphore preventing everybody from broadcasting at the same time.

Couriers will probably have to create a profile that specifies how end-user gateways will interact with the courier. The periodicity with which the relay will take place will be part of the profile (e.g., "every four hours from 2pm tomorrow"), and for the safety of the courier the advertised periodicity should be more frequent than the courier's actual schedule.

Every node in the meshnet will be required to generate and broadcast fake cargoes for real nodes in the meshnet during a relay, to prevent a man-in-the-middle from identifying the courier.

We may have to have two flavours of this protocol, depending on whether the nodes are stationary (e.g., the building example) or moving (e.g., a public place).

See also #46.

Alternative considered: WiFi ad-hoc meshnet

WiFi has better range and bandwidth, but unfortunately unrooted Android devices can only join ad-hoc networks (not create them). So a WiFi-based approach would work best with PCs.

I still haven't ruled out using WiFi instead as I'm particularly concerned about the effective range we'd get from using Bluetooth through ceilings and walls.

CogRPC: Consider using binary metadata for CCA instead of text-based Authorization metadata

The CCA is currently specified in the Authorization header/metadata because it belongs there from a semantic perspective, but we're having to serialise the CCA in base64 so we can use it in that text-based metadata.

By contrast, if we take a more pragmatic, performance-focused view, we should be using a binary metadata value, which would in turn force us to use a custom field.

This is a minor consideration in the grand scheme of things, but I thought it'd still be good to track it somewhere.
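To quantify the trade-off: base64 inflates the payload by roughly a third, which a gRPC binary metadata key (one ending in `-bin`, e.g. a hypothetical `cca-bin`) would avoid by carrying the DER bytes as-is. A quick illustration with stand-in data:

```python
import base64

# Stand-in for a DER-encoded CCA (hypothetical size; real CCAs vary):
cca_der = bytes(range(256)) * 8  # 2 KiB of binary data

# Text-based metadata like Authorization requires base64 encoding:
ascii_value = base64.b64encode(cca_der)
overhead = len(ascii_value) / len(cca_der)  # ~1.33, i.e. ~33% larger
```

The cost of the binary approach is losing the standard Authorization semantics, since gRPC reserves the `-bin` suffix for custom binary-valued keys.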
