off-narrative-labs / tuxedo

Write UTXO-based Substrate Runtimes

Home Page: https://off-narrative-labs.github.io/Tuxedo/

License: Apache License 2.0

Languages: Rust 99.13%, Dockerfile 0.83%, Shell 0.04%
Topics: rust-lang, substrate, utxo

tuxedo's People

Contributors

coax1d, joshorndorff, muraca, nadigeramit, semuelle


tuxedo's Issues

Options for how to store the relay parent number

This issue serves mostly to document my thinking around where to store information about the relay chain, specifically its block number, in the parachain. This question came up as I worked on #130 and I want to record some thoughts about my decision.

The point is that we need the relay parent number at the end of the block to set the HRMP high water mark. In FRAME it goes in some storage item and is read back later. But in Tuxedo, the trouble is... you guessed it... there are no storage items. So the following are some things I considered.

Note that while there is a solution in place for now, I still consider this design point open for debate at least until XCM and parachain upgrades are implemented.

Solution 1: Use an accumulator a la #105

I didn't want to introduce accumulators just for this one task. They also have a shortcoming: they are cleared at the end of the block, but we need this info to stick around after the block so it is still in storage during the collation API call.

Solution 2: Reach out and grab arbitrary storage

This is the simplest possible thing and is what I did for the initial PoC in 06aefa4. The problem is that it sets a bad precedent for Tuxedo pieces to use storage arbitrarily, and it also means externalities are necessary for the tests.

Solution 3: Controlled interface around dedicated storage

**This is the one I've chosen for now.**

This is basically a slightly more structured alternative to option 2. Rather than directly using storage in the piece, we provide the piece with a mutator method around dedicated storage through a config trait. This allows us to mock the storage calls in tests and no longer requires externalities.
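A minimal sketch of what that config trait might look like (the trait and method names, and the u32 number type, are assumptions for illustration, not the actual Tuxedo API):

/// Hypothetical abstraction over the dedicated storage location that holds
/// the relay parent number.
pub trait RelayParentNumberStorage {
    /// Record the relay parent number observed while executing this block.
    fn set(number: u32);
    /// Read it back later, e.g. during the collation API call.
    fn get() -> Option<u32>;
}

/// The parachain piece would be generic over a config like this, so tests can
/// supply an in-memory mock instead of real externalities.
pub trait ParachainPieceConfig {
    type RelayParentNumber: RelayParentNumberStorage;
}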

Solution 4: Header digest

In this solution we don't use storage at all. Instead we annotate the parachain block header with a digest log that contains the relay parent header. This totally avoids all the storage-item hacks in the previous three solutions. But it remains to be seen if this info is needed anywhere else, and how much more info we need to retain once XCM is implemented. We need to keep the header concise.

The header method would represent a divergence from FRAME (which is neither inherently positive nor negative).

Revisit soundness of timestamp inherent in parachain context

The big question here is: what guarantees can parachain runtimes rely on the relay chain validators to enforce with regard to inherent data?

I like the timestamp example because it is concrete. Is it sound for a parachain to have its own timestamp inherent? You don't want the relay chain validators to accept a block that the collators will reject because of a bad timestamp.

I argue that Tuxedo should offer the parachain runtime the guarantee that its inherents will be valid.

This issue is to assess whether Tuxedo is doing that, and if not, to start doing so.
More broadly, this issue is to follow up with the Substrate community and the FRAME developers to see whether FRAME is doing this correctly.


Relevant Cumulus PR and SE question.

paritytech/cumulus#2658

Peeks

In the original grant proposal I sketched a transaction type that included peeks alongside traditional UTXO inputs.

/// A UTXO transaction specifies some inputs to be consumed, and some new outputs to be created.
struct Transaction {
  /// The inputs refer to currently existing unspent outputs that will be consumed by this transaction
  inputs: Vec<Input>,
  /// Similar to inputs, Peeks refer to currently existing utxos, but they will be read only, and not consumed
  peeks: Vec<OutputRef>,
  /// The new outputs to be created by this transaction.
  outputs: Vec<Output>,
}

When writing that, I conceived that peeks would help with concurrency for transactions that need to read state from existing UTXOs but don't need to modify it, and therefore would prefer not to consume it. Having worked on the PoC, we now have a concrete use case for peeks as well.

Duplicate output check is unnecessary and incorrect

Currently the Tuxedo executive checks that there are no duplicate outputs in a given transaction. https://github.com/Off-Narrative-Labs/Tuxedo/blob/main/tuxedo-core/src/executive.rs#L53-L60

This logic is also present in the earlier utxo workshop. https://github.com/JoshOrndorff/utxo-workshop/blob/joshy-update-deps-may-2022/runtime/src/utxo.rs#L188-L191

But why is it necessary? It is perfectly valid for a transaction to consume a 20-token input and create two identical new outputs, each worth 5 tokens and belonging to the same pubkey.

Digging back through the git history, it seems this check has been present since a much older 2019 edition of the utxo-workshop where each output contained a user-provided salt. The user-provided salt was never realistic and was a hack to get something working in the early days. @nczhu noticed this and solved it in substrate-developer-hub/utxo-workshop#45 (still an impressive PR after all this time!).

With the salt hack long-gone, I think it is safe to remove the duplicate output check.

Wallet: Need to persist data directory between docker runs

The wallet's data directory holds keys in the local filesystem. This works fine when running natively, but when using docker, the keys are stored in the container and are not available on later runs.

I think this is sketched out already in the Dockerfile; we just have to enable it and test it.

End-to-end CI for wallet

Tuxedo strives for excellent test coverage, and we do have pretty good coverage in the core. However, the wallet is mostly tested manually.

This issue is narrowly scoped for a concrete starting point. After this is achieved, the test suite can be expanded as much as we need.

This issue is complete when we have a CI job that starts up a Tuxedo template node, starts up a wallet, sends a token transfer transaction to the node, and confirms that the final state is as expected.

I want to avoid a complex CI setup because nobody who works on this project regularly is a CI expert. For example, I'd like to avoid introducing new languages like TypeScript (even though it works very well for Moonbeam, we are not TS experts and we want simplicity). Ideally this can be done with nothing more than GitHub Actions and popular Rust crates. If some bash scripting is necessary, that may be acceptable, but it is not preferable.

End of Block Inherents (to tip block authors)

Many UTXO chains, going all the way back to Bitcoin itself, give the block author a tip that includes the imbalanced amounts from all other transactions in that block.

Tuxedo and its money piece have not done this yet because of two difficulties.

  1. Tracking a cumulative imbalance throughout the block is a challenge without storage items. Putting the sum in a UTXO is a no-go because users would need to include the imbalance output of the immediately previous transaction as an input. This would destroy the tx pool and fee market. We've solved this problem in #105.

  2. The second problem is creating the reward UTXO in a real transaction with a real output. It would be reasonably simple to add a reward UTXO to state at the close of a block, but it is important to also have that UTXO in a block body. All other UTXOs are in the block body, and wallets, indexers, etc. rely on this to understand the state of the chain.

This issue is to create a custom block authorship worker, and probably a custom block builder trait. We will not need huge changes to the default Substrate infrastructure here; we just need a way to call into the runtime at the end of the block and get any end-of-block extrinsics, just like the default does for beginning-of-block extrinsics (a rough sketch of such a hook follows below).

Side quest: While we are making a custom block builder trait, we may choose to pass in the previous block in a more obvious way than the inherent data approach introduced in #100.
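Returning to the main task, a very rough sketch of the runtime-side hook the custom block builder could call (the API name and method are assumptions, not existing Tuxedo code):

use sp_runtime::traits::Block as BlockT;

sp_api::decl_runtime_apis! {
    /// Hypothetical counterpart to `inherent_extrinsics`, called by the custom
    /// block builder after all pool transactions have been applied.
    pub trait EndOfBlockInherents {
        /// Any extrinsics (e.g. the block author reward transaction) that
        /// should be appended to the end of the block body.
        fn end_of_block_extrinsics() -> sp_std::vec::Vec<<Block as BlockT>::Extrinsic>;
    }
}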

Piece Configuration and Coupling

In our current PoC, all of the pieces are independent with no coupling or configuration whatsoever. Imminently, we will need to design and implement a system for configuring pieces which will also allow coupling them together by passing information such as verifiers or data types from one piece to another.

The most obvious use cases are coupling to the money piece so that all transactions can pay a tip and get prioritized appropriately. Similarly, many pieces will need a notion of money to take a security deposit for various piece-specific actions or otherwise provide economic incentives.

In talking with @coax1d, our best idea so far is to use a configuration trait system inspired by FRAME's, although we hope to keep it simpler either by design or just by convention. In such a system, any Verifier that needs configuration can be generic over a type that is bounded by its configuration trait. For example:

pub trait VotingConfig {
  type Money: Money;
  const DEPOSIT_AMOUNT: u128;
}

pub struct VotingVerifier<Config: VotingConfig> {
  // --snip--
}

In the snippet above we see that this pattern is nearly identical to FRAME's but is made simpler by a few soft conventions:

  1. Only verifiers that actually need configuration take the config type parameter.
  2. We call it Config instead of just T.
  3. We use associated constants rather than the Get trait.

None of these would be enforced, and developers who want to could still do it the FRAME way, but we should encourage only doing so when there is a good reason so as to keep Tuxedo code as simple and readable as possible.

The design may be shifted entirely.

Tests for PoE

The Proof of Existence piece does not currently have tests, but it needs them.

Full Tuxedo App Implementation

We have tested building on Tuxedo in multiple ways by creating various new Tuxedo pieces, such as Money, PoE, Kitties, Dex, etc.

In the spirit of a UTXO chain, we of course only verify that input-to-output transitions are valid. This allows the chain to check constraints and make sure state transitions are valid.

Now, what we don't have:

We don't have an application which actually runs, generates these input/output states, and sends transactions to a Tuxedo-based runtime for verification.

All we have to see some of this in action is some unit tests, and barely any integration testing besides some things in the wallet.

The task

Either take an existing Tuxedo piece and implement an application (such as a payment application) which sends live transactions to a Tuxedo-based chain, or create an entirely new Tuxedo piece and build the application on top of it.

Why?

To demonstrate that Tuxedo is indeed a framework worth building blockchain infrastructure on, it is critical to show application developers that this is:

  • Possible
  • Feasible for an end-user application to get sufficient blockchain attestations to its transactions.

Consider static Config trait for Tuxedo Pieces

This would further abstract on #69.

The strongly typed design seems like the desirable approach, given that it reduces on-chain computation and slims down the overhead in the runtime. However, this puts a further burden on the developer to design these things carefully and, frankly, to use more advanced Rust to achieve the functionality they are seeking.

This proposes something similar to a FRAME Config, which would look something like:

pub trait OrderConfig {
  type CoinA: Cash;
  type CoinB: Cash;
  type CoinC: Cash;
  // ...
}

pub struct Order<V, C>(PhantomData<(V, C)>);

impl<V, C: OrderConfig> OrderConfig for Order<V, C> {
  type CoinA = Coin<0>;
  type CoinB = Coin<1>;
  type CoinC = Coin<2>;
  // ...
}

Obviously this list gets long, which can be simplified with the help of some macros. The key to success here, I think, is how we design those macros.

The question is whether we can avoid this problem at all. If it is a strongly typed Config, my intuition is that this is just what we have to deal with: these types have to be defined and aggregated somehow.

Let me know what your thoughts are.

Wallet: Add command to clear all data

I'm lazy and I use the wallet in tests without specifying a custom data path.
I need to go to my platform-specific folder and delete it manually each time I need to restart the chain.
I think we should add a command to clear all wallet data.
It should either have a long name, so people can't run it by mistake, or be conditionally compiled, if we only want to keep it for testing purposes.
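A minimal sketch of how such a subcommand might look, assuming the wallet's CLI uses clap's derive API and the dirs crate for the platform-specific default path (both assumptions for illustration):

use clap::{Parser, Subcommand};
use std::path::PathBuf;

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Delete the wallet's entire data directory. The deliberately long name
    /// makes it hard to run by accident.
    PurgeWalletDataCompletely,
    // --snip-- the existing subcommands
}

fn main() -> std::io::Result<()> {
    let cli = Cli::parse();
    match cli.command {
        Command::PurgeWalletDataCompletely => {
            // Assumed default location; the real wallet resolves its own data path.
            let data_dir: PathBuf = dirs::data_dir()
                .unwrap_or_default()
                .join("tuxedo-template-wallet");
            std::fs::remove_dir_all(&data_dir)?;
            println!("Removed wallet data at {}", data_dir.display());
        }
    }
    Ok(())
}

Gating the variant behind a cfg feature flag would cover the conditional-compilation option mentioned above.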

Fees / Tips

Currently only the money piece has a way to specify a fee / tip, which is to have the input value exceed the output value. It is conceivable that other pieces may have similar inherent ways to express a tip, such as sacrificing an NFT or something. But most pieces will probably want to somehow couple with a money piece instance to offer some tokens as a fee.

Currently I have two ideas for this that are similar, but "inside out" from each other, and a third idea that is a larger redesign that I don't particularly like.

Piece verifier calls money verifier
The verifier that the piece developer is writing is coupled to the money piece through a method like the one described in #15. The piece verifier makes its own computations, then passes a few of the inputs over to the money verifier to get a priority, which it then uses as its own priority.

impl<Config: PieceConfig> Verifier for PieceVerifier<Config> {
    type Error = VerifierError;

    fn verify(...) -> Result<TransactionPriority, Self::Error> {
        // Do the piece-specific verification work first

        // Now pass the remaining inputs over to the money verifier for a priority
        Config::Money::verify(&input_data[4..])
    }
}

A variation on this idea is that the call to the money piece doesn't necessarily have to be to a literal Verifier but could be to some helper function.

Priority verifier wraps piece verifier

A similar but "inside out" idea is to have the money piece provide a verifier that does the prioritization, and is generic over an inner verifier which it then calls into after the prioritization is done. I scribbled this idea into code comments when developing the amoeba piece.

This idea does not require any coupling.

/// A wrapper that calculates priority and deducts tip / fee then calls the inner verifier.
pub struct PriorityVerifierWrapper<Inner: Verifier> {
    /// The number of inputs that should be treated as tip
    num_tip_inputs: u8,
    /// The inner verifier that should be called after the tip is processed
    inner: Inner,
}

impl<Inner: Verifier> Verifier for PriorityVerifierWrapper<Inner> {
    fn verify(...) -> Result<TransactionPriority, Self::Error> {
        // Calculate the tip first
        let priority = todo!("sum the value of the coins");

        // Now call the inner verifier
        self.inner.verify(&input_data[self.num_tip_inputs as usize..], output_data)?;

        Ok(priority)
    }
}

Native token baked into Tuxedo core

The final idea, which is a bigger redesign and which I don't like as much, is to pivot to including a native token at the Tuxedo executive level. Then the executive itself could prioritize the transactions and not make it the problem of the piece developer. This is more similar to Definition 11 from https://eprint.iacr.org/2018/469.pdf

Eliminate dependency on `frame-metadata`

Followup to #82

Now that paritytech/substrate#14398 has landed in Substrate, we should be able to remove frame-metadata, the last FRAME-specific crate, from our dependency tree.

This is a pretty small change; we just need to do the corresponding Substrate update (probably the next monthly release) and we will magically get the smaller dependency graph.

Implement Evictions

Background: #52

UPDATE:
I have pivoted from what I described in the original issue description, toward an earlier and simpler design as described in the first comment.

Original Description:

Currently the Tuxedo executive checks the verifiers on all inputs of all transactions. We previously considered the notion of "eviction", which was a way for a transaction to consume a UTXO without its verifier being satisfied.

As I described in the comment on the background issue, I think a better approach is to make the notion of Verifier less fundamental in the Tuxedo executive. Rather than always checking verifiers in the core, we never check them there. Instead, we generalize the notion of ConstraintChecker like we did in #41 so that there are three levels, as follows.

/// The simplest and most common way to express transactions logic.
/// Verifiers are checked for you automatically.
/// Only the input data is passed along to the implementer.
/// The same one we've had from the beginning
trait SimpleConstraintChecker { /* --snip-- */ }

/// The more expressive but still reasonably safe way to express transaction logic.
/// Verifiers are checked for you automatically.
/// The entire UTXO including the already-checked Verifier are passed along to the implementer.
/// The one that was introduced in #41
trait ConstraintChecker { /* --snip-- */ }

/// The most expressive and dangerous way to express transaction logic.
/// Nothing is checked for you.
/// Implementers get the raw input data and do as they please.
/// Checking Verifiers is entirely optional.
/// This one is new and should be used sparingly.
/// If you want to write an eviction, this is the way to do it.
trait FullPowerConstraintChecker { /*--snip--*/ }

// Two blanket implementations instead of the one we currently have.
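A self-contained sketch of how those two blanket implementations could chain together (the trait shapes below are simplified stand-ins, not the real Tuxedo signatures):

pub struct Output;      // stand-in for a full UTXO, verifier included
pub struct TypedData;   // stand-in for just the typed payload

pub trait SimpleConstraintChecker {
    fn check(&self, input_data: &[TypedData]) -> Result<u64, ()>;
}

pub trait ConstraintChecker {
    fn check(&self, inputs: &[Output]) -> Result<u64, ()>;
}

pub trait FullPowerConstraintChecker {
    fn check(&self, inputs: &[Output]) -> Result<u64, ()>;
}

// Blanket impl 1: anything simple is also a ConstraintChecker; it only ever
// sees the typed data, never the verifiers.
impl<T: SimpleConstraintChecker> ConstraintChecker for T {
    fn check(&self, inputs: &[Output]) -> Result<u64, ()> {
        let data: Vec<TypedData> = inputs.iter().map(|_| TypedData).collect();
        SimpleConstraintChecker::check(self, &data)
    }
}

// Blanket impl 2: anything that is a ConstraintChecker is also a
// FullPowerConstraintChecker whose implementation checks every input's
// verifier (elided here) before delegating.
impl<T: ConstraintChecker> FullPowerConstraintChecker for T {
    fn check(&self, inputs: &[Output]) -> Result<u64, ()> {
        // ... check each input's verifier here ...
        ConstraintChecker::check(self, inputs)
    }
}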

frame-specific crates are still in the dependency tree

$ cargo tree -i frame-support

frame-support v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3)
├── frame-system v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3)
│   └── pallet-transaction-payment v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3)
│       ├── node-template v4.0.0-dev (/home/joshy/ProgrammingProjects/tuxedo/node)
│       └── pallet-transaction-payment-rpc-runtime-api v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3)
│           └── pallet-transaction-payment-rpc v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3)
│               └── node-template v4.0.0-dev (/home/joshy/ProgrammingProjects/tuxedo/node)
└── pallet-transaction-payment v4.0.0-dev (https://github.com/paritytech/substrate.git?tag=monthly-2023-06#6ef184e3) (*)

It looks like at least one reason this happens is that we have the RPC for pallet-transaction-payment. We definitely don't want that, so a first step is to rip that out.

Multisig

For milestone 2 of the original grant proposal we need to implement a multisig wallet. As described in Section 2 of the EUTXO paper there are two ways to do a multisig, and I propose we implement both.

Verifier-Based Approach

The simple way is to create a Verifier implementation that is similar to SigCheck but requires a certain number of valid signatures rather than just a single one. This kind of multisig has been around since Bitcoin. It is also somewhat inconvenient for users because it requires aggregating all of the signatures through off-chain communication between the signatories.

/// A Threshold multisignature. Some number of member signatories collectively own inputs
/// guarded by this verifier. A valid redeemer must supply valid signatures by at least
/// `threshold` of the signatories. If the threshold is greater than the number of signatories
/// the input can never be consumed.
struct ThresholdMultiSignature {
  /// The minimum number of valid signatures needed to consume this input
  threshold: u8,
  /// All the member signatories, some (or all depending on the threshold) of whom must
  /// produce signatures over the transaction that will consume this input.
  signatories: Vec<H256>,
}

ConstraintChecker-Based Approach

The second way, as outlined in EUTXO, is to allow aggregating the signatures on-chain in a smart-contract-like way. This would be implemented as a ConstraintChecker. A signatory makes a proposal by consuming the original multisig and creating a new one with all the same data plus their proposal plus their own signature. Then other signatories consume that UTXO and replace it with one with all the same data plus their own signature. Eventually the final signatory consumes the UTXO and replaces it with just the proposal.
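A rough sketch of the state such an aggregation UTXO might carry (the names, the sr25519 signature type, and the encoded-proposal representation are all illustrative assumptions, not a settled design):

use sp_core::{sr25519::Signature, H256};

/// State carried by an in-progress on-chain multisig while approvals are gathered.
struct MultiSigInProgress {
    /// The member signatories, identified by their public keys.
    signatories: Vec<H256>,
    /// How many signatories must approve before the proposal takes effect.
    threshold: u8,
    /// The encoded proposal being approved (e.g. the outputs to be created).
    proposal: Vec<u8>,
    /// Approvals collected so far: who signed, and their signature over the proposal.
    approvals: Vec<(H256, Signature)>,
}

Each approval transaction would consume the previous MultiSigInProgress UTXO and recreate it with one more entry in approvals, until the threshold is reached.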

Genesis config

The current genesis config could be a little nicer.

https://github.com/Off-Narrative-Labs/Tuxedo/blob/7f1dabe37ce8621deb9df47d102ea775bbe2f763/frameless-runtime/src/lib.rs#L107-L109

In the UTXO model the state is (basically) just the utxo set. In this sense, the current design is perfect with its single Vec of outputs. But there are still a few things that may be desired.

  1. Including money in the default genesis config is unusual. The default should be empty, and the output that goes to SHAWN_PUB_KEY should be expressed in one or more of the specs in chain_spec.rs.
  2. We should figure out the best-practice way of expressing the genesis utxo set in the chain_spec.rs file and provide helper functions, a builder pattern, or something similar if necessary. Off the top of my head, maybe something like:
let genesis_utxo_set = vec![
  (Coin(5), SigCheck(ALICE_PUB_KEY)).into(),
  (Amoeba{ generation: 5, ..Default::default() }, UpForGrabs).into()
];

Aggregation Macros

Writing a complete Tuxedo runtime will typically involve composing several individual pieces together. In the current design this means creating aggregate verifier and redeemer enums that have a variant for each of the individual verifiers and redeemers. Currently there is no macro to make this job easier, and all of the code must be written manually, which is long and redundant.

Simple Aggregation

At minimum we will want a macro to provide all of the From implementations. I've prototyped such a macro at https://github.com/JoshOrndorff/proc-macro-experiment/blob/main/src/lib.rs. This is a good start. In fact, this same macro could be used to aggregate the verifier and the redeemer (and any other types where we want this pattern, in Tuxedo or elsewhere).

Implementing Verifier on the aggregate enum
The simple aggregator I've sketched is a good start; however, we may want it to do more. For example, we may want it to implement the Verifier trait on the aggregate verifier enum. This implementation is mechanical because each of the inner types is already a Verifier, so we just have to match on the variant and dispatch to the inner type. This would require dedicated macros for the verifier and the redeemer because the traits have different methods with different signatures.
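As a hedged sketch, reusing the AggregateVerifier enum shown below and the simplified verify signature used earlier in this thread (the real trait differs), the generated implementation would look roughly like:

impl Verifier for AggregateVerifier {
    type Error = VerifierError;

    fn verify(&self, input_data: &[u8]) -> Result<TransactionPriority, Self::Error> {
        match self {
            Self::Money(inner) => inner.verify(input_data),
            Self::Amoeba(inner) => inner.verify(input_data),
        }
    }
}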

Syntax
Assuming we want the traits implemented by the macros, there is the question of syntax. I can imagine two possibilities and there may well be more.

Option 1: Single macro with parameters used as follows

#[aggregate(Verifier)]
pub enum AggregateVerifier {
  Money(money::Verifier),
  Amoeba(amoeba::Verifier),
}

#[aggregate(ConstraintChecker)]
pub enum AggregateChecker {
  SigCheck(Sigcheck),
  UpForGrabs(UpForGrabs),
}

Option 2: Separate macros for the verifier and the redeemer

#[aggregate_verifier]
pub enum AggregateVerifier {
  Money(money::Verifier),
  Amoeba(amoeba::Verifier),
}

#[aggregate_redeemer]
pub enum AggregateRedeemer {
  SigCheck(Sigcheck),
  UpForGrabs(UpForGrabs),
}

In the second case, both macros may call out to a third macro that is similar to my prototyped helper for the aggregation alone. Currently I'm leaning toward option 1.

Additional Design Considerations

It is possible that the scope of the macros' jobs may still change. So even when we decide something here, there is no guarantee that we will stick with it forever and ever amen.

`Extrinsics::is_signed` method

// TODO what are the consequences of returning Some(false) vs None?

In the method Extrinsics::is_signed, what are the consequences of returning Some(false) vs None (default)?
This research was done on the Polkadot SDK at a56fd32.

Generally speaking, None should be used when no information about the signature is available, so Some(false) should be preferred in this case.

All the FRAME-related usages are skipped, since Tuxedo is not designed to work with FRAME.

Inherents procedural macro

Inherents are placed in a block before any other extrinsic. This method is used to break out of a loop that iterates over the extrinsics of a block as soon as a signed extrinsic is found.
Here, when None, a transaction is assumed to be unsigned and is later checked using Pallet::is_inherent, so there's no difference if we return None or Some(false).

Cumulus validate block implementation

This is another case where the method is used to look for inherents. In contrast to the previous case, here a None result will be considered true.

Substrate client transaction pool

Method BasicPool::handle_enactment:

Handles enactment and retraction of blocks, prunes stale transactions (that have already been enacted) and resubmits transactions that were retracted.

We can find only one usage, where a None result is considered to be true.
This allows explicitly signed transactions, or transactions where the pool doesn't know whether the transaction is signed or not, to be resubmitted when the block that contains them is pruned.

I went back through multiple refactorings of the code, and I found that this was initially set up to avoid resubmitting inherents in Substrate PR 3660. The behavior was different, since it initially defaulted the value to false; that was changed later in Substrate PR 4740.

Conclusions

At the moment I think it should be None, but I have to figure out why an extrinsic is not signed in Tuxedo. @JoshOrndorff any explanation?

Test the wallet CLI

There is now a good amount of code in the wallet, and it will continue to grow. It needs to be tested in an automated way.

I'm not exactly sure how to test CLI app code. This wallet interacts with user input, a filesystem, and a running node via an RPC connection. That seems harder to test than simple functions. One possible starting point is sketched below.
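A sketch using the assert_cmd crate and the binary name and show-balance subcommand that appear elsewhere in this issue tracker (the exact assertions, and the assumption that the wallet fails cleanly without a node, are placeholders):

use assert_cmd::Command;

#[test]
fn help_works() {
    Command::cargo_bin("tuxedo-template-wallet")
        .expect("wallet binary should be built")
        .arg("--help")
        .assert()
        .success();
}

#[test]
fn show_balance_fails_without_a_node() {
    // With no node listening on the default RPC port, the wallet should fail
    // cleanly rather than hang (an assumption about desired behavior).
    Command::cargo_bin("tuxedo-template-wallet")
        .expect("wallet binary should be built")
        .arg("show-balance")
        .assert()
        .failure();
}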

Tests for Runtime Upgrade

The runtime upgrade piece does not currently have tests, but it needs them. The tests for this piece will be somewhat more complicated than for a typical piece because of its special-case direct storage writing. Therefore, this piece will need to use externalities like a FRAME pallet typically does.

Better in-code branding

Some parts of the code are copied from various templates and need to be tuxedofied. Some that come to mind:

  • In the outer node: node-template -> tuxedo-node-template
  • In the runtime frameless-runtime -> tuxedo-template-runtime
  • Copyright years
  • Issues url

Proper general way to extract parachain inherent data

Our validate block implementation relies on a function called extract_parachain_inherent_data.

Currently this function assumes that the parachain inherent will be first. This should be improved so that it can be anywhere in the block.

The trick is that we need some way to determine whether we have the right constraint checker variant even though the aggregate constraint checker is generic. Perhaps something like IsSubType would be useful.
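A hedged sketch, loosely modeled on FRAME's IsSubType trait (the trait and the variant names below are illustrative; none of this is existing Tuxedo code):

/// Ask an aggregate type whether it is really a particular inner variant.
pub trait IsSubType<T> {
    fn is_sub_type(&self) -> Option<&T>;
}

pub struct SetParachainInfo;        // hypothetical inner constraint checker
pub enum OuterConstraintChecker {   // simplified aggregate for illustration
    ParachainInfo(SetParachainInfo),
    Other,
}

// The aggregation macro could generate one impl like this per variant.
impl IsSubType<SetParachainInfo> for OuterConstraintChecker {
    fn is_sub_type(&self) -> Option<&SetParachainInfo> {
        match self {
            Self::ParachainInfo(inner) => Some(inner),
            _ => None,
        }
    }
}

extract_parachain_inherent_data could then scan the whole block body for the first transaction whose checker yields Some from is_sub_type, rather than assuming the parachain inherent is the first extrinsic.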

Example Piece: On-chain light client

I just got back from PBA3, where I gave students an assignment to write a smart contract that works as an on-chain light client for another chain. Basically a btc-relay-like thing. We were using Solidity and ink!, so of course the solutions were all account-based. That got me thinking about how to implement the same thing in UTXOs. I think each foreign block header could be a separate UTXO.
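A very rough sketch of what such a piece's state might look like (every name and field here is an illustrative assumption):

use sp_core::H256;

/// One UTXO per foreign block header that the on-chain light client has accepted.
struct ForeignHeader {
    /// Which foreign chain this header belongs to, in case we track several.
    chain_id: u32,
    /// The foreign block number.
    number: u32,
    /// Hash of this header.
    hash: H256,
    /// Hash of the parent header, so a transaction submitting a new header can
    /// peek at the UTXO containing its parent to prove it extends the known chain.
    parent_hash: H256,
}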

Readme

Improve the readme to contain:

  • What's in the repo
  • How to build, test, and run the node
  • How to test the prototype wallet

Standards and utilities for wrapping and unwrapping the aggregate constraint checker onion

A typical Tuxedo runtime is aggregated from individual pieces. This aggregation may even be recursive (if #116 works out).

It is common that you need to convert an inner constraint checker into the outer one or vice versa. The aggregation macro automatically implements From<inner> for Outer, but there is no convenient way to go from outer to inner. One idea is to have the macro implement methods like this:

// This is an expansion of what would be macro-generated code.
// I've elided the generic verifier for simplicity.
pub enum OuterConstraintChecker {
  Money(money::Money),
  Amoeba(amoeba::Amoeba),
}

// An assumption about both directions of conversion is that each
// constituent constraint checker is a unique type.

// These impls for wrapping layers onto the onion already exist.
impl From<money::Money> for OuterConstraintChecker { ... }
impl From<amoeba::Amoeba> for OuterConstraintChecker { ... }

// These ones could be added
impl From<OuterConstraintChecker> for money::Money { ... }
impl From<OuterConstraintChecker> for amoeba::Amoeba { ... }

A related issue is transforming a Transaction<OuterConstraintChecker> into a Transaction<InnerConstraintChecker> or vice versa. @muraca suggested the idea of a transform method in 42f475c#diff-41e8e33684d0c2b86e827e306c10a1ce0787a681d8d9caf9191f178c09dfbe5eR23-R32 Previously I could only make that method work for wrapping the onion, but I think with the new From impls it would work in both directions.

Standardize Verifier names

I noticed that SigCheck and ThresholdMultiSignature follow two fundamentally different naming standards.
I think we should either push for long, descriptive names, converting SigCheck (where I think the word "Check" is superfluous) to something like SimpleSignature, or change the second one to some ugly contraption like ThreshMultiSig, which is still understandable but less elegant.

@JoshOrndorff said:

I slightly prefer the long extensive names. [...] If you find it worth the effort afterwards, I trust your judgement to rename things, but I'm also fine leaving it as it is.

Timestamp Inherent

As we begin to prepare for parachain support, I want to master handling inherents first. Much of the parachain logic is in handling the parachains inherent, so we should be sure we have the fundamentals right. Plus, many chains will find a timestamp piece useful; just look how ubiquitous pallet-timestamp is in the accounts world.

My understanding is that we will be able to rely entirely on the existing client-side timestamp inherent worker, which is already installed on the Tuxedo template's client side (both importing and authoring). This machinery is currently vestigial but harmless, as the runtime just ignores any provided inherent data at the moment.

Since we already have the inherent data coming in, we will just need to do some massaging (possibly including decoding or encoding) and then ultimately return a transaction that takes the corresponding action. In our case it will need to be a utxo style transaction.

Then the client-side block authoring machinery inserts those inherents somewhere. The default implementation is to put them first, which is perfect for the timestamp inherent. But as a side note, I hope to get creative about that when tackling block author rewards based on transaction fees properly.

I can imagine two strategies for how the runtime might expect the transaction to handle the timestamp data:

  1. Very UTXO - Each inherent consumes the previous timestamp and produces a new one. Each tx must have exactly one input and one output. There would necessarily need to be a genesis timestamp then. One drawback here is that user transactions that need to peek at the timestamp (and therefore specify it by utxo id) would have a fundamental lifetime expiry on their transactions. This is unfortunate because you usually need to prove that it is later than x time, not immediately after x moment.
    1a. UTXO - Same as above, except that you don't clean up the previous one. Maybe there is an economic spam-prevention mechanism, like costing money to add a new timestamp and giving a reward for cleaning old ones up. Actually, the economics could have a curve: the more you're updating the time by, the bigger the reward. This disincentivizes making small incremental updates. The same goes for rewarding more for cleaning up older ones; it might even cost you to clean up recent ones. If we are not cleaning up the UTXOs immediately, we may need a special one that points to the most recent one. Then to update, you really are racing for that tag.
  2. Stateful - The tx has no inputs or outputs, it just has attached data which is the timestamp. That timestamp is then stored in a well-known location. This is very accounts. Not preferred.

I'll probably start by implementing option 1 because it is UTXO-friendly and simple. Maybe I'll do 1a if I'm on a roll, because I think it sounds better designed overall. Otherwise I'll make a followup issue about it.
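A rough sketch of option 1 (the type names are assumptions for illustration):

/// A UTXO holding the chain's current notion of time, in milliseconds.
struct BestTimestamp(u64);

/// Constraint checker for the timestamp inherent transaction.
struct SetTimestamp;

// Checking logic, informally:
// 1. The transaction has exactly one input, which decodes as BestTimestamp(old).
// 2. It has exactly one output, which decodes as BestTimestamp(new).
// 3. Require new > old (perhaps by some minimum tick), and that new is close
//    enough to the timestamp supplied in the inherent data.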

One thing I'm not yet sure about is the notion of mandatory inherents. Is that a FRAME concept? If so, we'll need to decide whether we duplicate it or not. I prefer simplicity, but if it is necessary, then it is necessary. I think it would be fine to have a block where the clock didn't tick because updating it wasn't economically feasible. OTOH, you may argue that keeping things like the clock or the parachain-related info up to date is a top-priority public good and should have dedicated blockspace.

Runtime should select Consensus Authorities

The Tuxedo Template currently uses Aura and Grandpa consensus exactly like the FRAME node template. However, the runtime does not do anything interesting in terms of choosing the authorities. The template runtime just has a hard-coded list of authorities.

For a real-world chain we need some better method of choosing authorities. At minimum a simple PoA scheme, and ideally even some DPoS at some point. (But for the sake of scoping this issue, let's say simple PoA where you can add and remove authorities via transactions.)

The problem, as ever, is that there are no well-known storage locations. In FRAME, there is just a specific storage location where the authority IDs are stored, and the Aura API is implemented simply by reading that storage. With UTXOs this is not possible. So here are some options.

Regarding Author Selection

Continue With Aura

I honestly don't have a good idea of how to do this. The best thing I can think of is to make a "special case" storage location like we do with the header and runtime code. This is not very UTXO-y and not ideal in my opinion.

Switch to Nimbus

I wrote nimbus when I was working on Moonbeam, so I probably have some inherent bias toward using it. That being said, it does allow nicer compatibility with the UTXO model.

The main difference between nimbus and Parity's consensus implementations (Aura, Babe, Sassafras) is that nimbus does very little on the client side and leaves most of the checking to the runtime. Consequently, the runtime does not have to specify a complete authority set to the client side. In fact, in nimbus, the authority set can be unbounded. Here is how the Nimbus/Tuxedo authoring-import-execution flow would look:

  1. Authorities register by submitting a transaction. This stores their registration in a UTXO. Later they can de-register by submitting a transaction that consumes that same UTXO.
  2. To begin authoring, they include a pre-runtime digest that contains their public authority ID.
  3. As part of extrinsic inclusion, they include a nimbus inherent that peeks at their on-chain registration.
  4. To complete authoring, they attach a seal digest that contains a signature over the entire pre-block (the entire block except for the seal that we are constructing).
  5. When a foreign node imports the block, they pop the seal digest and check that the signature is valid by the identity specified in the pre-digest. This is the extent of the client-side checks. They do not (yet) verify that the author is the correct authority for this slot.
  6. When executing the block (specifically the nimbus inherent) we read the on-chain registration, check that it contains the same public authority ID as the pre-digest, and then decide if it is the correct authority for this slot.

Regarding Finality

None of what I sketched above has anything to do with Grandpa or finality in general. One observation is that parachains won't need a finality gadget because they just follow the relay chain. It is also possible to run a solo chain without finality. Perhaps we could make a modified version of Grandpa that somehow.... Not a fully baked idea.

Anyway, this first issue is complete when authoring is addressed, and we can handle finality in a followup.

Opaque Extrinsic

We don't currently have an opaque extrinsic type (I think). Honestly, this part was never very clear to me. I think we need a separate opaque extrinsic type to support runtime upgrades. Currently there is still a comment from the frameless runtime template:

pub mod opaque {
    use super::*;
    // TODO: eventually you will have to change this.
    type OpaqueExtrinsic = Transaction;
    // type OpaqueExtrinsic = Vec<u8>;

    // --snip--
}

Reconsider the strong `Redeemer` / `Verifier` separation

In the current design, Tuxedo has a very strong separation between the logic that allows a single input to be consumed and the logic that checks whether a transaction is valid. This issue is to consider the pros and cons of that separation and decide to what extent it should be kept.

Note on Terminology

Before going on, I'll observe that I accidentally used the same terms as IOHK's abstract model with different meanings 🤦

Each Tuxedo term below is followed by the corresponding abstract model term and its definition:

  • Verifier (abstract model term: none, the token logic is hard coded): the logic used to determine whether the consumed inputs and created outputs meet all necessary constraints
  • Redeemer (abstract model term: Verifier): the logic used to determine whether an individual input can be consumed
  • Witness (abstract model term: Redeemer): the data, likely a signature, that satisfies the logic to consume an input

Arguments for the strong separation

Pro 1: Writing pieces is very simple

With the current separation in place, the author of a piece doesn't need to care at all about whether individual inputs can be consumed. This is handled entirely by the Tuxedo executive. By the time execution reaches the individual piece, all that is left is typed data for the piece to check.

Arguments against the strong separation

Con 1: Pieces can't forcefully consume inputs

In some cases it is useful to consume a UTXO for housekeeping purposes.

Example 1: Consider a coupon with an expiration date. This UTXO would likely be guarded by a SigCheck so only the owner can use it. But if the expiration date passes, anyone should be able to clean up the state.

Example 2: In the proof of existence piece, two users could claim the same hash. When these competing claims are discovered, anyone should be able to clean up the later claim because the earlier one is more legitimate. This is encountered in the current codebase.

This can be addressed with a notion of evictions. See below.

Con 2: Dynamic ownership is not immediately intuitive

In FRAME, pallet sudo is commonly used to guard functionality behind a signature by the sudo key. This is similar to how SigCheck works, except that the key is dynamic. In Tuxedo this cannot be implemented as a redeemer, because the sudo key is stored in state (a UTXO) but redeemers do not have access to other state.

This could also be solved with evictions. (Again, see below).

Con 3: Paying a private user

In many cases, a Tuxedo piece may want to pay assets to a specific private user. Consider a treasury that wants to make a payment to a user, or an NFT game that charges some fees and allocates them to the original developer. When the output redeemers are not available, this cannot be done.

Possible Design Changes

Evictions

The idea of evictions is to add a field to the transaction that is similar to inputs or peeks (see #16), but the evicted inputs will be consumed regardless of whether their redeemer is satisfied. They will be forcefully removed.

This solves Con 1 by allowing anyone to call, for example, a CleanupExpiredCoupon verifier or a ResolvePoeDispute verifier which will forcefully remove the outdated state.

This also solves Con 2 with a little creative design. For an output that is intended to only be spent by root, make the redeemer NobodyCanEverSpend, then write a SpendByRoot verifier that evicts it as long as the payload to the verifier contains a signature by the sudo key.

Pass full outputs to verifier

Another alternative is to change the interface so that entire Outputs, including the redeemer, are passed to the verifiers. This solves Con 3 but has the disadvantage that the Verifier must now know which redeemers are available to the aggregate runtime.

Pass full transaction to verifier

We could weaken the separation even more by passing the entire transaction to the verifier. This has all of the disadvantages of passing just the full outputs. It also has the disadvantage that it is the verifier's responsibility to check the redeemers. It has the advantage that evictions are no longer necessary, because any verifier can simply choose to ignore the redeemer.

Both of the above

It may be possible to expose all of these APIs to piece developers, letting them choose which to implement. Pieces that can get by with a simple API can use it, and ones that require more powerful but more complex and dangerous APIs can use them. This may be possible with blanket implementations. Something like:

trait SimpleVerifier {
  // --snip--
}

trait PowerfulVerifier {
  // --snip--
}

impl<T: SimpleVerifier> PowerfulVerifier for T {
  // --snip--
}

POC Recursive tuxedo piece aggregation

These slacks and jacket came together from my tailor. And this tie, belt, and shoes were a gift from my wife. Tonight I'm wearing the tailor's suit with my wife's accessories and this shirt I got from the thrift shop.

One goal I've had in mind for Tuxedo, which is probably not mission critical but would facilitate a lot of composability and also be really elegant, is the ability to aggregate a Tuxedo ConstraintChecker out of constituent checkers that are themselves aggregated from their own constituents 🐢 🐢 🐢 ...

When designing the aggregation macro I've tried my best to make sure that this recursive aggregation is possible, even going to great lengths during #100 to preserve the property. However I have never actually tried it.

This issue is to create a POC crate that contains a Tuxedo runtime using a recursively aggregated constraint checker, and then write some basic functional tests showing that it works as expected (even with inherents 🤞).

The crux of the problem will be something like this pseudocode:

#[tuxedo_constraint_checker(Verifier)]
pub enum MoneyAndTime {
    /// Checks monetary transactions in a basic fungible cryptocurrency
    Money(money::MoneyConstraintChecker<0>),
    /// Set the block's timestamp via an inherent extrinsic.
    SetTimestamp(timestamp::SetTimestamp<Runtime>),
}

#[tuxedo_constraint_checker(Verifier)]
pub enum Biology {
    /// Checks Free Kitty transactions
    FreeKittyConstraintChecker(kitties::FreeKittyConstraintChecker),
    /// Checks that an amoeba can split into two new amoebas
    AmoebaMitosis(amoeba::AmoebaMitosis),
    /// Checks that a single amoeba is simply removed from the state
    AmoebaDeath(amoeba::AmoebaDeath),
    /// Checks that a single amoeba is simply created from the void... and it is good
    AmoebaCreation(amoeba::AmoebaCreation),
}

#[tuxedo_constraint_checker(Verifier)]
pub enum RecursivelyAggregatedRuntime {
    /// Business related transactions related to time and money
    MoneyAndTime(MoneyAndTime),
    /// Nature related transactions related to population regeneration
    Biology(Biology),
    
    // TODO Find a way to make business and nature coexist and interact holistically.
    // This comment is both a jab at capitalism's handling of climate change and also a real programming task
    // For this to be really compelling, we need a way to expose configs for the intermediate checkers
    // and do things like use the business currency to pay for amoeba minting atomically.
    
    /// Upgrade the Wasm Runtime
    RuntimeUpgrade(runtime_upgrade::RuntimeUpgrade),
}

Wallet: Support transactions to multiple recipients

Currently the wallet allows sending inputs from multiple owners in a single transaction, but only allows a single recipient for all specified outputs.

To support specifying a recipient on a per-output basis, we will need to expand the CLI somehow.

Spaces or Tabs

Yep, this is seriously the first issue on the repo 🚀

I don't want to dive into the long history of which is fundamentally better, because the answer is "neither; they're both fine; stop fussing about it and get back to coding". I just want to decide what is better in the context of this project in the Substrate ecosystem.

In #11 I added some CI, but cargo fmt is currently disabled because we have a combination of tabs and spaces and need to decide.

In favor of tabs

  • This is what the rest of the Substrate ecosystem uses, because this is what Gavin likes. Following suit will make it easier to merge changes from the upstream node template.

In favor of spaces

  • This is the default for cargo fmt and sticking with the default means we don't need to have a config file for cargo fmt. The repo is slightly simpler.
  • Tabs are used in the rest of the Substrate ecosystem because Gavin likes them. And we are Off Narrative Labs 🤣

Tests for executive

Most of the Tuxedo executive needs better test coverage. Although we have some end-to-end testing through the wallet, and we have pretty good test coverage on the redeemers, we still need unit tests for the transaction checking and storage updating logic.

Custom `GenesisBlockBuilder`

Related to my question on the Substrate Stack Exchange: Extrinsics in Genesis Block.

Substrate chains by default do not have any extrinsics in the genesis block. This is fine in many cases, but we desire to have extrinsics in the genesis block. The most concrete use case I have in mind is putting a timestamp there. That would allow us to remove the first-block hack introduced in #100. It would also be useful for initial token distributions and other use cases, as explained in the Stack Exchange question.

To achieve this we need to implement the BuildGenesisBlock trait. The de facto standard implementation in Substrate chains is the GenesisBlockBuilder struct, and I propose we take inspiration from it but just add some genesis extrinsics.

Just braindumping, the implementation might look something like this?

/// A genesis block builder similar to the default in sc_chain_spec, but this one
/// allows extrinsics in the genesis block.
pub struct TuxedoGenesisBlockBuilder<Block, B, E> {
	// Fields TBD: genesis_pre_state, genesis_extrinsics, commit_genesis_state,
	// backend, executor, _phantom, ...
}

// Inspired by https://paritytech.github.io/polkadot-sdk/master/src/sc_chain_spec/genesis.rs.html#124
impl<Block: BlockT, B: Backend<Block>, E: RuntimeVersionOf> BuildGenesisBlock<Block>
	for TuxedoGenesisBlockBuilder<Block, B, E>
{
	type BlockImportOperation = <B as Backend<Block>>::BlockImportOperation;

	fn build_genesis_block(self) -> sp_blockchain::Result<(Block, Self::BlockImportOperation)> {
		let Self { genesis_pre_state, genesis_extrinsics, commit_genesis_state, backend, executor, _phantom } = self;

		let genesis_state_version = resolve_state_version_from_wasm(&genesis_pre_state, &executor)?;
		let mut op = backend.begin_operation()?;
		let state_root =
			op.set_genesis_state(genesis_pre_state, commit_genesis_state, genesis_state_version)?;
		let mut genesis_block = construct_genesis_block::<Block>(state_root, genesis_state_version);

		// TODO push genesis_extrinsics into the block body
		// TODO calculate the new post-state. Maybe we can actually
		// execute the transactions somehow; that would be ideal.
		// Or maybe we can just manually construct the correct output state.

		Ok((genesis_block, op))
	}
}

Perhaps we could also clean up the in-runtime aspects of the broader genesis by only allowing extrinsics to create state. Like when you fill in the genesis config in the chain spec, you actually just fill in some JSON like this:

genesis: {
  code: ...,
  timestamp: ...,
  genesis_extrinsics: [Mint(100, Alice), Mint(50, Bob)]
}

Wallet: Store local database of UTXOs

Currently the wallet requires the user to manually remember which OutputRefs belong to them. A real wallet will maintain a local database of UTXOs belonging to the keys in its keystore.

General Syncing Strategy

The wallet will need to scan the blockchain and check each transaction to see if any new outputs are available for any of the keys it controls (or more generally, if it knows how to produce a redeemer that satisfies the corresponding validator). This process is kind of roughly described in https://bitcoin.stackexchange.com/q/75840

To sync, we could add a subcommand Sync. When given this command, the wallet will look up the last block to which it synced from the filesystem, and query the node for all blocks from there to the tip, noting any new UTXOs along the way.

New Keys

When a new key is added, the user should specify from which block the wallet should resync. If no block is provided, no resyncing happens. This is what a user would want when inserting never-before-used keys, which couldn't possibly have any existing UTXOs.

If the key has been used before and is just being restored to this wallet, the wallet will query the node for all blocks starting at the block the user provides (possibly genesis) to check for owned utxos.

Manually specifying owned UTXOs

We could consider letting the user skip the syncing, and instead manually specify the UTXOs that they already know are relevant. Maybe they are exported from another wallet. Although this may be premature optimization.

Storage Optimization

Users may choose to delete keys at any time. When this happens the wallet should also prune the now-useless database of UTXOs owned by the deleted keys. To make this operation performant, the wallet should store the database on a per-key basis.

Forks and Reorgs

We have to handle the reality of re-orgs. If the wallet syncs its local database to the tip of the chain, and then a few blocks are orphaned, the wallet will have inconsistent data. I haven't fully thought this through yet.

One strategy would be to rely on finality. But not all chains provide finality, and even when finality is available, this may lead to an unacceptable lag for the user.

Another strategy is to keep the local database stored on a per-block basis so that the wallet can easily revert the affected portion. When starting a sync, the wallet would query the node to see if its best known block is currently canonical. If so, great: just do a linear sync. If not, find a common ancestor, revert local state after the fork block, and then do a linear resync. We'll have to see what RPCs the node provides to make this possible. There may also be some hints in Polkadot JS for a good strategy.
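A rough sketch of that reorg-aware sync loop (the NodeRpc and LocalDb traits are placeholders for whatever RPC client and storage the wallet ends up using):

/// Placeholder interfaces for illustration only.
trait NodeRpc {
    /// Hash of the canonical block at the given height, if any.
    fn canonical_hash_at(&self, height: u32) -> Option<[u8; 32]>;
    /// Height of the node's current best block.
    fn best_height(&self) -> u32;
}

trait LocalDb {
    /// Highest block the wallet has synced, plus the hash it recorded for it.
    fn best_synced(&self) -> (u32, [u8; 32]);
    /// Hash the wallet recorded locally at a given height.
    fn local_hash_at(&self, height: u32) -> Option<[u8; 32]>;
    /// Forget all per-block UTXO data above the given height.
    fn revert_above(&mut self, height: u32);
    /// Scan one block and record any newly owned or spent UTXOs.
    fn apply_block(&mut self, height: u32, hash: [u8; 32]);
}

fn sync(node: &impl NodeRpc, db: &mut impl LocalDb) {
    // Walk back until the wallet's recorded block is still canonical.
    let (mut fork_point, _) = db.best_synced();
    while fork_point > 0 && node.canonical_hash_at(fork_point) != db.local_hash_at(fork_point) {
        fork_point -= 1;
    }
    db.revert_above(fork_point);

    // Linear resync from the common ancestor up to the node's tip.
    for height in (fork_point + 1)..=node.best_height() {
        if let Some(hash) = node.canonical_hash_at(height) {
            db.apply_block(height, hash);
        }
    }
}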

Instantiable Pieces

FRAME pallets can be made instantiable which allows multiple copies of the same pallet to be instantiated in a single runtime. For example, there could be two tokens in a single runtime.

Tuxedo currently does not have this notion. On the one hand, this may be easier and more natural in Tuxedo, because we could simply add another variant to the aggregate verifier like so:

#[aggregate(Verifier)]
pub enum AggregateVerifier {
  TokenA(money::Verifier),
  TokenB(money::Verifier),
  // --snip--
}

On the other hand, this alone is not sufficient because there is no way to tell which token a given UTXO represents, or to prevent the two pieces from consuming each other's tokens. This would be disastrous if, for example, a user paid fees in the wrong token.

Perhaps one way to do this would be to make the piece configurable over a token ID which is stored in the output. Then the aggregation macro would enforce that, if there are multiple instances of a piece, they must have different configurations.
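One way to sketch that, using a const generic as the token ID (whether the ID lives in a const generic or a config type is exactly the open design question):

/// A coin belongs to a specific token instance identified by ID.
struct Coin<const ID: u8>(u128);

/// The checker is parameterized the same way, so an instance for token 0 can
/// only consume and create Coin<0> outputs, never Coin<1>.
struct MoneyConstraintChecker<const ID: u8>;

// The runtime would then aggregate two distinct instances, e.g.
// TokenA(MoneyConstraintChecker<0>) and TokenB(MoneyConstraintChecker<1>).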

Replace `expect` with Error in `node/src/service.rs`

I was trying earlier to make this return an error when needed, instead of panicking. I wanted to do something like:

let parent_block = client_for_cidp
    .clone()
    .block(parent_hash)?
    .ok_or(sp_blockchain::Error::UnknownBlock(parent_hash))?
    .block;

There were some issues with the closure being async, and I'm not familiar with that, so please try to make this work without spending too much time on it; otherwise it's fine to leave it as it is.

Originally posted by @muraca in #100 (comment)

Wallet cannot sync blocks with Transactions

At some point the wallet broke on main, most likely when we did one of the dependency updates, like #95 for example.

I fixed part of the issue in #119, but there is still a bigger problem. This is the underlying issue that is making the CI fail on #100; we just don't see it on main because there are no transactions in most blocks on main.

To reproduce on main:

# Run the node in one terminal
./target/release/node-template --dev

# Sync the wallet a few times and notice that it works fine when blocks are empty.
./target/release/tuxedo-template-wallet show-balance
./target/release/tuxedo-template-wallet show-balance

# Send a transaction that goes through successfully on chain
./target/release/tuxedo-template-wallet spend-coins -o 90

# Try to sync the wallet again, but now it fails to sync past the block that contains a transaction.
./target/release/tuxedo-template-wallet show-balance

@muraca did some debugging and got this output, although I wasn't 100% sure how.

(screenshot of the debug output not reproduced here)

Use Workspace Dependencies

A lot of Substrate projects now specify shared dependencies once, in the workspace-level Cargo.toml.

Tuxedo should do that too.
