keep-network / keep-core

The smart contracts and reference client behind the Keep network

Home Page: https://keep.network

License: MIT License

JavaScript 26.34% Go 37.62% Shell 0.74% Makefile 0.16% CSS 0.03% Dockerfile 0.11% HTML 0.03% HCL 0.75% Solidity 16.95% Less 1.34% TypeScript 15.37% Python 0.48% TeX 0.06% Handlebars 0.02%
cryptocurrency privacy interoperability

keep-core's Introduction

keep-core

ECDSA contracts build status Random Beacon contracts build status Go client build status Docs Chat with us on Discord

The core contracts and reference client implementation behind the Keep network, a privacy, interoperability, and censorship-resistance toolkit for developers on Ethereum.

What’s a keep?

The network offers application developers keeps, small off-chain data containers for private storage and computation that can be opened, closed, and managed by smart contracts autonomously.

Keeps are maintained by stakers, actors who run nodes and have skin in the game, and collect fees for operating the network. When a new keep is opened, the requisite number of stakers are chosen via a BLS-based random beacon to maintain the keep, using a process called sortition.

The first type of keep launching with the network is the BondedECDSAKeep, allowing smart contracts to generate private keys and sign messages without endangering key material. ECDSA keeps mean decentralized signing, cross-chain applications, and new tools for custodial applications — from Solidity. This capability is used heavily by tBTC.

To learn more about ECDSA keeps, check out keep-ecdsa.

Getting Started

A good place to start is the docs directory.

Running a Node

To run your own node in the Keep Network, follow the Run Keep Node doc. Feedback on this process and the documentation is appreciated!

Moving to a new random beacon

The legacy core contracts of the random beacon have moved to the solidity-v1/ directory and are referred to as "v1". The newest "v2" random beacon contracts can be found in the solidity/random-beacon directory. The full specification of the "v2" random beacon is written in rfc-19-random-beacon-v2.adoc.

dApp Developers

dApp developers will be most interested in the smart contracts exposing Keep’s on-chain facilities.

The core contracts can be found in the solidity-v1/ directory. They can be used to request miner-resistant random numbers, as well as to create and manage keeps. To generate new ECDSA key material and request signatures, see the contracts in keep-ecdsa.

Client Developers

Client developers will be most interested in the reference Keep Go client and CONTRIBUTORS file, as well as the RFCs and repo directory structure 👇

Directory structure

The directory structure used in this repository is very similar to that used in other Go projects:

keep-core/
  Dockerfile
  main.go, *.go
  docs/
  solidity/ (1)
    ecdsa/
    random-beacon/
  solidity-v1/ (2)
  cmd/ (3)
  pkg/ (4)
    net/
      net.go, *.go (5)
      libp2p/
    chain/
      chain.go, *.go (5)
      ethereum/
        gen/
          gen.go (6)
    relay/
      relay.go, *.go
  1. Core contracts of the Keep network. Random beacon contracts are stored under /solidity/random-beacon, ECDSA contracts under /solidity/ecdsa.

  2. Legacy core contracts of the random beacon (v1). While the Keep network only uses Solidity at the moment, the directory structure allows for other contract languages.

  3. Keep client subcommands are implemented here, though they should be minimal and deal solely with user interaction. The meat of the commands should exist in a package fit for the appropriate purpose.

  4. All additional packages live in pkg/.

  5. The high-level interfaces for a package mypackage live in mypackage.go. net and chain are interface packages that expose a common interface to network and blockchain layers. Their subpackages provide particular implementations of these common interfaces. Only cmd/ and the main package should interact with the implementations directly.

  6. When a package requires generated code, it should have a subpackage named gen/. This subpackage should contain a single file, gen.go, with a //go:generate annotation to trigger appropriate code generation. All code generation is done with a single invocation of go generate at build time.

keep-core's People

Contributors

battenfield, decanus, dependabot[bot], dimpar, elderorb, eth-r, k0kk0k, kampleforth, kb0rg, l3x, liamzebedee, lispmeister, ljk662, lukasz-zimnoch, mhluongo, michalinacienciala, michalsmiarowski, navneetlakra, ngrinkevich, nicholasdotsol, nkuba, omahs, pdyraga, pschlump, r-czajkowski, rargulati, shadowfiend, starsitar, thevops, tomaszslabon


keep-core's Issues

Beacon group member choice strategy

When the beacon triggers new group creation, how do we choose the members of the new group?

Some potential goals:

  • Do we want to have a higher stake = more groups or more likelihood of being in more groups?
  • Do we want to give stakers that are in no or fewer groups higher likelihood of being included in a new group?

The most basic strategy, and the one employed by the Dfinity relay, is a simple Fisher-Yates shuffle over all candidates. Go's math/rand package has a Shuffle function that does this over a container, though it's worth noting that we'll need to be able to implement this in Solidity at minimum, and only optionally in Go.
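The Fisher-Yates strategy above can be sketched in Go. This is a minimal illustration, assuming the shuffle is seeded deterministically from a beacon output so every node computes the same group; the seed derivation and function names are hypothetical.

```go
package main

import "math/rand"

// selectGroup chooses groupSize members from candidates with a Fisher-Yates
// shuffle, as math/rand's Shuffle implements. Seeding the generator from a
// beacon output makes every node that sees the same beacon value select the
// same group.
func selectGroup(candidates []string, groupSize int, beaconSeed int64) []string {
	shuffled := append([]string(nil), candidates...)
	r := rand.New(rand.NewSource(beaconSeed))
	r.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled[:groupSize]
}
```

Note this gives every candidate equal weight; stake-proportional selection would need candidates repeated by stake weight or a different distribution, per the alternatives above.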

Another possibility is to create an exponential distribution over a sorted list of the possible members, sorted by the parameters we want to select for. There are some challenges to this, including the fact that it will be annoying to implement in Solidity, and we need the chain to know who is supposed to be in a group.

Yet another possibility, mentioned by @pschlump, is to create buckets for the preferred-treatment groups, e.g. allocate a members for high stakers and b members for stakers with low group membership, and then allocate n-a-b members via Fisher-Yates, where a << n and b << n.

This should capture most of our discussion from Flowdock.

To Makefile or not to Makefile, that is the question

Pulling this discussion into here so we don't get too far off topic in @rargulati's PR.

Quick breakdown from #26 :

@mhluongo:

Am I the only one who wants to use the Docker build in place of e.g. make? It's already standardized, and having two build systems seems strange.

@pschlump:

I have used both. IMHO it is much harder to get docker to do simple tasks. Before we are done we should create a Docker base build because it captures all the dependencies and is fully reproducible.

@mhluongo:

Agreed that make is simpler, but we already have a Dockerfile and are using it for CI and (eventually) reproducible builds; why would we maintain two? It'd lead to some people updating the Makefile and others updating the Dockerfile, I expect.

@Shadowfiend:

I don't consider docker build a build system for our application, I consider it the bit that sets up our docker image. The Dockerfile should just run make IMO.

I see make as answering “how do we build our project”.

I see docker as answering one variation of “how do we distribute our project”.

@mhluongo:

... I get the appeal, I really do. But I've seen this show before. The Dockerfile needs to understand as much as a Makefile to properly cache, and we already have all make-like functionality we need in CircleCI. Using both will lead to a dual build system.

🙏 spend a little time cozying up with Docker and CircleCI before we pull the trigger on adding make to the mix.

@Shadowfiend:

to properly cache

Can you explain some more about this/point to relevant docs? Not sure I follow.

@keep-network/go

libp2p network identity management

The exposure to the application layer here is giving a BroadcastChannel consumer access to an opaque net.TransportIdentifier, which in libp2p will likely be implemented by a peer.ID. We should also allow the BroadcastChannel to associate this identifier with an equivalent net.ProtocolIdentifier, which in DKG and relay operation will be a bls.ID.

The interface for this is already defined in BroadcastChannel as RegisterIdentifier:

// RegisterIdentifier associates the given network identifier with a
// protocol-specific identifier that will be passed to the receiving code
// in HandleMessageFunc.
//
// Returns an error if either identifier already has an association for
// this channel.
RegisterIdentifier(
networkIdentifier TransportIdentifier,
protocolIdentifier ProtocolIdentifier,
) error

Vesting-compatible staking token

KEEP is an ERC-20 token based on OpenZeppelin.

The token needs to support vesting grants as well as staking. Basically, we want any vested tokens to be stake-able. If the vesting grant is revoked, the stake should be withdrawn.

We can combine vesting and staking into one contract to make this easier, or find a way to make it compatible with OpenZeppelin's existing vesting contract (safer).

The staking registry should

  • Include a collection of each staker's public address and how much they've staked.
  • Make it easy / efficient to refer to a staker outside the staking contract
  • Require a delay when stakers want to withdraw their stake. This should be configurable at contract deploy time (in the contract constructor).

If it's easy / cheap to refer to a staker & check their status in an external contract, we can move any other metadata we need outside this contract and keep it simple / auditable.

Keep command interface

In the form of --help docs:

keep-client [<option> ...] <subcommand>

<option> are:
  --config: pointer to the TOML config file
  --...: overrides for config settings?

Subcommands:
  start [<option> ...]
    Starts the Keep client in the foreground. Currently this consists of the
    threshold relay client for the Keep random beacon and the validator client
    for the Keep random beacon.

    --[no-]relay: Enables or disables the relay client; enabled by default.
    --[no-]provider: Enables or disables the Keep provider client; enabled
       by default.

  contract [<option> ...] <contract> [<arg> ...]
    Invokes the named <contract>, passing it the given <arg>s.

    <option> are:
      --call: default, calls the contract without submitting a tx, returns result
      --submit: submits a tx, does not return result
      --nonce <number>: force a nonce
      --address: specify an address to override existing config

  relay <subcommand>

    Subcommands:

    request [<option> ...]:
      issues a relay request and outputs the request id, then waits for
      relay entry to be returned and prints that as well

      <option> are:
        --request-only: only prints request id
        --entry-only: only prints entry
    entry <id>:
      looks up the entry for the given id and prints it

  stake <address> <amount>:
    Stakes the given amount of KEEP from the given address.
    We can think about whether we want this one.

Want to build a quick prototype of this soon so we can actually run it and see how it feels.

See this discussion in Flowdock.

Environment variable for Ethereum account password

Let's investigate other projects with clients that connect to Ethereum and see if there is some sort of emerging standard env variable that stores account passwords. For now we're using KEEP_ETHEREUM_PASSWORD, but if there's a standard or semi-standard approach we should probably switch to it.

libp2p message signing + verifying

All messages sent over a net.BroadcastChannel via the libp2p transport should be signed on the sender end and verified on the receiver end. This isn't something the application layer should have to worry about, it should just happen transparently.

A message that is received but not verifiably signed can be logged and dropped.

Threshold group registration

Pushing the threshold group public key + metadata onto the chain. For now, this is just about calling whatever interface we settle on.

libp2p network encryption

When a BroadcastChannel's SendTo method is used, the libp2p transport will need to ensure that the message payload is encrypted so that only the sender and receiver are in the clear[1]. The sender side libp2p layer should encrypt, and the receiver side libp2p layer should decrypt, such that the application layer only needs to care that it is sending or receiving a direct message, and everything else is handled transparently.

[1]- In a perfect world even the sender would probably be encrypted, but we can worry about that at a later time when we approach the question of anonymity.

Pricing/rewards for threshold relay

We need a concrete plan for how entries will be priced for the relay, if and how that price might adjust, and how we want to handle the block reward.

libp2p provider initialization

Allow the application to connect to a libp2p network and receive back a net.Provider implementor. net.Provider is currently defined in a very limited fashion:

keep-core/pkg/net/net.go

Lines 67 to 75 in 0925eac

// Provider represents an entity that can provide network access.
//
// Currently only two methods are exposed by providers: the ability to get a
// named BroadcastChannel, and the ability to return a provider type, which is
// an informational string indicating what type of provider this is.
type Provider interface {
	ChannelFor(name string) BroadcastChannel
	Type() string
}

Flag/CLI implementation thoughts

Throwing open a thread for thoughts on the CLI and how we manage the flags/subcommands/etc. Raghav mentioned in this comment a few possibilities, but I'd prefer concrete thoughts and reasoned preferences here in addition to pointers to library possibilities.

I'd also like us to consider how far we can get using Go's builtin stuff, mostly so we can minimize external dependencies (since external dependencies are simply additional surface area for audits, etc).

Verifying group threshold keys

We've kicked around a few ideas for how best to do this. After researching #31 we know BLS aggregated signatures aren't an option due to the expensive pairing check scaling with the group size.

Keep client versioning

How should we bake the version and git repo hash number into the client app?
How should that solution interact with the Dockerfile and our build process?

Here's one partial solution (the makefile-with-version-scripts branch) that includes version and revision scripts and a Makefile.

Implement validation for config file

As suggested by @l3x, let's go beyond the basic “is this valid TOML” validation and do some deeper validation of entries in the config file as well. Split off from #95 and its initial implementations in #126.

Providing a config file on the command-line

Just a --config flag that can let us start feeding a config file into the system. It should route the config file through the new readConfig function.

See #52 for more, though this is quite literally just a flag that takes a path :)
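The flag itself might look like the sketch below; parseArgs is a hypothetical helper, and routing its result into readConfig is left out:

```go
package main

import "flag"

// parseArgs reads the --config flag from the given arguments so the path can
// be routed into the config reader.
func parseArgs(args []string) (string, error) {
	fs := flag.NewFlagSet("keep-client", flag.ContinueOnError)
	configPath := fs.String("config", "", "path to the TOML config file")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	return *configPath, nil
}
```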

libp2p authentication

  • Prove stake to join libp2p network
  • Block unproven clients

This will be done in a bootstrap node.

Proving stake here can start with sending the staking address to the bootstrap node, signed by the staking key, perhaps… This should allow the bootstrap node to verify that the signature on the staking address is correct for that address, and then check the network to ensure the address in question is staked.

We need to figure out whether the above approach is correct and implementable, or if we need a different one instead.

Let's use this issue to discuss implementation, and then spin up separate issues for proving stake and blocking unproven clients when we're ready to start working on them.

Identity and Authenticator

The only "Identity"-level information we should export is the PrivateAuthn and PublicAuthn methods. We defined them as follows:

For actions that involve a public key

type PublicAuthenticator interface {
    Verify(data []byte, sig []byte, id net.TransportIdentifier, pubkey [KeySize]byte) bool
    Encrypt(pubkey [KeySize]byte, msg []byte) ([]byte, error)
}

and for actions that require a private key

type PrivateAuthenticator interface {
    Sign(data []byte) ([]byte, error)
    Decrypt(msg []byte) ([]byte, error)
}

This allows us to remove the Identity interface (as presented in #69):

type Identity interface {
	ID() peer.ID
	AddIdentityToStore() (pstore.Peerstore, error)
	PubKey() ci.PubKey // TODO: keep or not?
	PubKeyFromID(peer.ID) (ci.PubKey, error)
}

As all of the Authenticator business happens in the net package, we ensure the identity being shipped around in the envelope is the peer.ID. We then define PubKeyFromID, which is:

func (pi PeerIdentity) PubKeyFromID(peer.ID) (ci.PubKey, error) {
	return pi.ID().ExtractPublicKey()
}

Handling relay entry request

  • Monitor the chain for a relay entry request in a block.
  • Determine if node is eligible to handle request.
  • Perform threshold signature to generate entry.
  • Submit relay entry to the chain.

libp2p rejection/DoS avoidance

Current thought here is a blacklist, perhaps IP-based, perhaps with exponential backoff. May depend on libp2p authentication, as ideally we would start rejecting connections from a blacklisted client at a very low level once they fail to prove their stake.

Nodes receive duplicate events from geth

What a time to be alive!

Occasionally (during testing), we'll receive duplicate stake events (for a given staker, all of the nodes will receive the stake event >1 times). This could be happening for a variety of reasons (likely geth retrying the messages), but, more importantly, we need to ensure we're de-duplicating these chain events!
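De-duplication could be as simple as the sketch below. The type and key scheme are assumptions; something like transaction hash plus log index would uniquely identify a geth event:

```go
package main

import "sync"

// eventDeduplicator drops chain events that were already handled.
type eventDeduplicator struct {
	mu   sync.Mutex
	seen map[string]bool
}

func newEventDeduplicator() *eventDeduplicator {
	return &eventDeduplicator{seen: make(map[string]bool)}
}

// shouldHandle reports whether the event is new, marking it as seen so a
// redelivery of the same event is ignored.
func (d *eventDeduplicator) shouldHandle(key string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[key] {
		return false
	}
	d.seen[key] = true
	return true
}
```

An unbounded map grows forever; a real implementation would evict entries once the corresponding blocks are final.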

Client `start` command

Implement the start command on the command-line, allowing the client to start up and join the relay, checking for necessary network and chain config.

See #52 for details. No need to implement --provider or --no-provider, since there's no Keep provider code yet. --no-relay should just spit out an error that no clients were selected for startup, so the program is exiting.

Group joining and distributed key generation

  • Triggers when new relay entry comes in; (stable) algorithm for deciding the members selected by the new relay entry
  • Perform distributed key generation.
    • Tests.
  • Register public key on chain.

Joining libp2p network

  • Discover the network.
  • Join the network.
  • Register to be part of the next signing group.

Settle on a Go dependency management solution

This started off in a playground repo... @keep-network/go please sound off/discuss :)

Currently two dimensions to this:

  • Which tool do we use? @l3x has a strong preference for Glide.
  • Do we commit vendored content? Alternatives include just committing lock files (or equivalents), and depending on keep-network clones of repos (so we are in complete control of the repos).

@Shadowfiend @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-356955855:

I would say our user is not the person who builds the Go code, it's the person who runs the Go code. If someone has issues building the Go code, that is directly our problem, and we should seek to fix it.

I'm interested in this argument:

it guards against upstream renames, deletes and commit history overwrites

I'm not sure I want to be guarded against that, insofar as I want to know when an upstream of ours is misbehaving in this fashion if we're depending on them. This immediately makes them less reliable, and should make us reconsider using them.

Tl;dr: the cons feel worse than the pros for me… It feels like a package manager that requires you to commit your vendor directory has simply failed as a package manager. But if everyone else feels like this is the way to go, let's do it. I will swallow my annoyance and live with 1.7 million LOC PRs that are secretly 78LOC PRs ;)

@keep-network/go Interested in thoughts from more of ya!

For the purposes of this PR, this discussion is moot. This is an experiments repo anyway, so it's the wild west and can just serve to inform the broader discussion. Merging!

@rargulati @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-357039589:

On the vendor thing - I agree with your annoyance. Go is a total disaster when it comes to package management - in the entirety of the thing. That being said, I’ve found that going with the conventions when starting out (vs going against them) saves a lot of headache. Buuuutttt it’s important to talk about these tradeoffs, because some conventions early on bite you really bad later (ie. ignoring package layout, monorepo vs many repo, etc)

The vendoring thing may bite us later - I don't know enough to confidently say it will. I've seen one Go vendoring solution at scale, and it ended up being bad - so dep is most definitely an improvement over that (bmizerany/vendor). I've got experience with Ruby (gems and bundling were ok) and Go (annoying, but not the worst). Friends point to cargo in Rust as a great solution.

For our client applications, it could make sense to revisit this.

@mhluongo @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-357116927:

From an in-person convo with @rargulati:

I wonder if we can get the best of both worlds by forking / hosting all dependencies, and only allowing vendoring from those forks? It means needing to touch another repo to update a dependency, but it also means we can't get leftpad'd or have someone else break the build, and it keeps PR's looking good.

Reproducible builds are important for this project, and that approach would kick it up a notch and make us focus on upstream changes.

Going to let Lex summarize his position from this post, which provides a fairly detailed exploration of the problem for reference.

@lex @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-364503887:

After a bit more effort managing keep-core dependencies I created this Go dependency management document. It has screen shots and details. Below is the summary:

  • gx leverages ipfs, which is cool but it's in alpha and not ready for production use.
  • dep is slightly better, but not by much. Feels like a beta and is time consuming.
  • glide just works.

Relationship between keys and bls.ID

We've got a few public keys and keylike material swimming around, so let's clarify if they are related or derived or what:

  • bls.ID represents a member's identity in a BLS group. It is keylike.
  • A client is associated with a staking public key, which refers to the parent chain account that staked.
  • A client has a public key in libp2p.

bls.ID could conceivably be derived from either the staking public key or the libp2p public key. Do we want to do this? Or do we want BLS IDs to be per group rather than having one BLS ID for all groups per client? If group members can be derived in a stable order from chain data (see #58, but almost certainly this will be true), the BLS IDs can simply be indices from 1 to group size based on the staker's place in the member list for the group.
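The indices-from-member-list idea above can be sketched as follows (the function name is hypothetical; the premise is the stable chain-derived ordering discussed above):

```go
package main

// memberIndices assigns each group member a 1-based index usable as its
// bls.ID, based on the staker's position in the chain-derived member list.
// This assumes every client derives the member list in the same stable order.
func memberIndices(members []string) map[string]int {
	indices := make(map[string]int, len(members))
	for i, member := range members {
		indices[member] = i + 1
	}
	return indices
}
```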

Can we derive or reuse the staking keys for libp2p communication? Do we want to (probably not, but want to record that)?

@mhluongo looking for your thoughts here to pull this conversation out from #56.

Chain-beacon/relay interface

More the relay's concern than the chain's, this is about how the chain and beacon/relay interact. Some open points:

  • Use promises.
  • Need handlers for group registration, relay entry appearance, and relay request appearance.
  • Need to be able to call relay entry submission (triggering relay entry appearance above).

Threshold ECDSA: Research/prototype implementation

ECDSA is the primary signature scheme used by many cryptocurrencies, including most Bitcoin variants and Ethereum. ECDSA doesn't support a threshold or aggregate configuration by default (versus newer schemes like BLS and Schnorr).

MPC can be used to make a threshold-supporting variant of ECDSA that's interoperable with the signatures on today's chains (on the libsecp256k1 curve). Goldfeder et al proposed such a construction, and provided an initial implementation and a revision in Java.

Goldfeder's implementation doesn't include a distributed key generation implementation; luckily, we should be able to use our existing work ported to secp256k1.

For our first pass, let's port ThresholdECDSA to Go, plugging in existing Go crypto libraries where possible. In particular, we'll make use of geth's secp256k1 package. We also need something for Pedersen commitments (or we need to write it ourselves) and a Paillier homomorphic encryption implementation (this looks like the most promising).

The outcome we're looking for here is a package that implements the threshold ECDSA protocol, without concern for networking. We'll cover networking and incentives in future work.

Client `relay` command

Implement the relay command on the command-line, with its subcommands for request and entry, allowing the client to submit a relay request.

See #52 for some details.

Implement Ethereum DKG-related chain interfaces

Currently specified in relay.go as ChainInterface:

// ChainInterface represents the interface that the relay expects to interact
// with the anchoring blockchain on.
type ChainInterface interface {
	// SubmitGroupPublicKey submits a 96-byte BLS public key to the blockchain,
	// associated with a string groupID. An error is generally only returned in
	// case of connectivity issues; on-chain errors are reported through event
	// callbacks.
	SubmitGroupPublicKey(groupID string, key [96]byte) error

	// OnGroupPublicKeySubmissionFailed takes a callback that is invoked when
	// an attempted group public key submission has failed. The provided groupID
	// is the id of the group for which the public key submission was attempted,
	// while the errorMsg is the on-chain error message indicating what went
	// wrong.
	OnGroupPublicKeySubmissionFailed(func(groupID string, errorMsg string)) error

	// OnGroupPublicKeySubmitted takes a callback that is invoked when a group
	// public key is submitted successfully. The provided groupID is the id of
	// the group for which the public key was submitted, and the activationBlock
	// is the block at which the group will be considered active in the relay.
	//
	// TODO activation delay may be unnecessary, we'll see.
	OnGroupPublicKeySubmitted(func(groupID string, activationBlock *big.Int)) error
}

Implement the underlying ties to the chain in the pkg/chain/ethereum/ package.

System upgrade handling

Some of the questions this issue can track (until we break it down further, if needed):

  • What conditions trigger a “hard fork”-style whole-system upgrade?
  • What does a whole-system upgrade consist of? e.g., what happens to the relay and its relay groups? [1] What happens to keeps?
  • What do softer upgrades look like (e.g., client upgrades)?

[1]- One concrete question for the relay is, say we want to change the curve on which we run BLS key generation and signing. How do we manage this? In particular, do we require all active relay groups to regenerate their keys post-upgrade? Do we support two curves in parallel? Do we drop all groups and restart the relay? Working out some of the implications of these decisions would be good. Several of them will likely apply to other places as well (e.g., Threshold ECDSA likely will have similar questions about crypto upgrades).

Ethereum threshold relay config

Allow a client to fetch, on startup, the current relay configuration from the contract. Need the contract to expose access + the Go interface implementation.

Relay configuration currently is group size + threshold. See beacon.ChainInterface:

// Config contains the config data needed for the beacon to operate.
type Config struct {
	GroupSize int
	Threshold int
}

// ChainInterface represents the interface that the beacon expects to interact
// with the anchoring blockchain on.
type ChainInterface interface {
	GetConfig() Config
}

Ethereum provider initialization

Allow the application to connect to the Ethereum network and receive back a chain.Provider implementor. chain.Provider is currently defined as chain.Handle:

// Handle represents a handle to a blockchain that provides access to the core
// functionality needed for Keep network interactions.
type Handle interface {
	BlockCounter() BlockCounter
	RandomBeacon() beacon.ChainInterface
	ThresholdRelay() relay.ChainInterface
}

Explore network level protocol enforcement

Per convo that @Shadowfiend and I had on Friday.

Our groups will be able to engage in a number of protocols: for example, a protocol to create groups (DKG), a protocol to create a signature (BLS, ECDSA, etc.), and potentially others. It would be helpful if we could leverage some machinery that libp2p has around protocol enforcement at the transport layer, where clients switch to speaking a certain wire protocol. Unfortunately, it's not immediately clear how to make StreamHandlers (see go-libp2p-host) work/stack with StreamHandlers set with go-libp2p-pubsub and our yamux transport, both of which define a StreamHandler.

For now, the thought is to put a string in our Protobuf Envelope type that resembles the libp2p wire structure, i.e. /dkg/join/0.0.1, /dkg/accuse/0.0.1. This lets us know the following:

  1. What protocol we are executing
  2. What phase of the protocol we are in
  3. What semantic version of the protocol phase is implemented in the executing client

We can compare these identifiers in the client, further enforcing communication. Later, we can implement the go-libp2p-host interface and create a custom transport that melds yamux and our own protocol enforcement, or something along those lines.

TODO: links in the above.
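Parsing the envelope identifier scheme described above could look like this sketch (parseProtocolID is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// parseProtocolID splits an envelope identifier like "/dkg/join/0.0.1" into
// the protocol, the phase of that protocol, and the semantic version of the
// phase as implemented in the executing client.
func parseProtocolID(id string) (protocol, phase, version string, err error) {
	parts := strings.Split(strings.TrimPrefix(id, "/"), "/")
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return "", "", "", fmt.Errorf("malformed protocol identifier %q", id)
	}
	return parts[0], parts[1], parts[2], nil
}
```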

Consider hardening client against local attacks

Catch-all issue to capture things we want to worry about if we take on hardening the client against an attacker who has local access, as well as discussion on whether we want to worry about it and such.

On-chain BLS signature verification

We need to be able to verify single and threshold signatures on-chain. Aggregated signature verification would also be great, though it appears to be too expensive for practical use without new precompiled contracts.

Getting this to done, I'd like to see

  • Detailed documentation
  • Go signing proof-of-concept
  • Solidity verification proof-of-concept
  • Naive gas cost analysis (mostly the pairings check)
  • Test vectors

Once we're there, the BLS verification implementation can be pulled out into its own lib.

Client `smoke-test` command

This is really about moving the current main function into a subcommand until we've got the other pieces of the system up and running. Today, main does a smoke test: it runs a simulated distributed key generation between 10 members with threshold 4, then verifies that the resulting members can produce a correct threshold signature.

I think this would look like:

  • Setting up the subcommand infrastructure (per #52) with just the smoke-test command.
  • Moving the current contents of main.go (except for the bls.Init) to cmd/smoke_test.go, and invoking it from main when the smoke-test command is selected.

Pulling a peer.ID out of an Ed25519 public key results in a no-op hashing function.

Flowdock conversation for context: https://www.flowdock.com/app/cardforcoin/tech/threads/45_UNoQkkzpufvVD48mdsHulGVD

Per the above, our public keys are Ed25519 keys generated from libp2p/go-libp2p-crypto, which is really just a thin wrapper around github.com/agl/ed25519.

More importantly, this means our corresponding peer.ID, generated by multiformats/go-multihash, is simply the concatenation of metadata plus parts of the public key, with no hashing. Why?

When we attempt to pull out the peer.ID from our PublicKey, the following happens:

hash, err := mh.Sum(append(Ed25519PubMultiCodec, b[len(b)-32:]...), mh.ID, 34)
if err != nil {
	return "", err
}

Here mh.ID is const ID = 0x00, which results in the no-op "hash" case in multiformats/go-multihash/sum.go.

Let's revisit if this is a good or bad thing, and what the potential impact of this decision is.
