keep-network / keep-core
The smart contracts and reference client behind the Keep network
Home Page: https://keep.network
License: MIT License
How should we bake the version and git repo hash number into the client app?
How should that solution interact with the Dockerfile and our build process?
Here's one partial solution (the makefile-with-version-scripts branch) that includes version and revision scripts and a Makefile.
We've got a few public keys and keylike material swimming around, so let's clarify if they are related or derived or what:
bls.ID represents a member's identity in a BLS group. It is keylike.
bls.ID could conceivably be derived from either the staking public key or the libp2p public key. Do we want to do this? Or do we want BLS IDs to be per group rather than having one BLS ID for all groups per client? If group members can be derived in a stable order from chain data (see #58, but almost certainly this will be true), the BLS IDs can simply be indices from 1 to group size based on the staker's place in the member list for the group.
Can we derive or reuse the staking keys for libp2p communication? Do we want to (probably not, but want to record that)?
@mhluongo looking for your thoughts here to pull this conversation out from #56.
Pushing the threshold group public key + metadata onto the chain. For now, this is just about calling whatever interface we settle on.
This started off in a playground repo... @keep-network/go please sound off/discuss :)
Currently two dimensions to this:
@Shadowfiend @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-356955855:
I would say our user is not the person who builds the Go code, it's the person who runs the Go code. If someone has issues building the Go code, that is directly our problem, and we should seek to fix it.
I'm interested in this argument:
it guards against upstream renames, deletes and commit history overwrites
I'm not sure I want to be guarded against that, insofar as I want to know when an upstream of ours is misbehaving in this fashion if we're depending on them. This immediately makes them less reliable, and should make us reconsider using them.
Tl;dr: the cons feel worse than the pros for me… It feels like a package manager that requires you to commit your vendor directory has simply failed as a package manager. But if everyone else feels like this is the way to go, let's do it. I will swallow my annoyance and live with 1.7 million LOC PRs that are secretly 78LOC PRs ;)
@keep-network/go Interested in thoughts from more of ya!
For the purposes of this PR, this discussion is moot. This is an experiments repo anyway, so it's the wild west and can just serve to inform the broader discussion. Merging!
@rargulati @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-357039589:
On the vendor thing - I agree with your annoyance. Go is a total disaster when it comes to package management - in the entirety of the thing. That being said, I’ve found that going with the conventions when starting out (vs going against them) saves a lot of headache. Buuuutttt it’s important to talk about these tradeoffs, because some conventions early on bite you really bad later (ie. ignoring package layout, monorepo vs many repo, etc)
The vendoring thing may bite us later - I don’t know, enough to confidently say it will. I've seen one Go vendoring solution at scale, and it ended up being bad - so dep is most definitely an improvement over that (bmizerany/vendor). I’ve got experience with Ruby (gems and bundling were ok) and Go (annoying, but not the worst). Friends point to cargo in Rust as a great solution.
For our client applications, it could make sense to revisit this.
@mhluongo @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-357116927:
From an in-person convo with @rargulati:
I wonder if we can get the best of both worlds by forking / hosting all dependencies, and only allowing vendoring from those forks? It means needing to touch another repo to update a dependency, but it also means we can't get leftpad'd or have someone else break the build, and it keeps PR's looking good.
Reproducible builds are important for this project, and that approach would kick it up a notch and make us focus on upstream changes.
Going to let Lex summarize his position from this post, which provides a fairly detailed exploration of the problem for reference.
@lex @ https://github.com/keep-network/go-experiments/pull/1#issuecomment-364503887:
After a bit more effort managing keep-core dependencies, I created this Go dependency management document. It has screenshots and details. Below is the summary:
- gx leverages ipfs, which is cool but it's in alpha and not ready for production use.
- dep is slightly better, but not by much. Feels like a beta and is time consuming.
- glide just works.
We'll need something other than https://github.com/thomasWeise/docker-texlive and should pin circle/node:latest in CI.
Some of the questions this issue can track (until we break it down further, if needed):
[1]- One concrete question for the relay is, say we want to change the curve on which we run BLS key generation and signing. How do we manage this? In particular, do we require all active relay groups to regenerate their keys post-upgrade? Do we support two curves in parallel? Do we drop all groups and restart the relay? Working out some of the implications of these decisions would be good. Several of them will likely apply to other places as well (e.g., Threshold ECDSA likely will have similar questions about crypto upgrades).
We need to know how to authenticate messages from nodes off-chain
Refs #10
We need a concrete plan for how entries will be priced for the relay, if and how that price might adjust, and how we want to handle the block reward.
They're currently hard coded in the init() function.
Having the build process auto-populate the Version and Revision numbers in main.go will compile that info into the keep client binary, which can later be reported by passing --version to the CLI.
All messages sent over a net.BroadcastChannel via the libp2p transport should be signed on the sender end and verified on the receiver end. This isn't something the application layer should have to worry about; it should just happen transparently.
A message that is received but not verifiably signed can be logged and dropped.
Let's investigate other projects with clients that connect to Ethereum and see if there is some sort of emerging standard env variable that stores account passwords. For now we're using KEEP_ETHEREUM_PASSWORD, but if there's a standard or semi-standard approach we should probably switch to it.
The only "Identity"-level information we should export is the PrivateAuthn and PublicAuthn methods. We defined them as follows:
For actions that involve a public key
type PublicAuthenticator interface {
	Verify(data []byte, sig []byte, sender net.TransportIdentifier, pubkey [KeySize]byte) bool
	Encrypt(pubkey [KeySize]byte, msg []byte) ([]byte, error)
}
and for actions that require a private key
type PrivateAuthenticator interface {
	Sign(data []byte) ([]byte, error)
	Decrypt(msg []byte) ([]byte, error)
}
This allows us to remove the Identity interface (as presented in #69):
type Identity interface {
	ID() peer.ID
	AddIdentityToStore() (pstore.Peerstore, error)
	PubKey() ci.PubKey // TODO: keep or not?
	PubKeyFromID(peer.ID) (ci.PubKey, error)
}
As all of the Authenticator business happens in the net package, we ensure the identity being shipped around in the envelope is the peer.ID. We then define a PubKeyFromID which is:
func (pi PeerIdentity) PubKeyFromID(id peer.ID) (ci.PubKey, error) {
	return id.ExtractPublicKey()
}
When the beacon triggers new group creation, how do we choose the members of the new group?
Some potential goals:
The most basic strategy, and the one employed by the Dfinity relay, is a simple Fisher-Yates shuffle over all candidates. Go's math/rand package has a Shuffle function that does this over a container, though it's worth noting that we'll need to be able to implement this in Solidity at minimum, and only optionally in Go.
Another possibility is to create an exponential distribution over a sorted list of the possible members, sorted by the parameters we want to select for. There are some challenges to this, including the fact that it will be annoying to implement in Solidity, and we need the chain to know who is supposed to be in a group.
Yet another possibility, mentioned by @pschlump, is to create buckets for the preferred-treatment groups, e.g. allocate a members for high stakers and b members for stakers with low group membership, and then allocate n-a-b members via Fisher-Yates, where a << n and b << n.
This should capture most of our discussion from Flowdock.
In the form of --help docs:
keep-client [<option> ...] <subcommand>
<option> are:
--config: pointer to the TOML config file
--...: overrides for config settings?
Subcommands:
start [<option> ...]
Starts the Keep client in the foreground. Currently this consists of the
threshold relay client for the Keep random beacon and the validator client
for the Keep random beacon.
--[no-]relay: Enables or disables the relay client; enabled by default.
--[no-]provider: Enables or disables the Keep provider client; enabled
by default.
contract [<option> ...] <contract> [<arg> ...]
Invokes the named <contract>, passing it the given <arg>s.
<option> are:
--call: default, calls the contract without submitting a tx, returns result
--submit: submits a tx, does not return result
--nonce <number>: force a nonce
--address: specify an address to override existing config
relay <subcommand>
Subcommands:
request [<option> ...]:
issues a relay request and outputs the request id, then waits for
relay entry to be returned and prints that as well
<option> are:
--request-only: only prints request id
--entry-only: only prints entry
entry <id>:
looks up the entry for the given id and prints it
stake <address> <amount>:
Stakes the given amount of KEEP from the given address.
We can think about whether we want this one.
Want to build a quick prototype of this soon so we can actually run it and see how it feels.
Current thought here is a blacklist, perhaps IP-based, perhaps with exponential backoff. May depend on libp2p authentication, as ideally we would start rejecting connections from a blacklisted client at a very low level once they fail to prove their stake.
Contract-based ability to associate a staking key with an operating key that has delegated authority to participate in the Keep network.
Not the highest-priority repo security practice, but it's low-hanging fruit.
https://github.com/NebulousLabs/glyphcheck looks good.
The exposure to the application layer here is giving a BroadcastChannel consumer access to an opaque net.TransportIdentifier, which in libp2p will likely be implemented by a peer.ID. We should also allow the BroadcastChannel to associate this identifier with an equivalent net.ProtocolIdentifier, which in DKG and relay operation will be a bls.ID.
The interface for this is already defined in BroadcastChannel as RegisterIdentifier:
keep-core/pkg/net/interface.go
Lines 89 to 98 in 95aed3b
When a BroadcastChannel's SendTo method is used, the libp2p transport will need to ensure that the message payload is encrypted so that only the sender and receiver are in the clear[1]. The sender side libp2p layer should encrypt, and the receiver side libp2p layer should decrypt, such that the application layer only needs to care that it is sending or receiving a direct message, and everything else is handled transparently.
[1]- In a perfect world even the sender would probably be encrypted, but we can worry about that at a later time when we approach the question of anonymity.
Allow a client to fetch, on startup, the current relay configuration from the contract. Need the contract to expose access + the Go interface implementation.
Relay configuration currently is group size + threshold. See beacon.ChainInterface:
keep-core/pkg/beacon/beacon.go
Lines 16 to 26 in 90979a0
We need to be able to verify single and threshold signatures on-chain. Aggregated signature verification would also be great, though it appears to be too expensive for practical use without new precompiled contracts.
Getting this to done, I'd like to see
Once we're there, the BLS verification implementation can be pulled out into its own lib.
Flowdock conversation for context: https://www.flowdock.com/app/cardforcoin/tech/threads/45_UNoQkkzpufvVD48mdsHulGVD
Per the above, our public keys are Ed25519 keys generated from libp2p/go-libp2p-crypto, which is really just a thin wrapper around github.com/agl/ed25519.
More importantly, this means our corresponding peer.ID, generated by multiformats/go-multihash, is simply the concatenation of metadata + parts of the public key, but no hashing. Why?
When we attempt to pull out the peer.ID from our PublicKey, the following happens:
hash, err := mh.Sum(append(Ed25519PubMultiCodec, b[len(b)-32:]...), mh.ID, 34)
if err != nil {
return "", err
}
Where mh.ID is const ID = 0x00, which results in the no-op "hash" case in multiformats/go-multihash/sum.go.
Let's revisit if this is a good or bad thing, and what the potential impact of this decision is.
Current idea is shipping to GCP.
Catch-all issue to capture things we want to worry about if we take on hardening the client against an attacker who has local access, as well as discussion on whether we want to worry about it and such.
More information in the following thread:
https://www.flowdock.com/app/cardforcoin/tech/threads/HIz11LPnkNO-qRlmJXOWtL6TzS_
Allow the application to connect to a libp2p network and receive back a net.Provider implementor. net.Provider is currently defined in a very limited fashion:
Lines 67 to 75 in 0925eac
As found in #17, dynamic dependencies required by the BLS implementation make builds and end-user deploys a PITA.
In particular, https://github.com/dfinity/go-dfinity-crypto/ depends on https://github.com/dfinity/bn, which is intended to be installed or included pre-built as a dynamic lib.
We need to either get to a single static executable, or package the client up with any dependencies in a platform-specific way for at least macOS and reasonable Debians.
This is really about moving the current main function, which does a smoke test by doing a simulated distributed key generation between 10 members with threshold 4 and then verifying that the resulting members can do a threshold signature correctly, into a subcommand until we've got the other pieces of the system up and running.
I think this would look like a smoke-test command: move main.go (except for the bls.Init) to cmd/smoke_test.go, and invoke it from main when the smoke-test command is selected.
Implement the start command on the command-line, allowing the client to start up and join the relay, checking for necessary network and chain config.
See #52 for details. No need to implement --provider or --no-provider, since there's no Keep provider code yet. --no-relay should just spit out an error that no clients were selected for startup, so the program is exiting.
What a time to be alive!
Occasionally (during testing), we'll receive duplicate stake events (for a given staker, all of the nodes will receive the stake event more than once). This could be happening for a variety of reasons (likely geth retrying the messages), but, more importantly, we need to ensure we're de-duplicating these chain events!
This will be done in a bootstrap node.
Proving stake here can start with sending the staking address to the bootstrap node, signed by the staking key, perhaps… This should allow the bootstrap node to verify that the signature on the staking address is correct for that address, and then check the network to ensure the address in question is staked.
We need to figure out whether the above approach is correct and implementable, or if we need a different one instead.
Let's use this issue to discuss implementation, and then spin up separate issues for proving stake and blocking unproven clients when we're ready to start working on them.
Currently we initialize our Network (*swarm.Network) without the libp2p Protector. To encrypt our connections, we should explore implementing this interface and injecting it into the instantiation of the libp2p network.
Just a --config flag that can let us start feeding a config file into the system. It should route the config file through the new readConfig function.
See #52 for more, though this is quite literally just a flag that takes a path :)
Currently specified in relay.go as ChainInterface:
keep-core/pkg/beacon/relay/relay.go
Lines 9 to 30 in 90979a0
Implement the underlying ties to the chain in the pkg/chain/ethereum/ package.
Pulling this discussion into here so we don't get too far off topic in @rargulati's PR.
Quick breakdown from #26 :
Am I the only one who wants to use the Docker build in place of eg make? It's already standardized and having two build systems seems strange.
I have used both. IMHO it is much harder to get docker to do simple tasks. Before we are done we should create a Docker base build because it captures all the dependencies and is fully reproducible.
Agreed that make is simpler, but we already have a Dockerfile and are using it to do CI and (eventually) reproducible builds- why would we maintain two? It'd lead to some people updating the Makefile and others updating the Dockerfile, I expect.
I don't consider docker build a build system for our application, I consider it the bit that sets up our docker image. The Dockerfile should just run make IMO.
I see make as answering “how do we build our project”.
I see docker as answering one variation of “how do we distribute our project”.
... I get the appeal, I really do. But I've seen this show before. The Dockerfile needs to understand as much as a Makefile to properly cache, and we already have all make-like functionality we need in CircleCI. Using both will lead to a dual build system.
🙏 spend a little time cozying up with Docker and CircleCI before we pull the trigger on adding make to the mix.
to properly cache
Can you explain some more about this/point to relevant docs? Not sure I follow.
@keep-network/go
Having a go:generate setup in pkg/chain/gen/ that will generate our Go contract bindings in an automated fashion.
Implement the relay command on the command-line, with its subcommands for request and entry, allowing the client to submit a relay request.
See #52 for some details.
ECDSA is the primary signature scheme used by many cryptocurrencies, including most Bitcoin variants and Ethereum. ECDSA doesn't support a threshold or aggregate configuration by default (versus newer schemes like BLS and Schnorr).
MPC can be used to make a threshold-supporting variant of ECDSA that's interoperable with the signatures on today's chains (on the secp256k1 curve). Goldfeder et al proposed such a construction, and provided an initial implementation and a revision in Java.
Goldfeder's implementation doesn't include a distributed key generation implementation- luckily, we should be able to use our existing work ported to secp256k1.
For our first pass, let's port ThresholdECDSA to Go, plugging in existing Go crypto libraries where possible. In particular, we'll make use of geth's secp256k1 package. We also need something for Pedersen commitments (or we need to write it ourselves) and a Paillier HME implementation (this looks like the most promising).
The outcome we're looking for here is a package that implements the threshold ECDSA protocol, without concern for networking. We'll cover networking and incentives in future work.
Allow the application to connect to the Ethereum network and receive back a chain.Provider implementor. chain.Provider is currently defined as chain.Handle:
Lines 22 to 28 in 90979a0
More the relay's concern than the chain's, this is about how the chain and beacon/relay interact. Some open questions:
We've kicked around a few ideas for how best to do this. After researching #31 we know BLS aggregated signatures aren't an option due to the expensive pairing check scaling with the group size.
Throwing open a thread for thoughts on the CLI and how we manage the flags/subcommands/etc. Raghav mentioned in this comment a few possibilities, but I'd prefer concrete thoughts and reasoned preferences here in addition to pointers to library possibilities.
I'd also like us to consider how far we can get using Go's builtin stuff, mostly so we can minimize external dependencies (since external dependencies are simply additional surface area for audits, etc).
Per convo that @Shadowfiend and I had on Friday.
Our groups will be able to engage in a number of protocols. For example, a protocol to create groups (DKG), a protocol to create a signature (BLS, ECDSA, etc), and potentially other protocols. It's helpful if we can leverage some machinery that libp2p has around protocol enforcement at the transport layer: clients switch to speaking a certain wire protocol. Unfortunately, it's not immediately clear how to make StreamHandlers (see go-libp2p-host) work/stack with StreamHandlers set with go-libp2p-pubsub and our yamux transport, both of which define a StreamHandler.
For now, the thought is to put a string in our Protobuf Envelope type that resembles the libp2p wire structure: i.e. /dkg/join/0.0.1, /dkg/accuse/0.0.1. This lets us know the following:
We can compare these identifiers in the client, further enforcing communication. Later, we can implement the go-libp2p-host interface and create a custom transport that melds yamux and our own protocol enforcement. Or Something.
TODO: links in the above.
KEEP is an ERC-20 token based on OpenZeppelin.
The token needs to support vesting grants as well as staking. Basically, we want any vested tokens to be stake-able. If the vesting grant is revoked, the stake should be withdrawn.
We can combine vesting and staking into one contract to make this easier, or find a way to make it compatible with OpenZeppelin's existing vesting contract (safer).
The staking registry should
If it's easy / cheap to refer to a staker & check their status in an external contract, we can move any other metadata we need outside this contract and keep this simple / auditable.