lightningnetwork / lnd
Lightning Network Daemon ⚡️
License: MIT License
When adding an HTLC to a commitment transaction, care must be taken to ensure that the HTLC value isn't below the dust-limit. Otherwise, either party may find themselves in an undesirable situation wherein their current commitment transaction won't be relayed or accepted by nodes on the network due to the policy around dust-limits.
Therefore, the logic around accepting/forwarding/clearing HTLC's needs to be cognizant of these limits in order to maintain the invariant that the current commitment transaction is eligible to be broadcast and included within the next block.
At the Lightning Summit in Milan we agreed on a mechanism to allow sub-dust HTLC's on the network:
The above scheme gets around the issue by introducing local consensus on what constitutes dust. With this scheme we can safely support HTLC's down to a single satoshi.
The state machine (lnwallet/channel.go) should be modified to internally implement the above logic:
- When adding or receiving an HTLC (.AddHTLC/ReceiveHTLC), a bool should be set if the value is below the current dust-limit.

An RPC should be added, similar to walletbalance, which shows the total available payment bandwidth across all open channels.
Steps to complete this issue:
- Add the new RPC definition to lnrpc/rpc.proto, then use the gen_protos.sh tool to re-compile the latest definitions.
- Implement the channelbalance RPC within the rpcServer. This will involve obtaining channel snapshots from all the peers and tallying the total available channel capacity within each channel.
- Add the channelbalance RPC call to cmd/lncli.

In order to generate new protobuf files using lnrpc/gen_protos.sh, the grpc-ecosystem repo must be cloned into $GOPATH/src/github.com. Alternatively, the include path in gen_protos.sh could be changed to point to $GOPATH/src/github.com/lightningnetwork/lnd/vendor/github.com.
Currently there's a hard-coded fee of 5k satoshis on the commitment transaction. This sufficed for initial tests but needs to be made dynamic for real-world use in order to maintain the invariant that the current commitment transaction is able to be included in the next block.
At the Lightning Summit, we agreed that for now the initiator pays all fees on the commitment transaction. Therefore any additional fees due to added HTLC's should be subtracted from the initiator's balance if necessary. Additionally, messages need to be added to the wire protocol to allow the initiator to signal that they wish to change their desired fee rate on the commitment transaction.
This section within the v1 specification is still being fleshed out. As the specification becomes more concrete, this issue will be updated to detail the requirements as laid out by the spec.
A related issue is the lack of any dynamic fee estimation in lnd currently. In order to robustly implement dynamic commitment fees, we'll also need an internal model which predicts the fee required to make it into the Nth block.
type FeeEstimator interface {
	// EstimateFee takes in a target for the number of blocks
	// until an initial confirmation and returns the estimated fee
	// expressed in satoshis/byte.
	EstimateFee(numBlocks uint32) uint64

	// EstimateConfirmation takes a fee expressed in satoshis/byte
	// and returns the estimated number of blocks until an initial
	// confirmation.
	EstimateConfirmation(satPerByte int64) uint32
}
As a placeholder, the above interface can be given a concrete implementation which simply returns our current hard-coded fee of 5,000 satoshis. This will allow for a two-step migration, with the first step being a simple move-only refactoring.
See these relevant areas within the spec:
Relevant btcd PR:
Currently cooperative channel closure assumes that there are no outstanding HTLC's before closing. This is a naive assumption in practice and can result in a potential loss of funds for one side due to a "vanishing" HTLC.
Instead, once a cooperative channel closure has been initiated, both sides should wait until the channel has been "drained" before proceeding with the final closure process (with a possible timeout to perform a unilateral close). It was discussed at the Lightning Summit that the initiator of the channel pays the closure fees. Our implementation currently implements this, but the fee itself is hard-coded.
To fix this issue, when processing a request for a cooperative channel closure from the htlcSwitch, the channelManager should signal to the htlcManager for that channel to reject any incoming HTLC's, wait for complete channel draining, then finally complete the cooperative closure.
Switch to spec messages
Add graceful shutdown
When I tried to move bitcoins from a p2pkh address to a p2wkh address, I found that our decode method can't recognise the p2wkh address format.
$ lncli walletbalance
{
"balance": 1000
}
$ lncli newaddress p2wkh
{
"address": "4NyXCayeJqEvbNXeNTfjGMzDC55r3q8sA1xPa"
}
$ lncli sendcoins 4NyXCayeJqEvbNXeNTfjGMzDC55r3q8sA1xPa 1000
[lncli] rpc error: code = 2 desc = decoded address is of unknown format
For your reference, I'm posting the error I'm getting when trying to build on Mac OS X:
$ go get github.com/LightningNetwork/lnd
# github.com/lightningnetwork/lnd/lnwire
src/github.com/lightningnetwork/lnd/lnwire/lnwire.go:566: too many arguments in call to wire.NewTxIn
# github.com/lightningnetwork/lnd/uspv
src/github.com/lightningnetwork/lnd/uspv/eight333.go:308: undefined: wire.InvTypeWitnessBlock
src/github.com/lightningnetwork/lnd/uspv/eight333.go:386: undefined: wire.InvTypeWitnessBlock
src/github.com/lightningnetwork/lnd/uspv/eight333.go:388: undefined: wire.InvTypeFilteredWitnessBlock
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:29: tx.WTxSha undefined (type *wire.MsgTx has no field or method WTxSha)
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:62: cb.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:64: cb.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:68: cb.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:70: cb.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/uspv/hardmode.go:74: cb.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/uspv/init.go:53: undefined: wire.SFNodeWitness
src/github.com/lightningnetwork/lnd/uspv/init.go:53: too many errors
# github.com/lightningnetwork/lnd/lnwallet
src/github.com/lightningnetwork/lnd/lnwallet/channel.go:89: too many arguments in call to wire.NewTxIn
src/github.com/lightningnetwork/lnd/lnwallet/channel.go:198: commitTx.TxIn[0].Witness undefined (type *wire.TxIn has no field or method Witness)
src/github.com/lightningnetwork/lnd/lnwallet/channel.go:200: too many arguments in call to txscript.NewEngine
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:263: undefined: waddrmgr.WitnessPubKey
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:263: too many arguments in call to wallet.Manager.NextInternalAddresses
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:519: too many arguments in call to wire.NewTxIn
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:531: undefined: waddrmgr.WitnessPubKey
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:531: too many arguments in call to l.Wallet.NewChangeAddress
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:575: undefined: waddrmgr.WitnessPubKey
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:575: too many arguments in call to l.Wallet.NewAddress
src/github.com/lightningnetwork/lnd/lnwallet/wallet.go:575: too many errors
2016-07-07 12:07:22 dnsseed thread start
2016-07-07 12:07:22 Loading addresses from DNS seeds (could take a while)
2016-07-07 12:07:22 0 addresses found from DNS seeds
2016-07-07 12:07:22 dnsseed thread exit
.......
2016-07-07 12:08:23 Adding fixed seed nodes as DNS doesn't seem to be available.
2016-07-07 12:08:23 connect() to 37.34.48.17:28901 failed after select(): Connection refused (111)
2016-07-07 12:08:24 connect() to 37.34.48.17:28901 failed after select(): Connection refused (111)
2016-07-07 12:08:25 connect() to 37.34.48.17:28901 failed after select(): Connection refused (111)
and the connections keep being refused.

Could I use another blockchain instead of segnet4?
Currently, when setting up lnd, users are required to manually enter the RPC credentials for btcd. This can be a bit cumbersome and makes setting up lnd initially a bit clunky.

As an enhancement, lnd can attempt to look up the app data directory for btcd and parse out the current RPC credentials. The daemon would then default to this route for automatically configuring the btcd web-sockets RPC client currently within lnd.
gRPC allows us to easily create fully complete libraries for the list of supported languages using the protoc compilation tool. Currently the compiled protos for Go reside within the lnrpc package. After the beta release, we should provide pre-compiled code for various other languages in order to allow developers to easily start to experiment/develop against the API without having to install protoc and compile the stubs for their language by hand.
I propose we make another repository outside of this one (but within the organization) which will house the pre-compiled language stubs (client only however).
Currently within the codebase, elkrem is used as a revocation tree to derive the revocation hashes we use during the commitment updates. In the current draft of the spec, shachain is used instead as it's a bit more efficient and a bit more generalized.

There's currently a shachain package within the project, however it is incomplete and doesn't have any tests.

The shachain package within the project should be finalized. The implementation should be finished with tests exercising the important cases like derivation, tree folding (as leaves are exposed), and quickly deriving a particular state number.

All usage of elkrem within the codebase should be replaced with usage of shachain, including within channeldb. The elkrem package should be removed.
Total number and min/max/average size for each type of message. This may be useful for determining routing performance for different routing options.
Currently within the daemon, we use lndc for establishing a confidential+authenticated link with outgoing/incoming peers. The scheme implemented by lndc is very simple, based solely off of ECDH followed by a hash-based proof of identity.

Ultimately, we should instead use an existing peer-reviewed scheme for our enc+auth protocols. One such scheme which looks promising is Noise. Noise is similar to the current protocol implemented within lndc, but rather than sending the hash-based proof over the link, completion/termination is indicated by the ability (or inability) to properly decrypt (or just check the MAC of) an encrypted payload which is encrypted with a key derived from an incremental Triple-DH based key derivation function. Additionally, the framework is very flexible and supports several handshakes with varying levels of security and tradeoffs. There's an existing implementation of Noise in golang that we may want to use. Alternatively, we can implement a stripped-down version of Noise that only supports our target handshake and cipher-suite.
(all handshakes assume the key of the responder is pre-transmitted)
Currently the fundingManager and the rpcServer will happily allow either a caller or remote peer to create multiple pending channels at a time. This behavior should instead be restricted to only allow a single pending channel at a time per-peer. Such a constraint acts as a defense against a slow-loris-like DoS attack wherein a peer creates hundreds of thousands of pending channels, never intending to complete the funding workflow for them.
Steps to completion:
- The fundingMgr should reject all requests to either process or initialize a new channel funding workflow if one already exists for the targeted peer.
- lnwire.ErrorGeneric messages should be sent to the offending peer if the constraint is violated.

Currently, the only concrete instantiation of the ChainNotifier interface is BtcdNotifier, an implementation which relies on a websockets connection to btcd's RPC interface. In the future we'll be adding implementations of/modifying the WalletController and related interfaces so that they don't necessarily speak to a btcd full-node directly over the RPC interface. One such future implementation is a form of the interfaces that talks directly to the p2p network. Such variants of the interfaces are necessary once we integrate our SPV wallet into the daemon as a more lightweight option.

Apart from integrating our SPV wallet into the daemon, future implementations of the core interfaces may not necessarily communicate directly with btcd, meaning one won't be available for the daemon to query. Therefore a pure p2p version of the ChainNotifier is necessary.
The implementation should be carried out in two phases:
A new sub-package should be added within the chainnotifier package, named something along the lines of p2pnetwork (open to suggestions). This new package should implement the entire ChainNotifier interface using nothing more than a connection to the p2p network, and possibly a small amount of on-disk storage.

The following libraries will prove to be integral during implementation:
- btcd's peer package: provides full programmatic access to the p2p network. It handles parsing the wire protocol and the version dance, and provides a set of callbacks to allow users to asynchronously drive a connection to a Bitcoin peer.
- btcd's connmgr package: complements the peer package by providing a means to handle persistent connections, create new connections, use the DNS seeds, etc. This package will be used to maintain a set of connections to discovered peers within the p2p network.

In order to implement confirmation notifications:
- Blocks should be requested using the MSG_BLOCK inv flag, rather than MSG_WITNESS_BLOCK. This saves bandwidth as we have no need for the witness data contained in blocks.

In order to implement spentness notifications:
Hi all,
I've pulled out the uspv package and I'm working to beef it up, hopefully to use in OpenBazaar. I'll report bugs as I find them, but the first major one I've come across is transactions being sent by the remote peer before the merkle block is processed.
IngestMerkleBlock parses the block and adds the txid to OKTxids, and then TxHandler checks that the txid exists in the map before ingesting the tx. If it doesn't exist, the tx is rejected. What I'm seeing is TxHandler fires before IngestMerkleBlock is finished, causing some txs to be rejected.

I've set it to lock OKMutex when processing the merkle block, which prevents most transactions from being rejected and also prevents the blocks-out-of-order problem. But I'm still seeing some txs being rejected, though I'm not yet sure of the cause.
Currently within the ChainNotifier's notification system, a subtle race-condition can cause a notification to never be dispatched.

A scenario which can lead to such a "zombie" notification is as follows:
- A notification is registered for when a txid reaches a confirmation.
- Before the ChainNotifier receives the request, a block is connected to the main-chain which confirms the transaction.
- The request is then received by the ChainNotifier and added to its list of watched txids.

As a result of the above ordering, the notification would then never be dispatched. I haven't seen this happen in the wild yet, but it's definitely a possibility. Therefore, in order to ensure that something like this can never happen, the ChainNotifier should consult historical data to see if the notification can immediately be dispatched.
Within the ChainNotifier interface, two types of notification currently exist: spend notifications and confirmation notifications. The fix should be rather straightforward for the current btcd-backed (btcdnotify) default implementation, since the btcd node is assumed to have a utxo index and access to the entire historical blockchain.

Fix for confirmation notifications: BtcdNotifier should use the GetRawTransactionVerbose method on the btcrpcclient to check if the transaction has already been confirmed (based on the number of requested confirmations). If so, then a goroutine should be launched to dispatch the notification immediately.

Fix for spentness notifications: BtcdNotifier should use GetTxOut to check if the output is still a member of the utxo set. If not, then the notification should be dispatched immediately.

Currently within the state machine (lnwallet/channel.go), only the current settled balances are updated before committing the channel to disk. Ultimately, the other fields such as satoshis sent/received, total fees paid, etc., should be updated as well.
Currently, they are hardcoded.
Within the daemon, proper dispatch and handling of events/notifications can be crucial in order to ensure that funds aren't lost/suspended due to a missed event.
Currently in the default implementation of the ChainNotifier, any events which are registered but not dispatched are lost during restarts, as they're only persisted in memory. In order to reconcile this, a persistent notification queue should be added to the database which tracks any outstanding notifications which have been registered with the ChainNotifier. Additionally, the notification queue should store the block hash+height at which each notification was initially registered.

The notification queue should be updated after each notification is dispatched in order to ensure at-least-once delivery of notifications. After a restart, before accepting any new channel updates or incoming channel fundings, the height/hash of the queue should be checked against the current best tip to decide if a manual scan and manual notification dispatch is required. As a result, the ChainNotifier interface should be extended to allow manual triggering of notifications.
An alternative to this approach is to push the bulk of the logic into the contractual agreement between the ChainNotifier and the rest of the sub-systems. However, it may be desirable to keep the ChainNotifier stateless.
Similarly, the utxoNursery's state needs to be persisted in order to ensure that all time-locked outputs are swept in a timely manner. As implemented now, if the daemon restarts after a unilateral channel closure, then this state is lost, and if a prior state was broadcast, a counter-party may be able to steal funds.

Action items in this issue:
- Ensure the state of the utxoNursery is persisted between restarts.
- Persist registered notifications within the ChainNotifier, or add an at-least-once queue to ensure reliable delivery and also re-registration of notifications.

Just a Lightning newbie trying to wrap my head around the work that's going on in this space.... What's the relationship between lnd and ElementsProject/lightning? Thanks!
The daemon currently supports opening/closing channels and conducting payments over several channels per-peer. However, link-layer payment fragmentation has yet to be implemented.
Link-layer payment fragmentation arises when an HTLC is to be forwarded on an out-going interface, but neither of the links (channels) have sufficient capacity. In order to clear+settle the payment, the payment hash may have to be distributed amongst several links.
Such logic would need to be inserted both when sending out locally initiated payments, and when forwarding multi-hop HTLC's within the htlcSwitch.
Currently, the routing table is lost when lnd is restarted.
Many of the currently exported packages within lnd likely aren't meant for public consumption. Therefore we may want to investigate switching to internal packages for them. Internal packages cannot be consumed by code outside this repo, but are visible to all packages within the repo.
Packages to consider making internal:
shachain
elkrem
lndc
channeldb
(insert others here)
Within the daemon there exists a base-wallet interface called the WalletController, which the LightningWallet uses in a composite manner to create a fully-fledged Lightning-enabled wallet. The interface itself is rather minimal and should be able to accommodate easily dropping an alternative wallet into the daemon.

The current, and only (at the time of writing this issue), concrete implementation of the WalletController is BtcWallet, which is implemented by an embedded instance of btcwallet. Due to the current architecture of btcwallet, an active btcd instance is required for the wallet to function correctly.

Due to popularity, a WalletController interface implementation for Bitcoin Core's wallet should be added. The implementation of the interface should be possible entirely over Bitcoin Core's RPC interface. btcrpcclient is capable of connecting to Bitcoin Core instances and should be sufficient in helping to implement the interface. A simple layer of persistent storage within the WalletController implementation may be required in order to fully implement all the features of the interface.
This issue is dependent on #17 as the PR implements the necessary refactoring within the daemon to allow dropping in multiple/alternative wallets.
Not a big deal, but figured I might as well post this issue.
Currently the uspv package has these build errors:

uspv/eight333.go:21: undefined: wire.TestNetL
uspv/eight333.go:26: undefined: chaincfg.TestNetLParams

Other than that, the only other package build issue I think others would run into when trying to do a go get of the .../... variety is the too many errors flavor of error in lnstate.
Cross-tracking here, see lightningnetwork/lightning-onion#4
Currently the RPC server is completely unauthenticated, responding to any and all requests sent to it. Such behavior is fine for the current pre-alpha state of the daemon, however future releases should introduce a mechanism for authenticating privileged peers to the RPC server.
Two possible paths forward are first a simple password-based authentication mechanism, and a more advanced finer grained auth system which uses macaroons. The first path leads to ACL based security policies, while the second path leads to security policies implemented via bearer credentials.
In either case, gRPC's credentials package will need to be consulted in order to discern exactly how we'd like to integrate a proposed authentication mechanism into the daemon.
A tutorial for adding authentication into gRPC can be found here, and may prove useful in fixing this issue.
The password-based mechanism is likely the simplest option to start out with initially. Fields would be added to the configuration for one or many rpcusers, each with an rpcpasshash field belonging to it. Rather than storing the password in plaintext within the configuration, a hash of the password should instead be stored. This will likely utilize gRPC's transport-based security, rather than the per-RPC auth methods.

The second proposed option is more involved, but is much more advanced and provides a very high degree of flexibility. Macaroons are decentralized bearer credentials with support for delegation, attenuation, and several other useful features. In this model, the per-RPC auth method would be used, and created macaroons can have fine-grained access policies. For example, a macaroon could be created that only allows sending 50,000 satoshis each day, over a particular channel, to a set of white-listed peers. Continuing, that macaroon can then be given to a friend, with a modification restricting it to only 10,000 satoshis a day. There's an existing macaroon implementation written in Go we may want to use. However, it's a bit "heavy", so we may want to use a lightweight custom implementation for our purposes.
The current gRPC API for both opening and closing transactions allows the initiator to receive notifications with progress updates. These updates may indicate that the channel has N confirmations left before it can be considered open or closed. Such updates may be useful for rendering UI updates, or for prioritizing resources within larger applications that build a layer above LN.

In order to implement this feature within the ChainNotifier, the existing interface/struct definitions should be modified to also include an Updates channel in the ConfirmationEvent struct. The definition would then be modified to be:
type ConfirmationEvent struct {
Confirmed chan int32 // MUST be buffered.
Updates chan int32 // <-- the field to be added
NegativeConf chan int32 // MUST be buffered.
}
The Updates channel would then be sent upon in the main dispatch loop of the ChainNotifier as each new block comes in.
At the Lightning Summit, we decided to switch the design of the commitment transactions to support the proposed blinded channel outsourcing scheme. The commitment transaction itself, the secrets generated as part of a new channel, the funding workflow messages, and the channel state-machine itself need to be updated as a result.
Currently within the daemon, the htlcSwitch manages the teardown/setup of onion circuits generated during the forwarding of multi-hop payments. However, as implemented currently, a well-timed daemon restart may lead to possible loss of funds. This is due to the fact that the onion circuits are currently kept in volatile memory, and not persisted to disk before forwarding.
Onion circuits are uniquely defined with the following struct:
type onionCircuit struct {
rHash [32]byte
clearNode *btcec.PublicKey
settleNode *btcec.PublicKey
}
where:
- clearNode is the node that originally forwarded the HTLC to us, clearing it on the incoming channel
- settleNode is the node we sent the outgoing HTLC to

Once the settleNode sends us an HTLC settle message, the switch needs to forward the settle message to the clearNode in order to claim the funds, and in the process gain the fee related to the HTLC.
Persist the htlcManager's paymentCircuits map to disk. Ideally, the map should be removed altogether and modifications to the prior map should instead take the form of disk accesses, in order to side-step any potential consistency issues related to maintaining the state both in memory and on disk.
In order to generate and compile the protobufs in lnrpc (using gen_protos.sh), versions of protoc-gen-go and grpc must be compatible. This should be rarely encountered, but will be necessary for developers looking to extend the lnd rpcs.
protoc-gen-go at commit df1d3ca07d2d07 is compatible with grpc version 1.0.0, as both use support package version 3.
protoc-gen-go at commit 8ee79997227b is compatible with grpc version 1.0.4, as both use support package version 4.
protoc version 3.0.0 works in both scenarios.
I originally proposed that we use Sphinx as the base of the onion routing within the network. I then created a working implementation of Sphinx, which can be found here. Since then, Christian Decker has begun drafting an initial specification for the mix-header format we'll use initially. The current proposal modifies my original version slightly to include a per-hop payload amongst other things. At the time of writing this issue, he's created a fork of my original repo modifying the OG code to implement the latest version of the draft specification.
The remaining steps to fully integrate onion routing are the following:
- Modify the routingMgr to create a mix-header and place it within the outgoing htlcAdd packet.
- Modify the HTLCAddRequest message to switch to a fixed-size field for the onion blob, since we now know the initial size (modified by max graph diameter, security parameter, etc).
- When processing CommitRevocation's from the upstream peer, the dest field on the htlcPkt sent to the htlcSwitch should be set to the next hop parsed from the mix-header. Additionally, the fee information parsed from the per-hop payload should be examined in order to set the amount the next hop should forward.

Currently as implemented, within bolt both appending to the revocation log for the counter-party and updating our "best state" for the current channel are done in distinct database transactions. As a result, due to bolt's single-writer model, only a single channel can be updated at a time. In this format, on-disk updates have the potential to be the bottleneck during concurrent channel updates.
Instead, bolt should only be used to store the initial meta-data required to re-construct the initial channel state. The revocation log and the current tip of our commitment chain should be stored in per-channel flat-files.

Alternatively, this can be partially mitigated with minor changes by using bolt's DB.Batch() method for all channel related updates. If this method is used, bolt will internally attempt to coalesce several pending transactions into a single batch. As a result, throughput can be improved with a trivial code modification:
- Use DB.Batch() for all channel related updates. There may be some intricacies around handling failed batches.

There are several areas within the daemon where a goroutine is launched to asynchronously complete a task. Currently the deployment of such goroutines is unbounded. Semaphores should be added in various areas in order to control the level of concurrency dedicated to these outstanding async requests.
Grepping for "semaphore" within the repo yields:
./fundingmanager.go:538: // TODO(roasbeef): semaphore to limit active chan open goroutines
./rpcserver.go:461: // TODO(roasbeef): semaphore to limit num outstanding
./server.go:327: // TODO(roasbeef): semaphore to limit the number of goroutines for
./server.go:402: // TODO(roasbeef): server semaphore to restrict num goroutines
In golang, a simple semaphore can be created by defining a typedef around a chan struct{}. The new type would provide acquire and release methods, along with a constructor that initializes the semaphore with a parameter dictating the level of buffering in the underlying channel.
Currently, there isn't an upgrade mechanism in place within the database, which makes the current schema rigid and non-upgradeable. To remedy this, a new bucket should be added to the database (metaBucket) which stores keys that house meta-data related to the current version/state of the database.

The createChannelDB method should be modified to create this new bucket+key during initialization. In a similar vein, when opening the database, the dbVersionKey should be read from the metaBucket and compared against a compile-time constant which stores the latest known database version. If the two versions differ (after some future database update/migration), logic should be inserted to convert the old format to the newly defined format. Additionally, it may be advisable to first make a complete copy of the current database using bolt db's WriteTo method as a fail-safe in case of bugs or failures during the database migration.
Currently within the daemon, we keep records of all payments we've requested and fulfilled successfully. The code for this lives in channeldb/invoices.go and also within invoiceregistry.go. With this code in place, users/merchants/exchanges are able to maintain full records of all pending and settled payments they've requested.

However, records for the opposite direction don't currently exist within the daemon. By the opposite I mean records tracking all outgoing payments we've sent. This excludes outgoing payments sent due to the forwarding of HTLC's; as those may happen automatically based on daemon settings, they shouldn't be recorded.

It's important to maintain records of all user-triggered outgoing payments for the following reasons:
A new bucket in the database should be created in order to house payments we've successfully settled.
channeldb.Invoice
struct and the related serialization methods can likely be re-used. In addition to the information stored within channeldb.Invoice
, the following attributes for outgoing payments should be tracked (for each payment):
The ultimate struct will likely use struct embedding for composition purposes.
Within the SendPayment
RPC, if the goroutine spawned to send the payment receives a non-nil error, then the record described above should be written to disk.
A new RPC should be added to `lnrpc` and implemented within `rpcserver.go` which returns a list of all outgoing payments.
A combination of the `sequence` and `lockTime` fields within the commitment transaction can be used to compactly encode the current height (state number) of a particular commitment transaction. With such a state-hint, if a prior commitment transaction is broadcast by the counterparty, then the necessary revocation hash/key can be located in constant time. State-hints save us from performing linear scans through the stored revocation PRF.
In order to obfuscate exactly which state was broadcast to the outside world, it was suggested at the Lightning Summit that the encoded state should be XOR'd with a secret known only to the channel participants as an intermediate step before encoding/decoding.
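The encode/XOR/decode round-trip could be sketched as below. This is a simplified illustration, assuming the state number is split as 24 low bits in each field; a real implementation must also preserve the flag/high bits of `sequence` and `lockTime` that carry consensus meaning (e.g. relative-timelock semantics), which this sketch ignores.

```go
package main

import "fmt"

// maxStateHint is the largest state number encodable in the 24 low bits
// of sequence plus the 24 low bits of lockTime (48 bits total).
const maxStateHint = uint64(1)<<48 - 1

// encodeStateHint splits the XOR-obfuscated state number across the
// sequence and lockTime fields (24 low bits each). The obfuscator is a
// secret derived from data known only to the two channel participants.
func encodeStateHint(state, obfuscator uint64) (sequence, lockTime uint32) {
	hidden := (state ^ obfuscator) & maxStateHint
	sequence = uint32(hidden >> 24)  // high 24 bits
	lockTime = uint32(hidden & 0xFFFFFF) // low 24 bits
	return
}

// decodeStateHint reverses encodeStateHint, recovering the state number
// in constant time so the proper revocation key can be looked up.
func decodeStateHint(sequence, lockTime uint32, obfuscator uint64) uint64 {
	hidden := uint64(sequence)<<24 | uint64(lockTime)
	return (hidden ^ obfuscator) & maxStateHint
}

func main() {
	const obfuscator = 0xdeadbeefcafe // stand-in for the shared secret
	seq, lt := encodeStateHint(42, obfuscator)
	fmt.Println(decodeStateHint(seq, lt, obfuscator)) // recovers 42
}
```

An outside observer sees only the obfuscated bits, so the broadcast state number isn't directly revealed on-chain.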
`sequence` and `lockTime` fields in the commitment transaction

Currently within the wallet's `ChannelReservation` workflow, any reservations which are partially started but not ultimately completed are never cleaned up, meaning the allocated resources remain active until a daemon restart.
A "zombie sweeper" goroutine should be added which manually cancels reservations which haven't progressed for a certain "idle" period of time.
Currently within the commitment update state-machine, all HTLC adds are accepted. This is an issue since the addition of a new HTLC may push the commitment transaction over the consensus-enforced weight (formerly called cost) limit for transactions (4,000,000). An additional factor to be considered is the weight of a future stealing transaction that sweeps all the HTLC's within an invalidly broadcast commitment transaction.
The LCP state machine should be modified to reject adding an HTLC if it would put either the commitment transaction or a future stealing transaction over the current weight limit.
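The check could be sketched as below. The per-component weights are placeholder figures, not the real script sizes; the actual numbers must be derived from the exact commitment and sweep transaction serializations.

```go
package main

import "fmt"

// Consensus weight limit for a transaction (4,000,000 weight units).
const maxTxWeight = 4_000_000

// Illustrative per-component weights (NOT the real figures).
const (
	commitBaseWeight = 724 // commitment tx without any HTLC outputs
	htlcOutputWeight = 172 // marginal weight per HTLC output
	sweepBaseWeight  = 300 // stealing/justice tx overhead
	sweepInputWeight = 452 // marginal weight per swept HTLC input
)

// canAddHTLC reports whether adding one more HTLC keeps BOTH the
// commitment transaction and a hypothetical stealing transaction (which
// sweeps every HTLC output) under the weight limit.
func canAddHTLC(numHTLCs int) bool {
	commitWeight := commitBaseWeight + (numHTLCs+1)*htlcOutputWeight
	sweepWeight := sweepBaseWeight + (numHTLCs+1)*sweepInputWeight
	return commitWeight <= maxTxWeight && sweepWeight <= maxTxWeight
}

func main() {
	fmt.Println(canAddHTLC(10))    // plenty of headroom
	fmt.Println(canAddHTLC(20000)) // sweep tx would exceed the limit
}
```

Note the sweep transaction is the binding constraint here: it pays a full input's weight per HTLC, so it hits the limit well before the commitment transaction does.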
Currently, `lncli` (`cmd/lncli`) only supports fully specifying each argument for each command. This parsing should be extended to also allow positional arguments, as well as a combination of positional and keyword arguments.
Steps:
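The mixed positional/keyword resolution could work along these lines. The `argResolver` helper is purely illustrative (it is not lncli's actual API): an explicit flag wins, and otherwise the next unconsumed positional argument fills the slot.

```go
package main

import "fmt"

// argResolver lets a command accept either --key=value flags or bare
// positional arguments, consumed in declaration order.
type argResolver struct {
	flags      map[string]string // parsed --key=value pairs
	positional []string          // remaining bare arguments
	next       int               // index of next unconsumed positional
}

// get returns the value for name: an explicit flag wins, otherwise the
// next unconsumed positional argument is used.
func (a *argResolver) get(name string) (string, bool) {
	if v, ok := a.flags[name]; ok {
		return v, true
	}
	if a.next < len(a.positional) {
		v := a.positional[a.next]
		a.next++
		return v, true
	}
	return "", false
}

func main() {
	// e.g. "lncli sendpayment --dest=abcd 1000" (keyword + positional mix)
	r := &argResolver{
		flags:      map[string]string{"dest": "abcd"},
		positional: []string{"1000"},
	}
	dest, _ := r.get("dest")
	amt, _ := r.get("amt")
	fmt.Println(dest, amt) // abcd 1000
}
```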
Currently within the `htlcSwitch`, handling of forwarding failures isn't properly implemented. In the case of any of the errors described below, forwarding will either fail silently, trigger an error, lead to an invalid commitment state, or leave a "stuck" HTLC.
The following error cases need to be properly handled:
- The `htlcSwitch`'s `htlcPlex` channel has insufficient capacity to send the HTLC over.
- The destination of an `htlcPacket` sent over the `htlcPlex` channel doesn't exist.

This issue is related to #79, as the `htlcManager` goroutine which receives the error message will need to generate an onion-wrapped error message to the peer who originally sent the HTLC.
Additionally, the `htlcSwitch` will need to be extended to handle cancel requests within the switch statement after a read from the `htlcPlex` channel.
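The extended switch statement could be sketched as below. The packet kinds and `plexLoop` helper are hypothetical stand-ins; the point is the new `cancelHTLC` arm, which propagates the cancellation backwards instead of leaving the HTLC stuck.

```go
package main

import "fmt"

// packetKind enumerates the (hypothetical) packet types flowing over
// the htlcPlex channel.
type packetKind int

const (
	addHTLC packetKind = iota
	settleHTLC
	cancelHTLC
)

type htlcPacket struct {
	kind  packetKind
	rhash [32]byte // payment hash identifying the HTLC
}

// plexLoop sketches the switch's main loop, extended with a cancelHTLC
// arm that unwinds a pending add rather than dropping it silently.
func plexLoop(plex <-chan *htlcPacket, done chan<- string) {
	for pkt := range plex {
		switch pkt.kind {
		case addHTLC:
			done <- "forwarded add"
		case settleHTLC:
			done <- "forwarded settle"
		case cancelHTLC:
			// New arm: propagate the cancellation backwards so the
			// upstream peer can remove the HTLC from its commitment.
			done <- "cancelled htlc"
		}
	}
	close(done)
}

func main() {
	plex := make(chan *htlcPacket, 1)
	done := make(chan string, 1)
	go plexLoop(plex, done)

	plex <- &htlcPacket{kind: cancelHTLC}
	fmt.Println(<-done) // cancelled htlc
}
```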
Currently within the daemon, re-orgs are mostly unaccounted for with respect to channel operation.
There are two cases that need to be accounted for:
The current specification draft refers to channels globally via a short identifier rather than the full channel point (utxo). The rationale for the switch lies in the space savings on the wire when advertising new channels, as well as within the nested header for messages which update the channel state (add/settle/cancel HTLC's, update state, close, errors).
The short ID format is as follows:
`block_num || tx_num || output_index`
Switch HTLC structure back to the format which was originally in the paper.
See: https://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000339.html
Currently, all channels are created with exactly the same hard-coded values for the channel initialization parameters. These values should instead be placed within the `lnwallet.Config` struct and parsed accordingly within `lnd`, ultimately threading the values into the wallet.
A new RPC should be added called `queryroute` or `findpath` (something along those lines). The RPC should query the `routingManager` for a potential path to a particular `pubkey` (a node on the network). The RPC should also factor in channel capacity information (as flows may be insufficient).
As a stop-gap, and to force us to migrate to something else in the future we decided at the Lightning Summit to temporarily use IRC as a discovery mechanism for nodes on the network. Additionally, things like identity key rotations, or changes in the total capacity of a channel are to be advertised over IRC in the format initially defined within the spec.
Christian Decker is currently working to document the basic scheme they're using for discovery and channel advertisement. A new sub-system should be added which observes/drives an IRC bot that feeds node discovery and channel data into the `routing` package. There are several simple, well-maintained frameworks for IRC interaction in Go; a suitable one should be selected in order to complete this task.
Channel advertisements seen over IRC should be authenticated, then added to the routing table. Any further updates (capacity changes, channel closure, etc.) should also be applied accordingly.
Once the format used in `c-lightning` is finalized, this ticket will be updated to either copy-pasta the format in-line or link to some document on The Internets.