Command format: close id [unilateraltimeout] [destination]
Close the channel with peer {id}, forcing a unilateral
close after {unilateraltimeout} seconds if non-zero, and
sending the to-local output to {destination}. If
{destination} isn't specified, it defaults to an address
from lightningd's own wallet.
Also change pylightning: update the `close` API to support the `destination` parameter.
`shutdown_scriptpubkey[REMOTE]` is the original remote_shutdown_scriptpubkey;
`shutdown_scriptpubkey[LOCAL]` is the script used for the "to-local" output on `close`; the default is generated from `final_key_idx`.
Store `shutdown_scriptpubkey[LOCAL]` in the wallet.
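As a rough illustration of how the two sides can be tracked (a sketch only; the enum, the struct layout and the helper name are assumptions here, not the exact lightningd code):

/* Sketch only: names and layout are illustrative. */
enum side { LOCAL, REMOTE, NUM_SIDES };

struct channel_close_info {
	/* [REMOTE]: the peer's original remote_shutdown_scriptpubkey.
	 * [LOCAL]: the script our "to-local" output pays to on `close`;
	 * this is what gets stored in the wallet DB so it survives restarts. */
	unsigned char *shutdown_scriptpubkey[NUM_SIDES];
};

/* Hypothetical helper: when `close` gives no {destination}, derive the
 * default LOCAL script from the wallet's final_key_idx. */
unsigned char *scriptpubkey_from_final_key_idx(void *ctx, unsigned long long final_key_idx);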
Currently the only source of amount_asset is the value getter on a tx output,
and we don't hand it around very far (mainly ignoring it if it isn't the
chain's main currency). Eventually we could bubble these up to the wallet, use
them to select outputs, or actually support assets in the channels.
Since we don't hand them around too widely, I thought it was ok for them to be
pass-by-value rather than having to allocate them and pass them around by
reference. They're just 41 bytes currently, so the overhead should be ok.
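For a sense of scale, 41 bytes is just an explicit value plus a 33-byte asset tag; a minimal sketch, with field names assumed rather than copied from the tree:

#include <stdint.h>
#include <string.h>

/* Sketch: a 64-bit value plus the 33-byte asset tag used by
 * Elements-style chains is 8 + 33 = 41 bytes, so copying it by value
 * is cheap and avoids tal allocation and lifetime tracking. */
struct amount_asset {
	uint64_t value;
	uint8_t asset[33];
};

/* Returned by value: no allocation, nothing for callers to free. */
static struct amount_asset make_asset_amount(uint64_t value, const uint8_t tag[33])
{
	struct amount_asset a;
	a.value = value;
	memcpy(a.asset, tag, sizeof(a.asset));
	return a;
}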
Signed-off-by: Christian Decker <@cdecker>
We now have a pointer to chainparams which makes valgrind fail if we do
anything chain-specific before setting it.
Suggested-by: Rusty Russell <@rustyrussell>
I was seeing some accidental pruning under load / Travis; in
particular, we stopped accepting channel_updates because they were 103
seconds old. But making the prune timer too long makes the prune test
untenable, so restore a separate flag that this test can use.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really variable.
Interestingly, our pruning test was always kinda broken: it had to wait
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing: cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the other place, besides `bitcoin_tx` itself, where we create `bitcoin_tx`
instances, so add the chainparams context as soon as possible. Sadly I can't weave
the chainparams into the deserialization code, since that'd need to change all the
generated wire code as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We normally reconnect after 1 second: add a flag to say "wait 60 seconds
instead". This will be used in the next patch, which handles "soft" errors.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@ZmnSCPxj points out that the function is unsafe, since omitting the bool
parameter still compiled. Make it two separate functions, each
with a distinctive name, so every caller has to be fixed.
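The shape of the fix, with illustrative names (not the actual lightningd functions):

#include <stdbool.h>

struct channel;

/* Before (sketch): a boolean flag whose meaning is invisible at the
 * call site, and easy to omit or get backwards. */
void fail_channel(struct channel *channel, bool reconnect_slowly);

/* After (sketch): two distinctively named functions, so every existing
 * caller stops compiling and has to choose explicitly. */
void fail_channel_reconnect_now(struct channel *channel);
void fail_channel_reconnect_later(struct channel *channel);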
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's only one caller which used the flag.
As a side-effect, we'll now try to reconnect even if the previous owner
was NULL (which mainly affects the case where we couldn't create the subd).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This takes the guesswork out of `drop_to_chain` and allows us to annotate the
last_tx consistently.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are always handed to subdaemons as a set, so group them. This makes
it easier to add an fd (in the next patch).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make it a bit easier to track mutual channel closures by
adding the broadcast txid to the listpeers billboard.
Since lightningd manages the 'identity' of the closing tx, we need
to send it back to closingd so it can update the billboard
appropriately.
Travis caught an error where this happened: when closingd reconnected, it
sent the reestablish message without the option_dataloss_protect
fields. That causes the peer to fail the channel!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
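The idiom is roughly this (a self-contained sketch; the real checked-arithmetic helpers live in common/amount.h and may differ in name and signature):

#include <stdbool.h>
#include <stdint.h>

struct amount_msat { uint64_t millisatoshis; };

/* Checked addition: on overflow, return false instead of wrapping, so
 * routing can just skip the channel rather than pre-pruning anything
 * with a ridiculous fee. */
static bool add_msat_checked(struct amount_msat *sum,
			     struct amount_msat a, struct amount_msat b)
{
	if (a.millisatoshis > UINT64_MAX - b.millisatoshis)
		return false;
	sum->millisatoshis = a.millisatoshis + b.millisatoshis;
	return true;
}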
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is prep work for when we sign htlc txs with
SIGHASH_SINGLE|SIGHASH_ANYONECANPAY.
We still deal with raw signatures for the htlc txs at the moment, since
we send them like that across the wire, and changing that was simply too
painful (for the moment?).
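The prep itself is roughly a struct like the following (a sketch; the field names and the compact representation are assumptions, not the exact bitcoin/signature.h definitions):

#include <stdint.h>

/* Standard Bitcoin sighash flag values. */
enum sighash_type {
	SIGHASH_ALL = 1,
	SIGHASH_NONE = 2,
	SIGHASH_SINGLE = 3,
	SIGHASH_ANYONECANPAY = 0x80,
};

/* Sketch: carry the sighash type alongside the signature instead of
 * assuming SIGHASH_ALL everywhere, so a later change can simply set
 * SIGHASH_SINGLE|SIGHASH_ANYONECANPAY when signing htlc txs. */
struct bitcoin_signature {
	uint8_t raw[64];		/* compact ECDSA signature */
	enum sighash_type sighash_type;
};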
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The spec says so, and it's right: with the right pattern of packet loss
(thanks Travis!) the other end can still be in channeld, waiting for our
`shutdown` message.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We could refine this later (based on the existing wallet, for example), but
this gives some estimate.
[ Rename onchain_estimates -> onchain_fee_estimates. Suggested-by: @SimonVrouwe ]
[ Factor-of-1000 fix. Reported-by: @SimonVrouwe ]
Suggested-by: @molxyz
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use feerate in several places, and each one really should react
differently when it's not available (such as when bitcoind is still
catching up):
1. For general fee-enforcement, we use the broadest possible limits.
2. For closingd, we use it as our opening negotiation point: just use half
the last tx feerate.
3. For onchaind, we can use the last tx feerate as a guide for our own txs;
it might be too high, but at least we know it was sufficient to be mined.
4. For withdraw and fund_channel, we can simply refuse.
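A condensed sketch of those four policies (helper names are illustrative; the real logic lives in lightningd and the respective subdaemons):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t feerate_t;

/* 1. Fee enforcement with no estimate: the broadest possible limits. */
static void fallback_enforcement_limits(feerate_t *min, feerate_t *max)
{
	*min = 1;
	*max = UINT32_MAX;
}

/* 2. closingd: open negotiation at half the last commitment tx's feerate. */
static feerate_t closing_negotiation_start(feerate_t last_tx_feerate)
{
	return last_tx_feerate / 2;
}

/* 3. onchaind: reuse the last tx's feerate as a guide; it may be high,
 * but it was at least enough to get mined. */
static feerate_t onchain_feerate_guess(feerate_t last_tx_feerate)
{
	return last_tx_feerate;
}

/* 4. withdraw / fund_channel: simply refuse until an estimate exists. */
static bool spend_allowed(bool have_estimate)
{
	return have_estimate;
}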
Fixes: #1836
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
connectd tells the master about every disconnection, and the master knows
whether it's important to reconnect. Just have the master invoke a new
connect command if it considers the peer important!
The only twist is timeouts: we don't want to immediately reconnect if
we've failed to connect. To solve this, connectd passes a 'delaytime'
to the master when a connection fails, and the master passes it back
when it asks for a connection.
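A sketch of that round trip (message and field names here are assumptions, not the actual connectd wire definitions):

#include <stdint.h>

/* connectd -> master: a connection attempt failed; suggest how long to
 * wait before trying again. */
struct connect_failed_sketch {
	uint8_t peer_id[33];
	uint32_t delaytime;	/* seconds */
};

/* master -> connectd: please (re)connect to this peer.  For an important
 * peer, the master simply echoes back the delaytime it was given, so the
 * retry is not immediate. */
struct connect_request_sketch {
	uint8_t peer_id[33];
	uint32_t delaytime;	/* 0 for a fresh user-initiated connect */
};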
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This was sitting in my gossip-enhancement patch queue, but it simplifies
this set too, so I moved it here.)
In 94711969f we added an explicit gossip_index so when gossipd gets
peers back from other daemons, it knows what gossip it has sent (since
gossipd can send gossip after the other daemon is already complete).
This solution is insufficient for the more general case where gossipd
wants to send other messages reliably, so replace it with the other
solution: have gossipd drain the "gossip fd" which the daemon returns.
This turns out to be quite simple, and is probably how I should have
done it originally :(
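The drain itself is conceptually just a read-until-EOF loop on the returned fd; a minimal sketch (assuming we only need to consume and forward whatever was queued):

#include <unistd.h>

/* Sketch only: slurp everything the returning daemon left on its gossip
 * fd, so any gossip (or other messages) queued after the daemon finished
 * is still seen by gossipd rather than lost. */
static void drain_returned_gossip_fd(int gossip_fd)
{
	char buf[4096];
	ssize_t r;

	while ((r = read(gossip_fd, buf, sizeof(buf))) > 0) {
		/* ...parse the length-prefixed messages in buf and handle
		 * or forward them as needed... */
	}
	/* r == 0 means EOF: the fd is fully drained. */
	close(gossip_fd);
}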
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And on channel_fail_permanent and closing (the two places we drop to
chain), we tell gossipd it's no longer important.
Fixes: #1316
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
d822ba1ee accidentally removed this case, which is important: if the
other side didn't get our final matching closing_signed, it will
reconnect and try again. We consider the channel no longer "active"
and thus ignore it, and get upset when it sends the
`channel_reestablish` message.
We could just consider CLOSINGD_COMPLETE to be active, but then we'd
have to wait for the closing transaction to be mined before we'd allow
another connection.
We can't special case it when the peer reconnects, because there
could be (in theory) multiple channels for that peer in CLOSINGD_COMPLETE,
and we don't know which one to reestablish.
So, we need to catch this when they send the reestablish, and hand
that msg to closingd to do negotiation again. We already have code
to note that we're in CLOSINGD_COMPLETE and thus ignore any result
it gives us.
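In sketch form (state names follow the commit text; the helper is hypothetical rather than the actual lightningd entry point):

struct channel;
struct reestablish_msg;

/* Other channel states elided; CLOSINGD_COMPLETE is the one we care about. */
enum channel_state { CHANNEL_ACTIVE, CLOSINGD_COMPLETE };

/* Hypothetical helper: spin up closingd again, feeding it the peer's
 * channel_reestablish so it can redo the closing_signed exchange. */
void restart_closingd(struct channel *channel, const struct reestablish_msg *msg);

static void handle_reestablish_sketch(struct channel *channel,
				      enum channel_state state,
				      const struct reestablish_msg *msg)
{
	if (state == CLOSINGD_COMPLETE) {
		/* They never saw our final closing_signed: negotiate again.
		 * We already ignore whatever result closingd reports, since
		 * we hold a fully-signed closing tx. */
		restart_closingd(channel, msg);
		return;
	}
	/* ...normal reestablish handling for active channels... */
}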
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Also report the tx and txid, and whether we closed unilaterally or
bilaterally, if we were able to close the channel.
Also make a manpage.
Fixes: #1207
Fixes: #714
Fixes: #622
We had an intermittent test failure, where the fee we negotiated was
further from our ideal than that of the final commitment transaction. It worked
fine if the other side sent the mutual close first, but not if we sent
our unilateral close first.
ERROR: test_closing_different_fees (__main__.LightningDTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/test_lightningd.py", line 1319, in test_closing_different_fees
wait_for(lambda: p.rpc.listpeers(l1.info['id'])['peers'][0]['channels'][0]['status'][1] == 'ONCHAIN:Tracking mutual close transaction')
File "tests/test_lightningd.py", line 74, in wait_for
raise ValueError("Error waiting for {}", success)
ValueError: ('Error waiting for {}', <function LightningDTests.test_closing_different_fees.<locals>.<lambda> at 0x7f4b43e31a60>)
Really, if we're prepared to negotiate it, we should be prepared to
accept it ourselves. Simply take the cheapest tx which is above our
minimum.
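The selection rule is small enough to sketch directly (types are illustrative; the real code works over the closing txs we were willing to sign):

#include <stddef.h>
#include <stdint.h>

struct closing_candidate {
	uint64_t fee;		/* total fee in satoshis */
	uint32_t feerate;	/* e.g. satoshis per kiloweight */
};

/* Take the cheapest candidate whose feerate still meets our minimum,
 * instead of insisting it be close to our ideal feerate. */
static const struct closing_candidate *
pick_closing_tx(const struct closing_candidate *txs, size_t n, uint32_t min_feerate)
{
	const struct closing_candidate *best = NULL;

	for (size_t i = 0; i < n; i++) {
		if (txs[i].feerate < min_feerate)
			continue;
		if (!best || txs[i].fee < best->fee)
			best = &txs[i];
	}
	return best;	/* NULL if nothing meets the minimum */
}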
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>