Feerate changes are asymmetric, as they can only be sent by the funder.
For the funder, the remote feerate is set on send of
commitment_signed, and the local feerate is set on receipt of
revoke_and_ack.
For the non-funder, the local feerate is set on receipt of
commitment_signed, and the remote feerate is set on send of
revoke_and_ack. In our code, these two happen together.
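As a rough sketch of that rule (illustrative only; the helper names
below are hypothetical, not the actual channeld code):
```c
#include <stdint.h>

enum side { LOCAL, REMOTE };

struct view { uint32_t feerate_per_kw; };

/* Funder: the REMOTE view picks up the new feerate when we send
 * commitment_signed... */
static void funder_send_commit_sig(struct view view[2], uint32_t feerate)
{
	view[REMOTE].feerate_per_kw = feerate;
}

/* ...and the LOCAL view only when their revoke_and_ack arrives. */
static void funder_recv_revoke_and_ack(struct view view[2])
{
	view[LOCAL].feerate_per_kw = view[REMOTE].feerate_per_kw;
}

/* Non-funder: LOCAL is set on receipt of commitment_signed, and REMOTE
 * on send of revoke_and_ack; in our code these happen together. */
static void fundee_recv_commit_sig_send_revoke(struct view view[2],
					       uint32_t feerate)
{
	view[LOCAL].feerate_per_kw = feerate;
	view[REMOTE].feerate_per_kw = feerate;
}
```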
channeld gets this right, but lightningd ignored the funder/fundee
distinction, and as a result, receipt of a commitment_signed by the
funder altered fees in the database. If there was a reconnection
event or restart, then these (incorrect) values would be used, causing
us to complain about a 'Bad commit_sig signature' and close the
channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A 'Bad commit_sig signature' was reported by @Javier on Telegram and
@DarthCoin. This was between two c-lightning peers, so definitely our fault.
Analysis of this message revealed the signature was using the wrong
feerate. I finally managed to make a test case which triggered this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Check behavior for a user-supplied upfront_shutdown_script via close_to
Header from folded patch 'fix__return__not__iff_well_close_to_the_provided_addr.patch':
fix: return not iff we'll close to the provided addr
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of next release) will offer global features, and also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
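A minimal sketch of that fold, assuming the usual big-endian feature
encoding (bit 0 in the last byte); the function name is hypothetical,
not the actual feature API:
```c
#include <stddef.h>
#include <stdint.h>

/* OR the globalfeatures and localfeatures bitsets from an init msg
 * into one merged set.  The arrays may have different lengths, so
 * align them from the end (where bit 0 lives); the caller sizes
 * `merged` to the longer of the two. */
static void fold_features(uint8_t *merged, size_t merged_len,
			  const uint8_t *global, size_t global_len,
			  const uint8_t *local, size_t local_len)
{
	for (size_t i = 0; i < merged_len; i++) {
		uint8_t b = 0;
		if (i < global_len)
			b |= global[global_len - 1 - i];
		if (i < local_len)
			b |= local[local_len - 1 - i];
		merged[merged_len - 1 - i] = b;
	}
}
```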
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It sometimes fails with a bad_gossip error because the sending node
might not have found out about the channel when it gets a
channel_update. Make sure the whole network knows everything before
we start.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since elements addresses look quite different from the bitcoin mainnet
addresses, I just added a sample to the chainparams fixture. In addition, I
extracted some of the fixed strings to reference chainparams instead.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It only had an effect if the peer didn't support option_gossip_queries, but
still, we don't want a gossip blast any more.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The test was implicitly relying on us selecting the larger output and then not
touching the smaller, leaving it there for the final `withdraw` to claim. This
ordering of UTXOs is not guaranteed, and in particular can fail when switching
DB backends. To stabilize we just need to make sure to select the change
output as well.
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't
really variable at all.
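For example, the preferred shape is roughly this (a made-up constant
name and values, purely illustrative):
```c
#if DEVELOPER
/* Short timeout so tests can exercise this quickly. */
#define PRUNE_TIMEOUT_SECONDS 60
#else
#define PRUNE_TIMEOUT_SECONDS (2 * 7 * 24 * 3600)
#endif
```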
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing, and cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
After switching to a plugin, we verify that we can fund a channel
before we attempt to contact the peer. We therefore need a funded
wallet to pass that check in this test, which verifies that
'fundchannel' cannot be called for a peer after 'fundchannel_start'
has been.
For now, we can't fully ensure that a broadcast by a third party is
caught immediately. Only once the transaction (broadcast by the third
party) is onchain can we catch it.
531c8d7d9b
In this one, we always send my_current_per_commitment_point, though it's
ignored. And we have our official feature numbers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As per BOLT #2 #message-retransmission:
if `next_commitment_number` is 1 in both the `channel_reestablish` it sent and received:
- MUST retransmit `funding_locked`
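A rough sketch of that check on reconnection (hypothetical types and
names, not the actual channeld reestablish code):
```c
#include <stdbool.h>
#include <stdint.h>

struct channel_reestablish_fields {
	uint64_t next_commitment_number;
	uint64_t next_revocation_number;
};

/* BOLT #2: if next_commitment_number is 1 in both the reestablish we
 * sent and the one we received, retransmit funding_locked. */
static bool must_retransmit_funding_locked(
	const struct channel_reestablish_fields *sent,
	const struct channel_reestablish_fields *recvd)
{
	return sent->next_commitment_number == 1
	       && recvd->next_commitment_number == 1;
}
```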
I don't remember ever seeing a bug which only showed up in VALGRIND=1 with developer
mode disabled, so don't test that, and spread out the other tests more evenly.
In addition, disable the worst-performing tests in DEVELOPER=0 mode.
Here are timings from my build machine: the worst 6 (- DEVELOPER=0
VALGRIND=0), with the same tests (+ DEVELOPER=1 VALGRIND=1) for
comparison:
-452.42s call tests/test_pay.py::test_channel_spendable
+87.69s call tests/test_pay.py::test_channel_spendable
-335.66s call tests/test_gossip.py::test_gossip_store_compact_on_load
+47.41s call tests/test_gossip.py::test_gossip_store_compact_on_load
-332.07s call tests/test_connection.py::test_opening_tiny_channel
+89.71s call tests/test_connection.py::test_opening_tiny_channel
-331.97s call tests/test_pay.py::test_channel_spendable_large
+56.23s call tests/test_pay.py::test_channel_spendable_large
-305.28s call tests/test_invoices.py::test_invoice_routeboost
+37.57s call tests/test_invoices.py::test_invoice_routeboost
-284.28s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
+49.12s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to change the API, so this makes the tests still work
across the transition (and, as a bonus, tests our backwards compat
shim).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This test is spawning 100 nodes concurrently, which is a lot even when not
running with `valgrind`, especially when executing tests in parallel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Instead of taking over the ->cmd pointer, append ourselves to a list
of cancels. This fixes the test_funding_cancel_race.
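Sketched with a plain singly-linked list (hypothetical types; the real
change reuses the existing command structures):
```c
struct command;			/* the JSON-RPC command to respond to */

/* Keep *every* pending cancel instead of stealing a single ->cmd
 * pointer, so all callers get a reply even if they race. */
struct funding_cancel {
	struct funding_cancel *next;
	struct command *cmd;
};

struct uncommitted_channel_cancels {
	struct funding_cancel *head;
};

static void register_cancel(struct uncommitted_channel_cancels *uc,
			    struct funding_cancel *fc)
{
	fc->next = uc->head;
	uc->head = fc;
}
```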
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And clean up some dev ones which actually happen (mainly by calling
channel_fail_permanent which logs UNUSUAL, rather than
channel_internal_error which logs BROKEN).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@pm47 gave a great bug report showing c-lightning sending the same
UPDATE_FEE over and over, with the final surprise result being that we
blamed the peer for sending us multiple empty commits!
The spam is caused by us checking "are we at the desired feerate?",
but then, if we can't afford the desired feerate, setting the feerate
we *can* afford, even though it's a duplicate of what we already sent.
Doing the feerate cap before we test whether it's what we already have
eliminates this.
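A minimal sketch of that reordering (hypothetical helper, not the
actual channeld code):
```c
#include <stdbool.h>
#include <stdint.h>

/* Cap the desired feerate to what we can afford *first*, and only
 * signal an update_fee if the capped value differs from what is
 * already in effect; this avoids re-sending duplicates when we can't
 * reach the desired feerate. */
static bool want_fee_update(uint32_t desired, uint32_t max_affordable,
			    uint32_t current, uint32_t *target)
{
	*target = desired > max_affordable ? max_affordable : desired;
	return *target != current;
}
```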
But the empty-commit problem was harder to find: it's caused by a
heuristic in channel_rcvd_revoke_and_ack:
```
/* For funder, ack also means time to apply new feerate locally. */
if (channel->funder == LOCAL &&
    (channel->view[LOCAL].feerate_per_kw
     != channel->view[REMOTE].feerate_per_kw)) {
	status_trace("Applying feerate %u to LOCAL (was %u)",
		     channel->view[REMOTE].feerate_per_kw,
		     channel->view[LOCAL].feerate_per_kw);
	channel->view[LOCAL].feerate_per_kw
		= channel->view[REMOTE].feerate_per_kw;
	channel->changes_pending[LOCAL] = true;
}
```
We assume we never send duplicates, so we detect an otherwise-empty
change using the difference in feerates. If we don't set this flag,
we will get upset if we receive a commitment_signed since we consider
there to be no changes to commit.
This is actually hard to test: the previous commit adds a test which
spams update_fee and doesn't trigger this bug, because both sides
use the same "there's nothing outstanding" logic.
Fixes: #2701
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Remote node may (incorrectly) not send announcement_signatures when
reconnecting, so we use a copy and can still re-announce.
Also checks that we still send our announcement_signatures when reconnecting.