A log can have a default node_id, which can be overridden on a per-entry
basis. This changes the format of logging, so some tests need rework.
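Roughly, the shape is as follows (a minimal sketch with hypothetical names, not the actual c-lightning structs):

    struct node_id;

    /* Each entry may carry its own node_id; NULL means "use the
     * log-wide default". */
    struct log_entry {
            const struct node_id *node_id;
            const char *msg;
    };

    struct logger {
            const struct node_id *default_node_id;
    };

    static const struct node_id *entry_node_id(const struct logger *log,
                                               const struct log_entry *e)
    {
            return e->node_id ? e->node_id : log->default_node_id;
    }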
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-changed: JSON API: `htlc_accepted` hook has `type` (currently `legacy` or `tlv`) and other fields directly inside `onion`.
Changelog-deprecated: JSON API: `htlc_accepted` hook `per_hop_v0` object deprecated, as is `short_channel_id` for the final hop.
If an `upfront_shutdown_script` was specified, show the address +
scriptpubkey in `listpeers`
Changelog-added: JSON API: `listpeers` channels now include `close_to` and `close_to_addr` iff a `close_to` address was specified at channel open
Quite a few of the things in the LightningNode class are tailored to their use
in the c-lightning tests, so I decided to split those customizations out into
a sub-class, and add one more fixture that just serves the class. This
allows us to override the LightningNode implementation in our own tests, while
still having sane defaults for other users.
Reduce test_feerate_stress iterations, and simply don't run
test_pay_retry under VALGRIND with SLOW_MACHINE at all.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It currently works because we inject it so fast that it's still doing the
txout lookup, but that's about to change.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A long time ago (93dcd5fed7), I
simplified the htlc reload code so it adjusted the amounts for HTLCs
in id order. As we presumably allowed them to be added in that order,
this avoided special-casing overflow (which was about to deliberately
be made harder by the new amount_msat code).
Unfortunately, htlc id order is not canonical, since htlc ids are
assigned consecutively in both directions! Concretely, we can have two HTLCs:
    HTLC #0 LOCAL->REMOTE: 500,000,000 msat, state RCVD_REMOVE_REVOCATION
    HTLC #0 REMOTE->LOCAL: 10,000 msat, state SENT_ADD_COMMIT
On a new remote-funded channel, in which we have 0 balance, these
commits *only* work in this order. Sorting by HTLC ID is not enough!
In fact, we'd have to worry about redemption order as well, as that
matters.
So, regretfully, we offset the balances halfway to UINT64_MAX, then check
they didn't underflow at the end. This loses us this one sanity check,
but that's probably OK.
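A minimal sketch of the trick, with hypothetical names (the real code
uses the amount_msat types):

    #include <assert.h>
    #include <stdint.h>
    typedef uint64_t u64;

    /* Start each balance at UINT64_MAX/2 so applying HTLC deltas in
     * any order cannot wrap; verify and strip the offset at the end. */
    #define BALANCE_OFFSET (UINT64_MAX / 2)

    static void apply_delta_msat(u64 *offset_balance, int64_t delta)
    {
            /* Unsigned arithmetic: a transient dip below the offset
             * is harmless, only the final value matters. */
            *offset_balance += (u64)delta;
    }

    static u64 final_balance_msat(u64 offset_balance)
    {
            /* The one remaining sanity check: no overall underflow. */
            assert(offset_balance >= BALANCE_OFFSET);
            return offset_balance - BALANCE_OFFSET;
    }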
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to change it so we always send our local messages, which
breaks this test. Add a new node which doesn't have any local
messages, so the test works correctly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Sometimes the l3 seeker asks for scids, and the reply contains a
channel that has been closed by the time it checks, so it considers
the updates bad gossip.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Feerate changes are asymmetric, as they can only be sent by the funder.
For FUNDER, the remote feerate is set upon send of
commitment_signed, and the local feerate is set on receipt of
revoke_and_ack.
For non-funder, the local feerate is set on receipt of
commitment_signed, and the remote feerate set on send of
revoke_and_ack. In our code, these two happen together.
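In other words (a hedged sketch with hypothetical names, not the
actual channeld/lightningd code):

    #include <stdint.h>
    typedef uint32_t u32;

    struct feerates { u32 local, remote; };

    /* Funder: two separate transition points. */
    static void funder_sent_commit_sig(struct feerates *f, u32 rate)
    {
            f->remote = rate;
    }
    static void funder_recv_revoke_and_ack(struct feerates *f, u32 rate)
    {
            f->local = rate;
    }

    /* Non-funder: receipt of commitment_signed and send of
     * revoke_and_ack happen together in our code, so both feerates
     * are updated at once. */
    static void fundee_recv_commit_sig(struct feerates *f, u32 rate)
    {
            f->local = rate;
            f->remote = rate;
    }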
channeld gets this right, but lightningd ignored the funder/fundee
distinction, and as a result, receipt of a commitment_signed by the
funder altered fees in the database. If there was a reconnection
event or restart, then these (incorrect) values would be used, causing
us to complain about a 'Bad commit_sig signature' and close the
channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A 'Bad commit_sig signature' was reported by @Javier on Telegram and
@DarthCoin. This was between two c-lightning peers, so definitely our fault.
Analysis of this message revealed the signature was using the wrong
feerate. I finally managed to make a test case which triggered this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Whenever we have multi-connected nodes, out-of-order gossip is possible.
In particular, if a node_announcement is 1 second fresher than the
channel_announcement, a timestamp_filter might get one and not the
other.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Check behavior for user-supplied upfront_shutdown_script via close_to
Header from folded patch 'fix__return__not__iff_well_close_to_the_provided_addr.patch':
fix: return not iff we'll close to the provided addr
*If* we know the key has signed something else (as is the case for
channel_announcement) then we can effectively trust the key derivation.
This matches how LND's VerifyMessage works, though in the next patch
we will document it exactly.
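The check amounts to recovering the key from the recoverable signature
and comparing it against a key we already trust; a sketch using
libsecp256k1's recovery module (the helper name is hypothetical):

    #include <secp256k1.h>
    #include <secp256k1_recovery.h>
    #include <string.h>

    /* Recover the pubkey from a recoverable signature over the message
     * hash, then check it matches a key we already trust (e.g. one
     * that signed a channel_announcement). */
    static int verify_against_known_key(const secp256k1_context *ctx,
                                        const unsigned char sig64[64],
                                        int recid,
                                        const unsigned char msghash32[32],
                                        const secp256k1_pubkey *known_key)
    {
            secp256k1_ecdsa_recoverable_signature rsig;
            secp256k1_pubkey recovered;
            unsigned char a[33], b[33];
            size_t alen = sizeof(a), blen = sizeof(b);

            if (!secp256k1_ecdsa_recoverable_signature_parse_compact(
                        ctx, &rsig, sig64, recid))
                    return 0;
            if (!secp256k1_ecdsa_recover(ctx, &recovered, &rsig, msghash32))
                    return 0;

            secp256k1_ec_pubkey_serialize(ctx, a, &alen, &recovered,
                                          SECP256K1_EC_COMPRESSED);
            secp256k1_ec_pubkey_serialize(ctx, b, &blen, known_key,
                                          SECP256K1_EC_COMPRESSED);
            return alen == blen && memcmp(a, b, alen) == 0;
    }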
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks Twitter helpers @duck1123 and @jochemin for tests!
And @bitconner for the initial test vector.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a report of a 0.7.2 user whose node hadn't appeared on 1ml. Their
node_announcement wasn't visible to my node, either.
I suspect this is a consequence of recent versions reducing the amount of
gossip they send, as well as large nodes increasingly turning off gossip
altogether from some peers (as we do). We should ignore timestamp filters
for our own channels: the easiest way to do this is to push them out
directly from gossipd (other messages are sent via the store).
We change channeld to wrap the local channel_announcements: previously
we just handed them to gossipd as we would any other gossip message
received from our peer. Now gossipd knows to push them out, as they're local.
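Conceptually, the push looks like this (hypothetical names, not
gossipd's actual internals):

    #include <stdbool.h>
    #include <stdint.h>
    typedef uint8_t u8;
    typedef uint32_t u32;

    struct peer {
            /* From the peer's gossip_timestamp_filter, if any. */
            u32 filter_first, filter_range;
    };

    void queue_msg(struct peer *peer, const u8 *msg); /* hypothetical */

    static void maybe_send_gossip(struct peer *peer, const u8 *msg,
                                  u32 timestamp, bool is_local)
    {
            /* Gossip about our own channels is pushed unconditionally;
             * everything else respects the peer's timestamp filter. */
            if (!is_local
                && (timestamp < peer->filter_first
                    || timestamp >= peer->filter_first + peer->filter_range))
                    return;
            queue_msg(peer, msg);
    }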
This interferes with the logic in tests/test_misc.py::test_htlc_send_timeout
which expects the node_announcement message last, so we generalize
that too.
[ Thanks to @trueptolmy for bugfix! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of next release) will offer global features, and also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
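Folding is just a bitwise OR of the two bitmaps; a sketch (names
hypothetical), noting that BOLT feature bitmaps are big-endian, so
shorter maps align at the tail:

    #include <stddef.h>
    #include <stdint.h>
    typedef uint8_t u8;

    /* Fold two feature bitmaps by bitwise OR, aligning at the tail.
     * destlen must be max(alen, blen). */
    static void fold_features(u8 *dest, size_t destlen,
                              const u8 *a, size_t alen,
                              const u8 *b, size_t blen)
    {
            for (size_t back = 0; back < destlen; back++) {
                    u8 byte = 0;
                    if (back < alen)
                            byte |= a[alen - 1 - back];
                    if (back < blen)
                            byte |= b[blen - 1 - back];
                    dest[destlen - 1 - back] = byte;
            }
    }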
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It sometimes fails with a bad_gossip error because the sending node
might not have found out about the channel when it gets a
channel_update. Make sure the whole network knows everything before
we start.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We completely rework test_node_reannounce: it assumes we always ask for
all gossip, and that assumption will be broken in future patches too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our policy is generally to omit fields which aren't sensible.
Also, @niftynei points out the spacing in for loops.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>