Occasional failure in test_fulfill_incoming_first where the channel
closed before the final message from dev_disconnect was read. The
cause was the peer writing a gossip message and failing with
ECONNRESET before it had read the final message.
(Managed to reproduce under strace -f, FTW).
This is really a symptom of the fact that line_graph's announce=True
didn't wait for node announcements. Let's do that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In e46ce0fc84 I accidentally removed the
actual code which fails the command. As a result, if we retry and it
succeeds later, we can end up "succeeding" a command that had already
started failing, causing us to hit the 'assert(!cmd->have_json_stream);' in
new_json_stream.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we have multiple HTLCs with the same preimage and the same CLTV,
it doesn't matter in which order we treat them (they're literally
identical). But when we offer HTLCs with the same preimage but
different CLTVs, the commitment tx outputs look identical, but the
HTLC txs are different: if we simply take the first HTLC which matches
(and that's not the right one), the HTLC signature we got from them
won't match. As we rely on the signature matching to detect the fee
paid, we get:
onchaind: STATUS_FAIL_INTERNAL_ERROR: grind_fee failed
So we alter match_htlc_output() to return an array of all matching
HTLC indices, which can have more than one entry for offered HTLCs.
If it's our commitment, we loop through until one of the HTLC
signatures matches. If it's their commitment, we choose the HTLC with
the largest CLTV: we're going to ignore it once that hits anyway, so
this is the most conservative approach. If it's a penalty, it doesn't
matter, since we steal all HTLC outputs the same way regardless of CLTV.
For accepted HTLCs, the CLTV value is encoded in the witness script,
so this confusion isn't possible. We nonetheless assert that the
CLTVs all match in that case.
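As a rough illustration of the rule for their commitment (the type and
helper below are hypothetical, not the actual onchaind code): once all
matching indices are collected, we keep the candidate with the largest
CLTV.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: several offered HTLCs matched the same commitment
 * output, so we collected their indices and expiries.  On their
 * commitment we can't test signatures, so keep the largest CLTV: we're
 * going to ignore that output once its height is reached anyway. */
struct htlc_candidate {
	size_t index;		/* index into the HTLC array */
	uint32_t cltv_expiry;	/* absolute expiry of that HTLC */
};

static size_t pick_largest_cltv(const struct htlc_candidate *cands, size_t n)
{
	size_t best = 0;
	for (size_t i = 1; i < n; i++)
		if (cands[i].cltv_expiry > cands[best].cltv_expiry)
			best = i;
	return cands[best].index;
}
```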
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If there are two HTLCs with the same preimage, lightningd would always
find the first one. By including the id in the `struct htlc_stub`
it's both faster (normal HTLC lookup) and allows lightningd to detect
that onchaind wants to fail both of them.
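A minimal sketch of the shape of this (field names here are
illustrative, not the real struct):

```c
#include <stdint.h>

/* Hypothetical sketch: carrying the HTLC id in the stub lets lightningd
 * look the HTLC up directly, and distinguish two HTLCs that share a
 * payment hash, instead of always finding the first hash match. */
struct htlc_stub_sketch {
	int owner;		/* which side offered it */
	uint32_t cltv_expiry;	/* absolute expiry height */
	uint64_t id;		/* channel-level HTLC id */
	uint8_t ripemd[20];	/* RIPEMD160 of the payment hash */
};
```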
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We set up HTLCs with the same preimage and both different and same
CLTVs in both directions, then make sure that onchaind is OK and that
the HTLCs are failed without causing downstream failure.
We do this for both our-unilateral and their-unilateral cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Create a second HTLC with a different CLTV but the same preimage; onchaind
uses the wrong signature and fails to grind it.
Reported-by: molz (#c-lightning)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was suggested by Pierre-Marie as the solution to the 'same HTLC,
different CLTV' signature mismatch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The parameter is 'payment_hash' not 'hash', and the 'description' parameter
wasn't documented at all.
Reported-by: @darosior
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And there's a difference between no description and "" as a description:
for no description, listpayments doesn't show the field at all. So fix
that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We spend quite a bit of time in libsecp256k1 moving them to and from
DER encoding. With a bit of care, we can transfer the raw bytes from
gossipd and manually decode them so a malformed one can't make us
abort().
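A sketch of the decode step, assuming the values in question are the
33-byte compressed public keys in the gossip reply (an assumption; the
helper name is made up):

```c
#include <stdbool.h>
#include <secp256k1.h>

/* Hypothetical sketch, assuming these are 33-byte compressed pubkeys:
 * parse the raw bytes explicitly and report failure to the caller, so a
 * malformed value is rejected instead of being handed to libsecp256k1
 * in a way that could trigger its abort() callback. */
static bool pubkey_from_raw(const secp256k1_context *ctx,
			    const unsigned char raw[33],
			    secp256k1_pubkey *key)
{
	return secp256k1_ec_pubkey_parse(ctx, key, raw, 33) == 1;
}
```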
Before:
real 0m0.629000-0.695000(0.64985+/-0.019)s
After:
real 0m0.359000-0.433000(0.37645+/-0.023)s
At this point, the main issues are 11% of time spent in ccan/io's
backend_wake (I tried using a hash table there, but that actually makes
the small-number-of-fds case slower), and 65% of gossipd's time is
in marshalling the response (all those tal_resize calls add up!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test case is a mainnet gossip store with 22107 channels, and
time to do `lightning-cli listchannels`:
Before: `lightning-cli listchannels` DEVELOPER=0
real 0m1.303000-1.324000(1.3114+/-0.0091)s
After:
real 0m0.629000-0.695000(0.64985+/-0.019)s
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's a very ugly one-liner; really ccan/io should have an io_replan
for this, but it would have to be written carefully, as ccan/io
currently assumes plans don't change. In this case, we know
it's in io_write, and we're just moving a pointer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Such an API is required once we stream the result directly. Almost all
our handlers already fit this pattern, or nearly do.
We remove new_json_result() in favor of explicit json_stream_success()
and json_stream_fail(), while still allowing command_fail() if you just
want a simple all-in-one fail wrapper.
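A rough sketch of a handler in the new style (the json_object_start /
json_add_string / command_success helpers are assumed from the usual
lightningd JSON code; exact signatures in the tree may differ):

```c
/* Hypothetical sketch: build the reply on a stream obtained from
 * json_stream_success() instead of allocating a new_json_result()
 * buffer first and copying it out at the end. */
static void json_example(struct command *cmd)
{
	struct json_stream *response = json_stream_success(cmd);

	json_object_start(response, NULL);
	json_add_string(response, "status", "complete");
	json_object_end(response);
	command_success(cmd, response);
}
```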
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This isn't a big change, since we basically dump the entire JSON
result string into the membuf and then write it out, but it's prep for the
next changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We occasionally had a Travis hang in test_multirpc, due to a
thinko in the prior patch: if a command completes immediately, it will
do the wake before we go to sleep. That means we don't digest the
rest of the buffer until the next write.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a DoS if we keep reading commands and don't insist the client
read the responses.
My initial implementation simply removed the io_duplex, but that
doesn't work if we want to inject notifications in the stream (as we
will eventually want to do), so we operate it as duplex but have each
side wake the other when it's done.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test case is a mainnet gossip store with 22107 channels, and
time to do `lightning-cli listchannels`:
Before: `lightning-cli listchannels` DEVELOPER=0
real 0m1.396000-1.409000(1.4022+/-0.005)s
After:
real 0m1.307000-1.320000(1.3128+/-0.005)s
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's the only user of them, and it's going to get optimized.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This also highlights the danger of searching the logs: that error
appeared previously in the logs, so we didn't notice that the actual
withdraw call gave a different error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Some people were alarmed that the state was set to "Loaded from
database" indefinitely. Saying that we are trying to reconnect may be
more informative.
And use wallet_forward_status_in_db() everywhere in db code.
And clean up an extra CHANGELOG.md entry (looks like a rebase error?)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The left join makes sure we still get the results, but referencing
the missing fields and/or attempting to write them to the JSON-RPC
result would cause unforeseen problems, so just omit them if we forgot
something.
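For illustration, a minimal sqlite3-level sketch of that rule (the real
code goes through the wallet's own db wrappers):

```c
#include <sqlite3.h>
#include <stdio.h>

/* Hypothetical sketch: with a LEFT JOIN a joined column may come back
 * NULL, so check before emitting the field and silently omit it
 * otherwise. */
static void print_out_channel(sqlite3_stmt *stmt, int col)
{
	if (sqlite3_column_type(stmt, col) == SQLITE_NULL)
		return;		/* field omitted from the JSON output */
	printf("\"out_channel\": \"%s\"\n",
	       (const char *)sqlite3_column_text(stmt, col));
}
```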
Adapts the `test_forward_stats` test to include checks for the
`forwarded_payments` table. Will add checks for the `listforwardings`
RPC call next.
Signed-off-by: Christian Decker <@cdecker>
When the wrong key is used, the remote end simply hangs up.
We used to get a random errno, which tends to be "Operation now in progress."
Now it's defined to be 0, so detect that and provide a better error.
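A sketch of the detection, assuming the handshake path leaves errno at 0
when the peer simply hangs up (names here are illustrative):

```c
#include <err.h>
#include <errno.h>
#include <unistd.h>

/* Hypothetical sketch: a zero errno after a short handshake read means
 * the peer closed the connection cleanly, which usually means the wrong
 * key was used, so say so instead of printing a random errno string. */
static void read_handshake_reply(int fd, unsigned char *buf, size_t len)
{
	errno = 0;
	if (read(fd, buf, len) != (ssize_t)len) {
		if (errno == 0)
			errx(1, "Peer closed connection: wrong key?");
		err(1, "Reading handshake reply");
	}
}
```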
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was from a different series, so I just cherry-picked it.
It adds ccan/membuf as a dependency of ccan/rbuf, though we don't use
it directly yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Give a clear error at the beginning if it's not a bolt11 payment,
rather than falling foul of other checks.
This will work at least until some altcoin adopts the 'ln' prefix :)
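A minimal sketch of the early check (the real handler plugs into pay's
error reporting):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: reject anything that doesn't even start with the
 * BOLT11 "ln" prefix before running the heavier bech32/signature checks. */
static bool looks_like_bolt11(const char *str)
{
	return strncmp(str, "ln", 2) == 0;
}
```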
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This covers both cbde3e20f7 which added
the parameter names, and d23a0e8adc which
added fallback for missing man page entries.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>