When developing on regtest or testnet it is really inconvenient to
have to fake traffic and generate blocks just to get estimatesmartfee
to return a valid estimate. This change just falls back to the minimum
feerate if bitcoind doesn't return a valid estimate.
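A minimal sketch of that fallback, assuming a hypothetical
feerate_or_floor() helper and an assumed floor value; the real wiring in
lightningd differs:

    /* Sketch only: if bitcoind's estimatesmartfee gave us nothing useful
     * (common on regtest/testnet), fall back to an assumed minimum
     * feerate instead of failing. */
    #include <stdint.h>

    #define ASSUMED_FEERATE_FLOOR 253   /* illustrative value, sat/kw */

    static uint32_t feerate_or_floor(int64_t estimated_sat_per_kw)
    {
            /* bitcoind reports "no estimate" as a missing/negative value. */
            if (estimated_sat_per_kw <= 0)
                    return ASSUMED_FEERATE_FLOOR;
            return (uint32_t)estimated_sat_per_kw;
    }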
Reported-by: Rene Pickhardt <@renepickhardt>
Signed-off-by: Christian Decker <@cdecker>
The only change is that the final_incorrect_htlc_amount field is now
64-bit. Since no implementation yet parses that field, we just updated it
quietly in the spec.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always have an addr entry in the db (though it may be the ephemeral
socket address the peer connected in from), so we never have to handle
a NULL address.
While we're there, simplify new_peer() so it doesn't take the features
arguments; the only caller that cares updates the features immediately
anyway.
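For illustration, the shape of the simplified constructor; names and
types here are paraphrased, not the exact lightningd declarations:

    /* Sketch only.  Before, callers passed feature bitmaps
     * (global/local features) at creation time; after, the peer is
     * created without them and the one caller that cares updates the
     * features immediately. */
    #include <stdint.h>

    struct lightningd;
    struct pubkey;
    struct wireaddr_internal;
    struct peer;

    struct peer *new_peer(struct lightningd *ld, uint64_t dbid,
                          const struct pubkey *id,
                          const struct wireaddr_internal *addr);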
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In one case we can reduce the work; in the others we eliminate it
entirely when running under VALGRIND.
Here are the ten slowest tests on my laptop:
469.75s call tests/test_closing.py::test_closing_torture
243.61s call tests/test_closing.py::test_onchain_multihtlc_our_unilateral
222.73s call tests/test_closing.py::test_onchain_multihtlc_their_unilateral
217.80s call tests/test_closing.py::test_closing_different_fees
146.14s call tests/test_connection.py::test_dataloss_protection
138.93s call tests/test_connection.py::test_restart_many_payments
129.66s call tests/test_gossip.py::test_gossip_persistence
128.73s call tests/test_connection.py::test_no_fee_estimate
122.46s call tests/test_misc.py::test_htlc_send_timeout
118.79s call tests/test_closing.py::test_onchain_dust_out
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
generate was deprecated some time ago, so we added the generate_block()
helper. But many direct calls crept back in, and bitcoind's git master
now refuses them.
(test_blockchaintrack relied on the return value, so make generate_block()
return the list of blocks.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
After the Ubuntu 18.10 upgrade, there are lots of new flake8 warnings.
$ flake8 --version:
3.5.0 (mccabe: 0.6.1, pycodestyle: 2.4.0, pyflakes: 1.6.0) CPython 3.6.7rc1 on Linux
Note that W503 warns about line breaks before binary operators, and
W504 complains about line breaks after them. I prefer W504, so disable
W503.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Occasional failure in test_fulfill_incoming_first where the channel
closed before the final message from dev_disconnect was read. The cause
was the peer writing a gossip message and failing due to ECONNRESET
before it read the final message.
(Managed to reproduce under strace -f, FTW).
This is really a symptom of the fact that line_graph's announce=True
didn't wait for node announcements. Let's do that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In e46ce0fc84 I accidentally removed the
actual code which fails the command. As a result, if we retry and it
succeeds later, we can end up "succeeding" a command that had already
started failing, causing us to hit the 'assert(!cmd->have_json_stream);'
in new_json_stream.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we have multiple HTLCs with the same preimage and the same CLTV,
it doesn't matter what order we treat them (they're literally
identical). But when we offer HTLCs with the same preimage but
different CLTVs, the commitment tx outputs look identical, but the
HTLC txs are different: if we simply take the first HTLC which matches
(and that's not the right one), the HTLC signature we got from them
won't match. As we rely on the signature matching to detect the fee
paid, we get:
onchaind: STATUS_FAIL_INTERNAL_ERROR: grind_fee failed
So we alter match_htlc_output() to return an array of all matching
HTLC indices, which can have more than one entry for offered HTLCs.
If it's our commitment, we loop through until one of the HTLC
signatures matches. If it's their commitment, we choose the HTLC with
the largest CLTV: we're going to ignore it once that hits anyway, so
this is the most conservative approach. If it's a penalty, it doesn't
matter, since we steal all HTLC outputs the same way regardless of CLTV.
For accepted HTLCs, the CLTV value is encoded in the witness script,
so this confusion isn't possible. We nonetheless assert that the
CLTVs all match in that case.
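A sketch of that selection logic, with simplified types and hypothetical
helpers (matches_output() and check_htlc_sig() stand in for the real
onchaind code around match_htlc_output(); the penalty case, where any
match will do, is omitted):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct htlc_stub {
            uint32_t cltv_expiry;
            /* ...other members elided... */
    };

    /* Hypothetical: does this HTLC correspond to this commitment output? */
    bool matches_output(const struct htlc_stub *htlc, size_t outnum);
    /* Hypothetical: does their HTLC signature check out for this one? */
    bool check_htlc_sig(const struct htlc_stub *htlc, size_t outnum);

    /* Return the index of the HTLC to use for this output, or -1. */
    static int resolve_htlc(const struct htlc_stub *htlcs, size_t num_htlcs,
                            size_t outnum, bool our_commitment)
    {
            int best = -1;

            for (size_t i = 0; i < num_htlcs; i++) {
                    if (!matches_output(&htlcs[i], outnum))
                            continue;
                    if (our_commitment) {
                            /* Ours: the HTLC signature we hold tells us
                             * which candidate is the right one. */
                            if (check_htlc_sig(&htlcs[i], outnum))
                                    return (int)i;
                    } else {
                            /* Theirs: take the largest CLTV; we ignore
                             * the HTLC once that hits anyway, so this is
                             * the most conservative choice. */
                            if (best < 0 ||
                                htlcs[i].cltv_expiry > htlcs[best].cltv_expiry)
                                    best = (int)i;
                    }
            }
            return best;
    }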
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If there are two HTLCs with the same preimage, lightningd would always
find the first one. By including the id in the `struct htlc_stub`, the
lookup is both faster (a normal by-id HTLC lookup) and lightningd can
detect that onchaind wants to fail both of them.
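Roughly, the addition looks like this (member names and types are
paraphrased, not the exact onchaind wire definition):

    #include <stdint.h>

    struct ripemd160 { uint8_t u8[20]; };

    /* Paraphrased illustration of struct htlc_stub with the new member. */
    struct htlc_stub {
            int owner;               /* which side offered the HTLC */
            uint32_t cltv_expiry;
            struct ripemd160 ripemd; /* hash used to match the output script */
            uint64_t id;             /* NEW: the HTLC id on the channel */
    };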
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We set up HTLCs with the same preimage and both different and same
CLTVs in both directions, then make sure that onchaind is OK and that
the HTLCs are failed without causing downstream failure.
We do this for both our-unilateral and their-unilateral cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Create a second HTLC with a different CLTV but the same preimage;
onchaind uses the wrong signature and fails to grind the fee.
Reported-by: molz (#c-lightning)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was suggested by Pierre-Marie as the solution to the 'same HTLC,
different CLTV' signature mismatch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The parameter is 'payment_hash' not 'hash', and the 'description' parameter
wasn't documented at all.
Reported-by: @darosior
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And there's a difference between no description and "" as a description:
for no description, listpayments doesn't show the field at all. So fix
that.
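A minimal sketch of the shape of the fix (json_add_string's declaration
is paraphrased here): only emit the field when a description was
actually supplied, so "no description" and an empty string stay
distinguishable.

    #include <stddef.h>

    struct json_stream;
    /* Paraphrased declaration of the real JSON helper. */
    void json_add_string(struct json_stream *js, const char *fieldname,
                         const char *value);

    static void maybe_add_description(struct json_stream *js,
                                      const char *description)
    {
            /* NULL means the payment had no description: omit the field. */
            if (description != NULL)
                    json_add_string(js, "description", description);
    }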
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We spend quite a bit of time in libsecp256k1 moving them to and from
DER encoding. With a bit of care, we can transfer the raw bytes from
gossipd and manually decode them so a malformed one can't make us
abort().
Before:
real 0m0.629000-0.695000(0.64985+/-0.019)s
After:
real 0m0.359000-0.433000(0.37645+/-0.023)s
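The underlying principle, in rough form and using a public key as the
example (this is not the actual gossipd change): parse raw bytes that
crossed a process boundary with a call that reports failure, so
malformed input is rejected instead of tripping an abort() in a
conversion helper.

    /* Illustration only, not the real gossipd/lightningd plumbing. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <secp256k1.h>

    static bool pubkey_from_raw(const secp256k1_context *ctx,
                                const unsigned char *raw, size_t len,
                                secp256k1_pubkey *key)
    {
            /* Returns 0 for malformed input; the caller treats it as bad
             * data rather than asserting. */
            return secp256k1_ec_pubkey_parse(ctx, key, raw, len) == 1;
    }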
At this point, the main issues are the 11% of time spent in ccan/io's
backend_wake (I tried using a hash table there, but that actually makes
the small-number-of-fds case slower), and the 65% of gossipd's time
spent marshalling the response (all those tal_resize calls add up!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test case is a mainnet gossip store with 22107 channels, and
time to do `lightning-cli listchannels`:
Before: `lightning-cli listchannels` DEVELOPER=0
real 0m1.303000-1.324000(1.3114+/-0.0091)s
After:
real 0m0.629000-0.695000(0.64985+/-0.019)s
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's a very ugly one-liner; really ccan/io should have an io_replan
for this, but it would have to be written carefully, since ccan/io
currently assumes plans don't change. In this case, we know it's in
io_write, and we're just moving a pointer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Such an API is required for when we stream the output directly. Almost
all our handlers fit this pattern already, or nearly do.
We remove new_json_result() in favor of explicit json_stream_success()
and json_stream_fail(), while still allowing command_fail() if you just
want a simple all-in-one failure wrapper.
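The approximate shape of a handler under the new API (the declarations
are paraphrased here so the sketch stands alone; the real ones live in
lightningd's JSON-RPC headers):

    struct command;
    struct json_stream;

    /* Paraphrased declarations. */
    struct json_stream *json_stream_success(struct command *cmd);
    void json_object_start(struct json_stream *js, const char *fieldname);
    void json_add_string(struct json_stream *js, const char *fieldname,
                         const char *value);
    void json_object_end(struct json_stream *js);
    void command_success(struct command *cmd, struct json_stream *js);

    static void json_example(struct command *cmd)
    {
            /* Build the result directly into the stream... */
            struct json_stream *response = json_stream_success(cmd);

            json_object_start(response, NULL);
            json_add_string(response, "status", "ok");
            json_object_end(response);
            /* ...then hand it back instead of a new_json_result(). */
            command_success(cmd, response);
    }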
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This isn't a big change, since we basically dump the entire JSON
result string into the membuf and then write it out, but it's prep for
the next changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We occasionally had a Travis hang in test_multirpc, due to a thinko in
the prior patch: if a command completes immediately, it does the wake
before we go to sleep. That means we don't digest the rest of the
buffer until the next write arrives.
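The general pattern of the fix, in sketch form with hypothetical
helpers (the real code is built on ccan/io and the JSON-RPC buffer):
drain whatever is already buffered before sleeping, so an early wake
can't be lost.

    #include <stdbool.h>
    #include <stddef.h>

    struct conn_buf {
            char *data;
            size_t used;
    };

    /* Hypothetical: parse and dispatch one complete command, if present. */
    bool parse_one_command(struct conn_buf *buf);
    /* Hypothetical: block until more bytes arrive. */
    void wait_for_more_input(struct conn_buf *buf);

    static void read_loop(struct conn_buf *buf)
    {
            for (;;) {
                    /* Drain everything we already have... */
                    while (parse_one_command(buf))
                            ;
                    /* ...and only then sleep for the next write. */
                    wait_for_more_input(buf);
            }
    }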
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a DoS risk if we keep reading commands without insisting that
the client read the responses.
My initial implementation simply removed the io_duplex, but that
doesn't work if we want to inject notifications into the stream (as we
will eventually want to do), so we keep it duplex but have each side
wake the other when it's done.
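A sketch of that wake-each-other pattern using ccan/io-style
io_wait()/io_wake() calls; the per-connection state and callback names
here are simplified stand-ins, not the real jsonrpc code.

    #include <ccan/io/io.h>

    /* Simplified per-connection state; the members are only wait keys. */
    struct rpc_conn {
            int reader_waiting;
            int writer_waiting;
    };

    struct io_plan *read_json(struct io_conn *conn, struct rpc_conn *rpc);
    struct io_plan *write_json(struct io_conn *conn, struct rpc_conn *rpc);

    /* Reader: got a command; don't read more until the writer says so. */
    static struct io_plan *reader_pause(struct io_conn *conn,
                                        struct rpc_conn *rpc)
    {
            io_wake(&rpc->writer_waiting);
            return io_wait(conn, &rpc->reader_waiting, read_json, rpc);
    }

    /* Writer: response flushed; let the reader continue, then sleep. */
    static struct io_plan *writer_done(struct io_conn *conn,
                                       struct rpc_conn *rpc)
    {
            io_wake(&rpc->reader_waiting);
            return io_wait(conn, &rpc->writer_waiting, write_json, rpc);
    }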
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test case is a mainnet gossip store with 22107 channels, and
time to do `lightning-cli listchannels`:
Before: `lightning-cli listchannels` DEVELOPER=0
real 0m1.396000-1.409000(1.4022+/-0.005)s
After:
real 0m1.307000-1.320000(1.3128+/-0.005)s
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's the only user of them, and it's going to get optimized.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This also highlights the danger of searching the logs: that error
appeared previously in the logs, so we didn't notice that the actual
withdraw call gave a different error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Some people were alarmed that the state was set to "Loaded from
database" indefinitely. Saying that we are trying to reconnect may be
more informative.
Also use wallet_forward_status_in_db() everywhere in the db code, and
clean up an extra CHANGELOG.md entry (looks like a rebase error?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The left join should make sure we still get the results, but
referencing the missing fields and/or attempting to write them to the
JSON-RPC result would cause unforeseen problems. So just omit the
fields if we forgot something.
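For illustration only (the real code goes through lightningd's db and
JSON helpers; the column and field names below are assumptions): with a
LEFT JOIN the joined columns can be NULL, so check before emitting the
field.

    #include <sqlite3.h>

    struct json_stream;
    /* Paraphrased declaration of a JSON helper. */
    void json_add_u64(struct json_stream *js, const char *fieldname,
                      unsigned long long value);

    static void maybe_add_out_msatoshi(struct json_stream *js,
                                       sqlite3_stmt *stmt, int col)
    {
            /* Omit the field entirely if the join produced NULL. */
            if (sqlite3_column_type(stmt, col) == SQLITE_NULL)
                    return;
            json_add_u64(js, "out_msatoshi",
                         (unsigned long long)sqlite3_column_int64(stmt, col));
    }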
Adapts the `test_forward_stats` test to include checks for the
`forwarded_payments` table. Will add checks for the `listforwardings`
RPC call next.
Signed-off-by: Christian Decker <@cdecker>