I don't remember ever seeing a bug which only showed up in VALGRIND=1 with developer
mode disabled, so don't test that, and spread out the other tests more evenly.
In addition, disable the worst-performing tests in DEVELOPER=0 mode.
Here are the timings from my build machine: the worst 6 (- DEVELOPER=0 VALGRIND=0)
with the same tests (+ DEVELOPER=1 VALGRIND=1)
-452.42s call tests/test_pay.py::test_channel_spendable
+87.69s call tests/test_pay.py::test_channel_spendable
-335.66s call tests/test_gossip.py::test_gossip_store_compact_on_load
+47.41s call tests/test_gossip.py::test_gossip_store_compact_on_load
-332.07s call tests/test_connection.py::test_opening_tiny_channel
+89.71s call tests/test_connection.py::test_opening_tiny_channel
-331.97s call tests/test_pay.py::test_channel_spendable_large
+56.23s call tests/test_pay.py::test_channel_spendable_large
-305.28s call tests/test_invoices.py::test_invoice_routeboost
+37.57s call tests/test_invoices.py::test_invoice_routeboost
-284.28s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
+49.12s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is probably worth preventing.
1. Our depth estimate would be inaccurate, possibly leading to us
timing out too early.
2. If we're not up to date, our onchain funds are unknown.
3. We wouldn't be able to send or receive HTLCs until we're synced anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We still want to allow incoming connections and the reestablishment of
channels, but if a peer tries to give us an HTLC, stall until we're
synced.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we don't know the block height, we shouldn't be sending HTLCs. This
stops us forwarding HTLCs as well as sending new payments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
You're supposed to be able to hand mock_rpc either a function to call
or a canned dict response. We never did the latter, and the logic was
broken: it was testing the key, not the value, for whether it was a
dict. And it could never have given a valid response anyway, since it
wouldn't know the id to use. So assume it's a successful result.
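Roughly what the fixed dispatch looks like, as a Python sketch (the
helper and variable names here are illustrative, not the actual test
code):

    def mock_rpc_dispatch(mocks, request):
        # The mock for a method is either a callable or a canned dict.
        # The old code checked the *key* with isinstance(), so the dict
        # branch could never trigger.
        entry = mocks[request['method']]
        if callable(entry):
            return entry(request)
        # A canned dict can't know the request id in advance, so wrap it
        # as a successful result and fill the id in here.
        return {'jsonrpc': '2.0', 'id': request['id'], 'result': entry}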
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If l2 doesn't think we're onchain yet, it treats the new connection from l1
as a reconnection, triggering 'ValueError: 1 nodes had unexpected reconnections'
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to change the API, so this makes the tests still work
across the transition (and, as a bonus, tests our backwards compat
shim).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was the initial issue addressed by #2756; now we just test
that everything works as expected.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is just the test that we use to verify block backfilling below the wallet
birth height is working correctly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
@renepickhardt has a shell script we broke. While we still produce
perfectly valid JSON, we should not gratuitously change tool output.
Plus, I prefer not having the space before the ':'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This test is spawning 100 nodes concurrently, which is a lot even when not
running with `valgrind`, especially when executing tests in parallel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is a follow-up to #2892. Since we now attempt to lock the PID file
before starting plugins, we need to make sure that we actually use a
unique lightning directory for anything that attempts to call `--help`.
Otherwise we may conflict with a `lightningd` that is running against
that directory.
Notice that this still means we will be unable to call `--help` on
`lightningd` if we have a running instance, but isolation in this case
is good: otherwise we'd be reading the default config anyway.
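For illustration, a minimal sketch of the isolation this needs (the
helper is hypothetical; `--lightning-dir` and `--help` are the real
options):

    import subprocess
    import tempfile

    def lightningd_help(executable='lightningd'):
        # Point --help at a throwaway directory so it can never race with
        # the PID-file lock of a lightningd running in the default dir.
        with tempfile.TemporaryDirectory() as tmpdir:
            return subprocess.run(
                [executable, '--lightning-dir=' + tmpdir, '--help'],
                capture_output=True, text=True).stdout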
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We recently noticed that the way we unpack the call arguments for hooks
and notifications in pylightning breaks pretty quickly once you start
changing the hook and notification params. If you add params, they will
not get mapped correctly, causing the plugin to error out.
This can be fixed by adding a `VAR_KEYWORD` argument to the callbacks,
i.e., by adding a single `**kwargs` argument at the end of the
signature. This commit adds a check that such a catch-all argument
exists, and emits a warning if it doesn't.
It also fixes up the plugins that we ship ourselves.
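A minimal example of the shape the check expects, assuming the usual
pylightning `Plugin` boilerplate (the hook payload fields shown are
illustrative):

    from lightning import Plugin

    plugin = Plugin()

    @plugin.hook("htlc_accepted")
    def on_htlc_accepted(onion, htlc, plugin, **kwargs):
        # The trailing **kwargs catches any params added to the hook
        # later, so the plugin keeps working instead of erroring out.
        return {"result": "continue"}

    plugin.run()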
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Dumb programs which have a --daemon option call fork() early. This is
terrible UX since startup errors get lost: the program exits with
"success" immediately, then you discover via the logs that it didn't
start at all.
However, forking late introduced a heap of problems with changing
pids. Instead, fork early but keep stderr and the parent around: if
we fail early on, the parent fails with us. We release our parent
with an explicit action just before the main loop.
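The actual code is C inside lightningd, but the shape of the pattern is
roughly this (a hedged Python sketch, not the real implementation):

    import os, sys

    def daemonize_early():
        # Fork right away, but keep the parent blocked on a pipe so any
        # startup failure still shows up in the parent's exit status.
        read_fd, write_fd = os.pipe()
        if os.fork() != 0:
            os.close(write_fd)
            verdict = os.read(read_fd, 1)          # wait for the child
            sys.exit(0 if verdict == b'0' else 1)  # mirror its result
        os.close(read_fd)
        return write_fd

    def release_parent(write_fd, ok=True):
        # Called just before entering the main loop (or on fatal error).
        os.write(write_fd, b'0' if ok else b'1')
        os.close(write_fd)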
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We create our children and then fork, so we're not their parent. I
noticed this because 'lightning-cli stop' takes a long time: it tries
to wait for them, and they don't respond.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note that we move adding the plugin to the plugins list to the end, otherwise
the hook from logging can examine the (uninitialized) plugin.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our previous param support was a bit limited in this case.
We create a dev- command multiplexer, so we can exercise it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have had a few issues with malformed data in the past, so this time
we check the data thoroughly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Instead of taking over the ->cmd pointer, append ourselves to a list
of cancels. This fixes the test_funding_cancel_race.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And clean up some dev ones which actually happen (mainly by calling
channel_fail_permanent which logs UNUSUAL, rather than
channel_internal_error which logs BROKEN).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rewriting the gossip_store is much simpler when we don't have any
pointers into it, so add some simple offline compaction code
and disable the automatic compaction code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>