This adapts the test to the new 'plugin' command: no more sleeping,
since we are synchronous!
This tests the timeout by increasing the 'slowinit' plugin's sleep
duration when it receives init.
This adds a broken plugin to make sure we won't crash because of a
misbehaving plugin (an unmet dependency is the most common case).
I was seeing some accidental pruning under load on Travis; in
particular, we stopped accepting channel_updates because they were 103
seconds old. But making the timeout too long makes the prune test
untenable, so restore a separate flag that this test can use.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It only had an effect if the peer didn't support option_gossip_queries, but
still, we don't want a gossip blast any more.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The test was implicitly relying on us selecting the larger output and then not
touching the smaller, leaving it there for the final `withdraw` to claim. This
ordering of UTXOs is not guaranteed, and in particular can fail when switching
DB backends. To stabilize we just need to make sure to select the change
output as well.
This replaces the hard-coded paths to the `postgres` and `initdb` binaries
with a slightly more flexible search that picks the newest version installed.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
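A minimal sketch of that search, assuming the Debian-style versioned
layout under /usr/lib/postgresql (the path and sort key are assumptions,
not the exact code):

```python
from glob import glob
import os

def find_pg_binaries():
    # Assumes the Debian-style layout /usr/lib/postgresql/<version>/bin;
    # other distributions may need a different search root.
    candidates = glob('/usr/lib/postgresql/*/bin')
    if not candidates:
        raise ValueError("Could not find a postgres installation")
    # Sort numerically on the version directory so that e.g. 11 beats 9.6.
    newest = max(candidates,
                 key=lambda p: [int(d) for d in p.split(os.sep)[-2].split('.')])
    return os.path.join(newest, 'postgres'), os.path.join(newest, 'initdb')
```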
This was weird right from the start, so we just split the table into
integers and blobs, giving each column a well-defined format. It is also
required so postgres doesn't complain about explicit casts in the
`paramTypes` array.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
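As a sqlite3-flavored sketch of the resulting shape (column and variable
names here are illustrative, not necessarily the real schema):

```python
import sqlite3

db = sqlite3.connect(':memory:')
# One well-typed column per storage format, instead of a single column
# that sometimes held integers and sometimes blobs.
db.execute("CREATE TABLE vars (name TEXT PRIMARY KEY,"
           " intval INTEGER, blobval BLOB)")
db.execute("INSERT INTO vars (name, intval) VALUES (?, ?)",
           ('next_pay_index', 1))
db.execute("INSERT INTO vars (name, blobval) VALUES (?, ?)",
           ('genesis_hash', b'\x00' * 32))
```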
We will soon have a postgres backend as well, so we need a way to control the
postgres process and to provision DBs to the nodes. The two interfaces are the
dsn that we pass to the node, and the python query interface needed to query
from tests.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
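A rough sketch of that pair of interfaces (class and method names are
hypothetical, and psycopg2 is just one possible driver):

```python
import psycopg2

class PostgresDb:
    def __init__(self, dbname, port=5432):
        self.dbname = dbname
        self.port = port

    def get_dsn(self):
        # The connection string we hand to the node under test.
        return "postgres://postgres:password@localhost:{}/{}".format(
            self.port, self.dbname)

    def query(self, sql):
        # The python-side interface tests use to inspect the node's DB.
        with psycopg2.connect(self.get_dsn()) as conn:
            with conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
```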
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really
variable.
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing: cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If you send a message which simply changes the timestamp and signature, we
drop it. You shouldn't be doing that, and the door to ignoring them
was opened by option_gossip_query_ex, which would allow clients to
ignore updates with the same checksum.
This is more aggressive at reducing spam messages, but we allow refreshes
(to be conservative, we allow them even when only 1/2 of the way through
the refresh period).
I dropped the now-unnecessary sleep from test_gossip_pruning, too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
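The acceptance rule, sketched in Python (field names and the refresh
period are assumptions):

```python
REFRESH_PERIOD = 14 * 24 * 3600  # assumed two-week refresh cycle

def update_differs(old, new):
    # Everything except timestamp and signature, i.e. roughly what the
    # option_gossip_query_ex checksum covers.
    return (old.flags, old.fees, old.htlc_minimum) != \
           (new.flags, new.fees, new.htlc_minimum)

def should_accept(old, new):
    if update_differs(old, new):
        return True  # a real change: always accept
    # Identical content: treat it as a refresh, and be conservative by
    # accepting it from halfway through the refresh period onwards.
    return new.timestamp >= old.timestamp + REFRESH_PERIOD // 2
```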
gossipd: clean up local channel updates.

Make update_local_channel() use a timer if it's too soon to make another
update.

1. Implement cupdate_different(), which compares two updates.
2. Make update_local_channel() take a single arg for timer usage.
3. Set the timestamp of a non-disable update back 5 minutes, so we can
   always generate a disable update if we need to.
4. Make update_local_channel() itself do the "unchanged update" suppression.
5. Keep a pointer to the current timer so we override any old updates with
   a new one, to avoid a race.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
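Purely as an illustration of the timer trick (Python, not the actual
gossipd C; names and the interval are made up):

```python
import time
import threading

class LocalUpdater:
    MIN_INTERVAL = 60.0  # assumed minimum spacing between updates

    def __init__(self, send_fn):
        self.send_fn = send_fn
        self.last_sent = 0.0
        self.pending = None  # pointer to the current timer (point 5)

    def update(self, cupdate):
        if self.pending is not None:
            self.pending.cancel()  # newer update overrides any queued one
        delay = max(0.0, self.last_sent + self.MIN_INTERVAL - time.time())
        self.pending = threading.Timer(delay, self._fire, args=(cupdate,))
        self.pending.start()

    def _fire(self, cupdate):
        self.pending = None
        self.last_sent = time.time()
        self.send_fn(cupdate)
```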
After switching to a plugin, we verify that we can fund a channel
before we try to contact a peer. We'll need a funded wallet to pass
that check in this test, which verifies that 'fundchannel' cannot be
called for a peer after 'fundchannel_start' has been.
Allow a user to select the UTXO set that will be added to a
transaction via the optional `utxos` parameter.
The parameter should be a list of the form ["txid:vout","..."].
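For example, from a test (assuming the usual python RPC wrapper and the
`listfunds` field names; the helper name is made up):

```python
def withdraw_specific_utxos(node, dest_addr):
    # Spend exactly two known outputs, rather than whatever coin
    # selection would pick; dest_addr is any address we own.
    outs = node.rpc.listfunds()['outputs']
    utxos = ['{}:{}'.format(o['txid'], o['output']) for o in outs[:2]]
    return node.rpc.withdraw(dest_addr, 'all', utxos=utxos)
```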
For now, we can't fully ensure that the broadcast came from a third party;
we can only catch it once the transaction (broadcast by the third party)
is onchain.
531c8d7d9b
In this one, we always send my_current_per_commitment_point, though it's
ignored. And we have our official feature numbers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As per BOLT #2 #message-retransmission:

  if `next_commitment_number` is 1 in both the `channel_reestablish` it
  sent and received:
    - MUST retransmit `funding_locked`
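Reading that requirement directly off the spec (a sketch; the arguments
are assumed to be parsed channel_reestablish messages):

```python
def must_retransmit_funding_locked(sent, received):
    # Retransmit funding_locked iff next_commitment_number is 1 in both
    # the channel_reestablish we sent and the one we received.
    return (sent.next_commitment_number == 1
            and received.next_commitment_number == 1)
```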
It seems we spend a lot of time waiting for `bitcoind` and `lightningd` to
talk to disks. This adds the `TEST_DIR` environment variable, allowing,
for example, `/dev/shm` or a disk faster than the one `/tmp` is on to be
used as the root directory for all test-related files.
Testing this on one of our builder machines cut the time to run the entire
suite under valgrind roughly in half (180-200 seconds vs 440-490 seconds).
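A sketch of how a test helper might honor it (the prefix and fallback are
illustrative); invoked as e.g. `TEST_DIR=/dev/shm pytest tests/`:

```python
import os
import tempfile

def make_test_root():
    # Fall back to /tmp when TEST_DIR isn't set; pointing TEST_DIR at
    # /dev/shm (or any fast disk) moves all test files there.
    base = os.environ.get('TEST_DIR', '/tmp')
    return tempfile.mkdtemp(prefix='ltests-', dir=base)
```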
My machine would accumulate a number of zombie lightningd and bitcoind
processes over time while testing. Investigating this showed that if a
fixture raised an exception during teardown, then other fixtures that had
not yet been torn down would linger around. The issue is that pytest
treats exceptions in fixtures as non-recoverable and therefore will not
catch them and call the remaining ones.
This commit adds a new fixture whose only job is to collect errors from
other fixtures and to ensure that anything that needs cleaning up, e.g.,
processes started by a fixture, is cleaned up before we raise any
collected exception. This is achieved by making any fixture that needs
cleanup depend on the teardown_checks fixture, which also serves as the
central point for collecting and reporting errors.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
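The pattern, sketched (fixture names other than teardown_checks are
hypothetical):

```python
import pytest

@pytest.fixture
def teardown_checks():
    # Exists only to outlive the other fixtures and collect their
    # teardown errors, so each of them still gets a chance to clean up.
    errors = []
    yield errors
    if errors:
        raise ValueError('Errors during teardown: {}'.format(errors))

@pytest.fixture
def node_factory(teardown_checks):
    nodes = []  # stand-in for the real factory state
    yield nodes
    for n in nodes:
        try:
            n.stop()
        except Exception as e:
            teardown_checks.append(e)  # defer, don't raise mid-teardown
```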
This has a slight side-effect of removing the actual begin and commit
statements from the `db_write` hooks, but they are mostly redundant anyway (no
harm in grouping pre-init statements into one transaction, and we know that
each post-init call is supposed to be wrapped anyway).
Signed-off-by: Christian Decker <decker.christian@gmail.com>
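For a consumer replaying the hook's statements, this means supplying the
transaction wrapping itself; a sqlite3-flavored sketch:

```python
import sqlite3

def replay_batch(conn, writes):
    # One db_write payload -> one transaction, since the payload no
    # longer carries explicit BEGIN/COMMIT statements.
    with conn:  # commits on success, rolls back on error
        for stmt in writes:
            conn.execute(stmt)

conn = sqlite3.connect(':memory:')
replay_batch(conn, ['CREATE TABLE t (a INTEGER)',
                    'INSERT INTO t VALUES (1)'])
```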
These are used to do one-time initializations and wait for pending statements
before closing.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Log files were being deleted on memleak errors, since
we weren't marking the node as having an error.
This helper function is designed to handle exactly that case, so
we use the helper function and modify it to print any additional
error messages that are handed back from killall.
Throwing an exception while killing all nodes meant that
we weren't cleaning up all the nodes properly. Instead,
collect the errors and return them to the upper level,
where we report them and terminate as expected.
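In outline (a sketch, not the exact helper):

```python
def killall(nodes):
    # Keep stopping nodes even if one fails; hand the errors back so
    # the caller can report them and exit as expected.
    errors = []
    for i, n in enumerate(nodes):
        try:
            n.stop()
        except Exception as e:
            errors.append('Node #{} failed to stop: {}'.format(i, e))
    return errors
```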