This will kill any test that has been running for 550 seconds, so that we
get a traceback before the Travis inactivity timeout of 600 seconds
kicks in. The traceback should show us which test got stuck and where.
Signed-off-by: Christian Decker <@cdecker>
pytest was only an indirect dependency so far; this makes it explicit.
The timeout plugin should allow us to kill a stuck test before Travis
kills the job, and thus let us see where it got stuck.
Signed-off-by: Christian Decker <@cdecker>
Because gossip in this case takes up to a minute, this test took 10
minutes. The workaround is to do the waiting-for-gossip all at once.
Now it takes 362 seconds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Unlike other daemons, closingd doesn't listen to the master, but simply
runs to its own beat. So instead of responding to the JSON dev_memleak
command, we always check for memory leaks, and make sure that the
Python tests fail if they see MEMLEAK in the logs.
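A minimal sketch of the shape this takes (the scan/report helpers here are
hypothetical stand-ins for the memleak helpers in common/memleak.h):

    /* Sketch only: memleak_scan() and next_leak() are hypothetical
     * stand-ins for the helpers in common/memleak.h. */
    #include <ccan/tal/tal.h>

    struct leaks;                                   /* opaque for the sketch */
    struct leaks *memleak_scan(const tal_t *ctx);   /* hypothetical */
    const void *next_leak(struct leaks *leaks);     /* hypothetical */
    void status_broken(const char *fmt, ...);       /* assumed log helper */

    static void closing_memleak_check(const tal_t *ctx)
    {
            struct leaks *leaks = memleak_scan(ctx);
            const void *p;

            /* Any MEMLEAK line in the log makes the Python tests fail. */
            while ((p = next_leak(leaks)) != NULL)
                    status_broken("MEMLEAK: %p", p);
    }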
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For onchaind we need to remove globals from memleak consideration;
we also change the htlc pointer to an htlc copy, which simplifies
things.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a bit different from the other cases: we need to iterate through
the peers and ask all the ones in openingd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need several notleak() annotations here:
1. The temporary structure handed to retry_peer_connected(); it's waiting
   for the master to respond to our connect_reconnected message.
2. We don't keep a pointer to a peer's io_conn, so we need to mark those
   as not being leaks (see the sketch below).
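For case 2 the pattern looks roughly like this; notleak() is the
annotation from common/memleak.h, io_new_conn()/io_close() are ccan/io,
and the surrounding names are made up for the sketch:

    #include <ccan/io/io.h>
    #include <ccan/tal/tal.h>
    #include <common/memleak.h>

    /* Placeholder connection plan, just for the sketch. */
    static struct io_plan *peer_conn_init(struct io_conn *conn, void *peer)
    {
            return io_close(conn);
    }

    static void start_peer_conn(const tal_t *daemon, int fd, void *peer)
    {
            /* We never store this pointer anywhere, so tell the leak
             * detector that's deliberate. */
            notleak(io_new_conn(daemon, fd, peer_conn_init, peer));
    }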
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's a very bounded leak, since we can only have one and it's
connected to the peer lifetime, but we don't need it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have an array of varlen structures (which require a ctx arg), we
should make that arg the array itself (which was allocated with
tal_arr()), not the root context.
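Roughly the pattern, using ccan/tal (the struct and parser here are
made-up examples):

    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>

    struct thing {
            u8 *payload;    /* the variable-length part */
    };

    /* Hypothetical parser: the ctx for each payload is the array itself,
     * not rootctx, so tal_free(arr) releases everything in one go. */
    static struct thing *parse_things(const tal_t *rootctx, size_t n)
    {
            struct thing *arr = tal_arr(rootctx, struct thing, n);

            for (size_t i = 0; i < n; i++)
                    arr[i].payload = tal_arrz(arr, u8, 32);
            return arr;
    }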
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This simplifies lifetime assumptions. Currently all callers keep the
original around, but everything broke when I changed that in the next
patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They were not universally used, and most are trivial accessors anyway.
The exception is getting the channel reserve: we have to multiply by 1000
as well as flip direction, so keep that one.
The BOLT quotes move to `struct channel_config`.
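The kept accessor boils down to something like this (a sketch: only the
relevant field is shown, and the function name is illustrative):

    #include <ccan/short_types/short_types.h>

    /* Only the relevant field of struct channel_config, for the sketch. */
    struct channel_config {
            u64 channel_reserve_satoshis;
    };

    /* Illustrative accessor: our reserve is the amount the *other* side's
     * config demands of us, and we want it in millisatoshi. */
    static u64 our_channel_reserve_msat(const struct channel_config *their_config)
    {
            return their_config->channel_reserve_satoshis * 1000;
    }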
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We keep a chain_hash in struct daemon, because otherwise we end up with
`&peer->daemon->rstate->chainparams->genesis_blockhash`, which is a bit
ridiculous.
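i.e. roughly this (struct bitcoin_blkid is the existing block-id type;
the init helper name is just for illustration):

    #include <bitcoin/block.h>

    struct routing_state;                    /* opaque here */

    struct daemon {
            struct routing_state *rstate;
            struct bitcoin_blkid chain_hash; /* cached copy of the genesis hash */
            /* ... */
    };

    /* Illustrative init: copy it once, then call sites can simply use
     * &daemon->chain_hash. */
    static void set_chain_hash(struct daemon *daemon,
                               const struct bitcoin_blkid *genesis_blockhash)
    {
            daemon->chain_hash = *genesis_blockhash;
    }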
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only take the pubkey and ignore all other fields, so we might as well
skip computing the hash and save those cycles for something else.
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
We probably also want to call secp_randomise/wally_secp_randomize here,
and since these call sites all call setup_tmpctx, it probably makes
sense to have a helper function that does all of that. Until that's
done, I modified the tests so that grepping will show the places where
the sequence of calls is repeated.
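Such a helper might look like this (common_test_setup() is a placeholder
name; wally_secp_randomize() and WALLY_OK come from libwally,
setup_tmpctx() from common/utils.h):

    #include <stdlib.h>
    #include <wally_core.h>
    #include <common/utils.h>

    /* Hypothetical helper: one place for the setup sequence the tests
     * currently repeat. */
    static void common_test_setup(const unsigned char *seed32)
    {
            setup_tmpctx();
            /* The secp/wally randomization we probably want too: */
            if (wally_secp_randomize(seed32, 32) != WALLY_OK)
                    abort();
    }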
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
This avoids some very ugly switch() statements which mixed the two,
but we also take the chance to rename 'towire_gossip_' to
'towire_gossipd_' for those inter-daemon messages; they're messages to
gossipd, not gossip messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had at least one bug caused by it not returning true when it had
queued something. Instead, just re-check the queue after it's called.
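In other words, something of this shape (all names here are made up; the
point is to inspect the queue itself rather than trust a boolean return):

    #include <stdbool.h>
    #include <stddef.h>

    struct msg_queue;
    struct daemon_conn;

    bool msg_queue_empty(const struct msg_queue *q);          /* hypothetical */
    void handle_msg(struct daemon_conn *dc, const void *msg); /* hypothetical */
    void wake_writer(struct daemon_conn *dc);                 /* hypothetical */

    static void process_one(struct daemon_conn *dc, struct msg_queue *q,
                            const void *msg)
    {
            handle_msg(dc, msg);
            /* Re-check the queue rather than relying on a bool return. */
            if (!msg_queue_empty(q))
                    wake_writer(dc);
    }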
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>