Because gossip in this case takes up to a minute, this test took 10
minutes. The workaround is to do the waiting-for-gossip all at once.
Now it takes 362 seconds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Unlike other daemons, closingd doesn't listen to the master, but simply
runs to its own beat.  So instead of responding to the JSON dev_memleak
command, we always check for memory leaks, and make sure the python
tests fail if they see MEMLEAK in the logs.
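The check at exit looks roughly like this (a sketch only: helper names
like memleak_scan and memleak_next are illustrative, not the real
common/memleak.h API):

    static void check_memleak(struct state *state)
    {
            struct htable *memtable = memleak_scan();

            /* Anything still reachable from our state is not a leak. */
            memleak_remove_reachable(memtable, state);

            /* The python tests grep the logs and fail on any MEMLEAK line. */
            for (const void *leak = memleak_next(memtable);
                 leak;
                 leak = memleak_next(memtable))
                    status_broken("MEMLEAK: %p", leak);
    }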
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For onchaind we need to remove globals from memleak consideration;
we also change the htlc pointer to an htlc copy, which simplifies
things.
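The pointer-to-copy change looks roughly like this (struct and field
names here are illustrative):

    struct tracked_output {
            /* Was: const struct htlc_stub *htlc;  -- pointed into a
             * caller-owned array, so its lifetime had to be tracked. */
            struct htlc_stub htlc;
    };

    static void track_htlc(struct tracked_output *out,
                           const struct htlc_stub *stub)
    {
            /* Copy by value: the htlc now lives and dies with the output. */
            out->htlc = *stub;
    }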
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a bit different from the other cases: we need to iterate through
the peers and ask all the ones in openingd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need several notleak() annotations here:
1. The temporary structure which is handed to retry_peer_connected().
It's waiting for the master to respond to our connect_reconnected
message.
2. We don't keep a pointer to the io_conn for a peer, so we need to
mark those as not being a leak.
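In sketch form (allocation and callback names are illustrative):

    /* 1: lives only until the master replies, so annotate it. */
    struct peer_reconnected *pr
            = notleak(tal(daemon, struct peer_reconnected));

    /* 2: we never store the io_conn pointer ourselves, so it would
     * otherwise look unreferenced to the leak detector. */
    notleak(io_new_conn(daemon, fd, peer_connected_init, peer));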
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's a very bounded leak, since we can only have one and it's
connected to the peer lifetime, but we don't need it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have an array of varlen structures (which require a ctx arg), we
should make that arg the array itself (which was tal_arr()), not the
root context.
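In other words (a sketch of the pattern, not a specific call site):

    struct thing *arr = tal_arr(ctx, struct thing, n);

    for (size_t i = 0; i < n; i++)
            /* Allocate each element's variable-length member off the
             * array itself, not the root ctx: freeing the array then
             * frees everything it owns. */
            arr[i].data = tal_arr(arr, u8, datalen[i]);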
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This simplifies lifetime assumptions. Currently all callers keep the
original around, but everything broke when I changed that in the next
patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They were not universally used, and most are trivial accessors anyway.
The exception is getting the channel reserve: we have to multiply by 1000
as well as flip direction, so we keep that one.
The BOLT quotes move to `struct channel_config`.
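The surviving accessor is roughly this (a sketch with illustrative
field layout: the config stores satoshis while we work in millisatoshi
internally, and the reserve imposed on one side comes from the other
side's config):

    static u64 channel_reserve_msat(const struct channel *channel,
                                    enum side side)
    {
            /* Flip direction: our reserve is what the peer demanded of us. */
            return channel->config[!side].channel_reserve_satoshis * 1000;
    }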
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We keep a chain_hash in struct daemon, because otherwise we end up with
`&peer->daemon->rstate->chainparams->genesis_blockhash` which is a bit
ridiculous.
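I.e. roughly (a sketch):

    struct daemon {
            /* ...existing fields... */

            /* Copied from chainparams at init, so handlers can simply
             * pass &daemon->chain_hash. */
            struct bitcoin_blkid chain_hash;
    };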
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only take the pubkey and ignore all other fields, so we might as well
save the cycles used computing the hash for something else.
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
We probably also want to call secp_randomise/wally_secp_randomize here
too, and since these calls all call setup_tmpctx, it probably makes
sense to have a helper function to do all that. Until that's done, I
modified the tests so grepping will show the places where the sequence
of calls is repeated.
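Such a helper might look like this (purely a suggestion, not something
this patch adds; it assumes the usual globals and the names are
illustrative):

    static void common_setup(void)
    {
            setup_locale();
            wally_init(0);
            secp256k1_ctx = wally_get_secp_context();
            setup_tmpctx();
            /* ...and eventually wally_secp_randomize(), as noted above. */
    }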
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
This avoids some very ugly switch() statements which mixed the two,
but we also take the chance to rename 'towire_gossip_' to
'towire_gossipd_' for those inter-daemon messages; they're messages to
gossipd, not gossip messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had at least one bug caused by it not returning true when it had
queued something. Instead, just re-check the queue after it's called.
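The calling code becomes, roughly (a sketch; names are illustrative):

    /* Before: trusted the handler to say whether it queued output. */
    if (handle_peer_msg(peer, msg))
            return write_out(conn, peer);

    /* After: call it, then just look at the queue itself. */
    handle_peer_msg(peer, msg);
    if (!msg_queue_empty(peer->out))
            return write_out(conn, peer);
    return read_next(conn, peer);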
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We shouldn't insist on an exact response match: they can batch replies,
as long as what they send overlaps what we asked for.
We also change to a bitmap to save some memory.
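Concretely, the check becomes something like this (a sketch; variable
names are illustrative):

    /* Reject only if the reply doesn't overlap our query at all. */
    if (first_blocknum + number_of_blocks <= query_start
        || first_blocknum >= query_end)
            return false;

    /* Tick off the blocks it did cover in the query bitmap. */
    u32 start = first_blocknum > query_start ? first_blocknum : query_start;
    u32 end = first_blocknum + number_of_blocks < query_end
            ? first_blocknum + number_of_blocks : query_end;
    for (u32 b = start; b < end; b++)
            bitmap_set_bit(blocks_covered, b - query_start);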
This isn't noted in the CHANGELOG since we don't actually send gossip
range queries except for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Messages from a peer may be invalid in many ways: we send an error
packet in that case. Rather than internally calling peer_error,
however, we make it explicit by having the handle_ functions return
NULL or an error packet.
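So a handler now looks roughly like this (a sketch; the check itself is
illustrative):

    static u8 *handle_pong(struct peer *peer, const u8 *pong)
    {
            if (!pong_was_expected(peer, pong))
                    /* Caller sends this to the peer and fails the conn. */
                    return towire_errorfmt(peer, NULL, "Unexpected pong %s",
                                           tal_hex(tmpctx, pong));

            return NULL;
    }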
Messages from the daemon itself should not be invalid: if one is, we
log an error and close the fd to it. Previously we logged an error but
didn't kill the connection.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now keep multiple commands for a json_connection, and an array of
json_streams.
When a command wants to write something, we allocate a new json_stream
at the end of the array.
We always output from the first available json_stream; once that
command has finished, we free it and move to the next.  Once all are
done, we wake the reader.
This means we won't read a new command if output is still pending, but
as most commands don't start writing until they're ready to write
everything, we still get command parallelism.
In particular, you can now 'waitinvoice' and 'delinvoice' and it will
work even though the 'waitinvoice' blocks.
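The shape of it, roughly (field and helper names are illustrative):

    struct json_connection {
            /* Every command currently in flight on this connection. */
            struct list_head commands;
            /* One output stream per writing command, oldest first;
             * we only ever write from js_arr[0]. */
            struct json_stream **js_arr;
    };

    static struct json_stream *attach_json_stream(struct command *cmd)
    {
            struct json_connection *jcon = cmd->jcon;
            struct json_stream *js = new_json_stream(cmd);
            size_t n = tal_count(jcon->js_arr);

            /* Append: it's only written once the earlier streams finish. */
            tal_resize(&jcon->js_arr, n + 1);
            jcon->js_arr[n] = js;
            return js;
    }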
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>