We hit the timestamp assert on #2750; it shouldn't happen, but crashing
doesn't leave much information.
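The direction of the fix, as a minimal sketch (the names, types and even the exact condition here are illustrative, not the actual gossipd code): log the offending values and recover instead of assert()ing.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: report the bad timestamp pair instead of dying, so a
     * report like #2750 carries enough detail to debug. */
    static bool check_timestamp(uint32_t ts, uint32_t prev_ts)
    {
        if (ts < prev_ts) {
            fprintf(stderr, "BROKEN: timestamp %u < previous %u\n",
                    ts, prev_ts);
            return false;
        }
        return true;
    }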
Reported-by: @m-schmook
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It wasn't invalid due to a missing channel_update, but in fact was a
bad checksum due to a cut & paste bug. Fix that, and assert it's not
actually truncating.
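The truncation assert, sketched in general shape only (the checksum context is elided; this is not the actual code):

    #include <assert.h>
    #include <stdint.h>

    /* When packing a wider value into a narrower field, assert the
     * cast is lossless rather than silently truncating. */
    static uint32_t to_u32(uint64_t v)
    {
        assert((uint64_t)(uint32_t)v == v);
        return (uint32_t)v;
    }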
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This happened on Travis, and the gossip_store was a suspicious 4096
bytes long. This implies they're using some non-atomic filesystem
(gossipd always does atomic writes to gossip_store), but if they are,
others surely are too.
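The recovery, sketched with hypothetical names: if a record's length field points past EOF, the tail was never fully written, so cut the store back to the last complete record.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* 'start' is the offset of the record's header. */
    static bool record_complete(int fd, off_t start, size_t hdrlen,
                                uint16_t reclen, off_t filelen)
    {
        if (start + (off_t)(hdrlen + reclen) > filelen) {
            /* Partially-flushed write: drop the incomplete tail. */
            if (ftruncate(fd, start) != 0)
                perror("ftruncate");
            return false;
        }
        return true;
    }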
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If something went wrong and there was an old one, we were
appending to it!
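The one-line shape of the fix, sketched (path and helper name are illustrative): create the file with O_TRUNC so a leftover store is emptied, never appended to.

    #include <fcntl.h>

    static int open_fresh_store(const char *path)
    {
        /* O_TRUNC: if an old file is still lying around,
         * empty it rather than appending to it. */
        return open(path, O_WRONLY|O_CREAT|O_TRUNC, 0600);
    }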
Reported-by: @SimonVrouwe
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It needs this in compat mode to detect the old (pre-0.6.3) end-of-JSON
marker, but it always runs the first command in compat mode.
This was never really reliable, since the first command could be
destined for a plugin, whose JSON we simply pass through (though
carefully appending the expected '\n\n' if it isn't already there).
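The pass-through path, sketched with the buffer handling simplified: append the '\n\n' terminator only when the plugin's response doesn't already end with it.

    #include <stdio.h>
    #include <string.h>

    static void write_response(FILE *out, const char *buf, size_t len)
    {
        fwrite(buf, 1, len, out);
        /* Old (pre-0.6.3) framing: responses end with a blank
         * line; add it if the passed-through JSON lacks one. */
        if (len < 2 || memcmp(buf + len - 2, "\n\n", 2) != 0)
            fputs("\n\n", out);
    }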
Reported-by: @laanwj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We might have channel_announcements which have no channel_update: normally
these don't get written into the store until there is one, but if the
store was truncated it can happen. We then get upset on compaction, since
we don't have an in-memory representation of the channel_announcement.
Similarly, we leave the node_announcement pending until after that
channel_announcement, leading to a similar case.
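Sketched with hypothetical types, compaction now tolerates the orphaned records instead of getting upset:

    #include <stdbool.h>
    #include <stdio.h>

    struct chan;    /* the in-memory channel, if we have one */

    static bool keep_on_compaction(const struct chan *c, const char *what)
    {
        if (!c) {
            /* A truncated store can leave a channel_announcement
             * (or pending node_announcement) with no in-memory
             * counterpart: skip it rather than abort. */
            fprintf(stderr, "compact: skipping orphaned %s\n", what);
            return false;
        }
        return true;
    }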
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can't continue, since we've moved the indexes. We'll just crash
anyway, as seen from bugs #2742 and #2743.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We catch node_announcements for nodes where we haven't finished
analyzing the channel_announcement yet (either because we're still
checking UTXO, or in this case, because we're waiting for a channel_update).
But we reference count the pending_node_announce, so if we have
multiple channels pending, we might try to insert it twice. Clear it
so this doesn't happen.
There's a second bug where we continue to catch node_announcements
until *all* the channel_announcements are no longer pending; this is fixed
by removing it from the map.
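Both fixes, sketched (the struct is hypothetical; gossipd's real pending_node_announce has more fields):

    #include <stdlib.h>

    struct pending_node_announce {
        int refcount;               /* one per pending channel */
        unsigned char *msg;         /* the caught node_announcement */
    };

    static void channel_no_longer_pending(struct pending_node_announce *pna)
    {
        if (pna->msg) {
            /* ...insert the node_announcement here... */
            free(pna->msg);
            pna->msg = NULL;        /* fix 1: never insert it twice */
        }
        /* Fix 2: remove pna from the map here, so later pending
         * channel_announcements stop catching node_announcements,
         * even though the refcount keeps pna alive for a while. */
        if (--pna->refcount == 0)
            free(pna);
    }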
Fixes: #2735
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
That was changed to start the response object, which broke the openingd
code once we merged.
Of course, I should have *renamed it* when I changed the semantics!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The compile broke because we were using low-level JSON primitives here
(which, incidentally, would now produce bad JSON, since we can't just
put a raw string inside an object!).
Use json_add_string, which also has the benefit of escaping the
string for us.
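For illustration, this is roughly what that escaping buys us (a standalone sketch handling only quotes and backslashes, not the real json_add_string):

    #include <stdio.h>

    static void add_string_member(FILE *out, const char *field,
                                  const char *s)
    {
        fprintf(out, "\"%s\":\"", field);
        for (; *s; s++) {
            if (*s == '"' || *s == '\\')
                fputc('\\', out);   /* the escaping we'd
                                     * otherwise forget */
            fputc(*s, out);
        }
        fputs("\"", out);
    }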
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Big wiring re-org for funding-continue
In openingd, we move the 'persistent' state (their basepoints,
pubkey, and the minimum_depth requirement for the opening tx) into
the state object. We also look to keep code-reuse between
'continue' and normal 'fundchannel' as high as possible. Both
of these call the same 'fundchannel_reply' at the end.
In opening_control.c, we remap fundchannel_reply so that it is
now aware of the difference between an externally and an internally
funded channel open. It's the same return path, with the difference
that one finishes making and broadcasting the funding transaction,
while the other skips this.
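The shared return path, sketched with a hypothetical flag (not the real funding_req layout):

    #include <stdbool.h>

    struct funding_req {
        bool external;      /* externally funded ('continue')? */
        /* ... */
    };

    static void fundchannel_reply(struct funding_req *fr)
    {
        if (!fr->external) {
            /* normal fundchannel: make and broadcast the
             * funding transaction here */
        }
        /* externally funded: the tx already exists; skip that. */
        /* ...common completion code... */
    }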
Add an RPC method (not working at the moment) called
`fundchannel_continue` that takes as its parameters a
node_id and the txid of a transaction (one that ostensibly has an
output for a channel).
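Intended usage, once it works:

    $ lightning-cli fundchannel_continue <node_id> <txid>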
Some channels won't be opened with a wtx struct, so keep
the total funding amount separate from it; this lets us
show some stats in listpeers.
Note that we're going to need to update/confirm this once
the transaction gets confirmed.
For `fundchannel_cancel` we're going to want
to 'successfully' fail a channel funding operation. This allows
us to report the failure back as an RPC success, instead of
automatically failing the RPC request.
We're going to need this for P2WSH scripts. Pull it out into
a common file and adapt the sanity checks so that they allow
either P2WSH or P2WPKH (previously only P2WPKH scripts were encoded).
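The widened sanity check, sketched for v0 witness programs only (helper name is illustrative): P2WPKH is OP_0 plus a 20-byte push, P2WSH is OP_0 plus a 32-byte push.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool is_v0_witness_program(const uint8_t *script, size_t len)
    {
        /* P2WPKH: 0x00 0x14 <20 bytes>  (22 bytes total)
         * P2WSH:  0x00 0x20 <32 bytes>  (34 bytes total) */
        if (len != 22 && len != 34)
            return false;
        return script[0] == 0x00 && script[1] == len - 2;
    }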
This is an old bug, where a plugin can get called while we're shutting
down (and have freed plugins), but it's triggered more reliably by the
new warning notification hook.
For good measure, we also make a plugin delete itself from the list
when it is freed.
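The self-delete, sketched using ccan's tal destructors (the real struct plugin has many more fields):

    #include <ccan/list/list.h>
    #include <ccan/tal/tal.h>

    struct plugin {
        struct list_node list;      /* linked into the plugins list */
        /* ... */
    };

    /* Destructor: a freed plugin unlinks itself, so shutdown-time
     * notifications can no longer reach it. */
    static void destroy_plugin(struct plugin *p)
    {
        list_del(&p->list);
    }

    /* At creation: tal_add_destructor(p, destroy_plugin); */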
Valgrind error file: valgrind-errors.16763
==16886== Invalid read of size 8
==16886== at 0x422919: plugins_notify (plugin.c:1096)
==16886== by 0x413919: notify_warning (notification.c:61)
==16886== by 0x412BDE: logv (log.c:251)
==16886== by 0x412A98: log_ (log.c:311)
==16886== by 0x4044BE: bcli_finished (bitcoind.c:178)
==16886== by 0x459480: destroy_conn (poll.c:244)
==16886== by 0x459499: destroy_conn_close_fd (poll.c:250)
==16886== by 0x4619E1: notify (tal.c:235)
==16886== by 0x461A7E: del_tree (tal.c:397)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== Address 0x634a578 is 40 bytes inside a block of size 352 free'd
==16886== at 0x4C2EDEB: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16886== by 0x461AFD: del_tree (tal.c:416)
==16886== by 0x461FB7: tal_free (tal.c:481)
==16886== by 0x411E0A: main (lightningd.c:841)
==16886== Block was alloc'd at
==16886== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16886== by 0x4617CE: allocate (tal.c:245)
==16886== by 0x461E4C: tal_alloc_ (tal.c:423)
==16886== by 0x42255E: plugins_new (plugin.c:106)
==16886== by 0x41133D: new_lightningd (lightningd.c:218)
==16886== by 0x411AD4: main (lightningd.c:649)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a pain point with testing: there's a noticeable delay between
"Shutting down" from lightning-cli and being able to restart lightningd.
This fixes that by creating a canned response for this case, which is
simply written out immediately before exit. At this point, the pidfile
has been deleted, the sockets have been closed, and the database
has been closed.
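A sketch of the canned response (id handling, error checking and the exact payload are illustrative):

    #include <stdio.h>
    #include <unistd.h>

    static void send_canned_shutdown(int fd, const char *id)
    {
        char buf[256];
        int len = snprintf(buf, sizeof(buf),
                           "{\"jsonrpc\":\"2.0\",\"id\":%s,"
                           "\"result\":\"Shutting down\"}\n\n", id);
        /* Written out immediately before exit. */
        if (write(fd, buf, len) < 0)
            perror("write");
    }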
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>