There were several gossip breakages in master; bumping the version means all
upgrades get a clean store (not just those upgrading from the stable version).
Fixes: #2719
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we add the transactions while processing the blockchain, before we
have enough context to annotate them correctly (i.e., in the txwatches), we add
them first and then annotate them a posteriori.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We currently allocate utxos off cmd, but the next commit will persist a
wtx beyond the command which created it, breaking that assumption.
In general, a struct member should be owned by the struct itself, and
a tal context should be an explicit arg, not implicit.
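For illustration, the ownership we're after looks roughly like this (struct
and helper names are made up, not the actual wallet code):

    /* Sketch only: the struct owns its members; the caller passes the
     * tal context explicitly instead of allocating off the command. */
    #include <ccan/tal/tal.h>

    struct utxo;                    /* opaque for this sketch */

    struct wtx_sketch {
            struct utxo **utxos;    /* owned by the struct itself */
    };

    static struct wtx_sketch *new_wtx(const tal_t *ctx)
    {
            struct wtx_sketch *wtx = tal(ctx, struct wtx_sketch);
            /* utxos hang off wtx, so they outlive the command and are
             * freed when wtx is freed (or when ctx is). */
            wtx->utxos = tal_arr(wtx, struct utxo *, 0);
            return wtx;
    }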
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
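For reference, the filter test itself is just a range check on the message
timestamp, along these lines (sketch, not the actual subdaemon code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: does a gossip message timestamp fall within the peer's
     * gossip_timestamp_filter (first_timestamp/timestamp_range as in
     * BOLT 7)? */
    static bool timestamp_in_filter(uint32_t first_timestamp,
                                    uint32_t timestamp_range,
                                    uint32_t timestamp)
    {
            /* 64-bit sum avoids overflow when the range reaches UINT32_MAX. */
            uint64_t end = (uint64_t)first_timestamp + timestamp_range;
            return timestamp >= first_timestamp && timestamp < end;
    }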
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).
This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.
This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.
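The shape of that header is roughly as follows (field widths and order here
are illustrative; the canonical layout is defined by the store format):

    #include <stdint.h>

    /* Sketch of the 'struct gossip_hdr' idea: a fixed header in front of
     * every store entry, so a reader can filter on timestamp without
     * parsing the message itself. */
    struct gossip_hdr_sketch {
            uint32_t len;        /* length of the message that follows */
            uint32_t crc;        /* checksum covering timestamp + message */
            uint32_t timestamp;  /* derived for channel_announcement */
    };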
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read it sequentially
and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
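Conceptually the streaming itself becomes a plain sequential walk of the
store, something like this (hypothetical helper names, not the real daemon
loop):

    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch: stream every record from the gossip_store fd to the peer.
     * read_next_record()/send_to_peer() are stand-ins for the real
     * store-reading and peer-write helpers. */
    uint8_t *read_next_record(int store_fd);   /* NULL at end of store */
    void send_to_peer(const uint8_t *msg);

    static void stream_gossip(int store_fd)
    {
            uint8_t *msg;
            while ((msg = read_next_record(store_fd)) != NULL) {
                    send_to_peer(msg);
                    free(msg);
            }
    }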
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They already send *us* gossip messages, so they have to be distinct anyway.
Why make us both do extra work?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ask gossipd for the channel_update for the outgoing channel; any other
messages it sends us get queued for later processing.
But this is overzealous: we can shunt those messages to the peer while
we're waiting. This fixes a nasty case where we have to handle
WIRE_GOSSIPD_NEW_STORE_FD messages by queuing the fd for later.
This then means that WIRE_GOSSIPD_NEW_STORE_FD can be handled
internally inside handle_gossip_msg(), since it's always dealt with
the same way, simplifying all callers.
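The resulting shape is roughly this (helper names are stand-ins, not the real
read_peer_msg code):

    #include <stdbool.h>
    #include <stdint.h>

    struct peer;                                   /* opaque for the sketch */
    bool is_new_store_fd_msg(const uint8_t *msg);  /* stand-in */
    void take_new_store_fd(struct peer *peer);     /* stand-in */
    void queue_msg_to_peer(struct peer *peer, const uint8_t *msg); /* stand-in */

    /* Sketch: the WIRE_GOSSIPD_NEW_STORE_FD case is absorbed here, so
     * callers just hand over whatever gossipd sent. */
    static void handle_gossip_msg_sketch(struct peer *peer, const uint8_t *msg)
    {
            if (is_new_store_fd_msg(msg)) {
                    /* Swap in the fresh gossip_store fd right here... */
                    take_new_store_fd(peer);
                    return;
            }
            /* ...and shunt any other gossip straight to the peer. */
            queue_msg_to_peer(peer, msg);
    }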
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted entries.
Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast. Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.
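In other words, deletion just flips a flag bit in the length word; the record
and its checksum stay intact. Roughly (assuming a 32-bit length field, as a
sketch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: the high bit of the length marks a deleted record; masking
     * it off recovers the real length, so the checksum over the record
     * can still be verified after deletion. */
    #define DELETED_BIT 0x80000000U

    static bool record_is_deleted(uint32_t len_field)
    {
            return (len_field & DELETED_BIT) != 0;
    }

    static uint32_t record_len(uint32_t len_field)
    {
            return len_field & ~DELETED_BIT;
    }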
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we might soon be changing the payload, it is a good idea not just to
expose the v0 payload but also the raw payload for the plugin to
interpret. This might also include payloads that `lightningd` itself cannot
understand, but the plugin might.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Corné Plooy <@bitonic-cjp>
Instead of reading the store ourselves, we can just send them an
offset. This saves gossipd a lot of work, putting it where it belongs
(in the daemon responsible for the specific peer).
MCP bench results:
store_load_msec:28509-31001(29206.6+/-9.4e+02)
vsz_kb:580004-580016(580006+/-4.8)
store_rewrite_sec:11.640000-12.730000(11.908+/-0.41)
listnodes_sec:1.790000-1.880000(1.83+/-0.032)
listchannels_sec:21.180000-21.950000(21.476+/-0.27)
routing_sec:2.210000-11.160000(7.126+/-3.1)
peer_write_all_sec:36.270000-41.200000(38.168+/-1.9)
Significant savings in streaming gossip:
-peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
+peer_write_all_sec:35.780000-37.980000(36.43+/-0.81)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can be spammy, which is good for tests, but bad for our simple message queue.
This avoids breaking our tests but also avoids the worst case (1M entries and counting!)
for gossip status messages in the case where we're syncing a large peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes #2518
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Changelog-fixed: `minconf` no longer gets wrapped around for large values, which was causing funds with insufficient confirmations to be selected.
This fixes block parsing on testnet; specifically, non-standard tx versions.
We hit a type bug in libwally (wally_get_secp_context()) which I had to
work around for the moment, and the updated libsecp adds an optional hash
function arg to the ECDH function.
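For reference, the new secp256k1_ecdh() signature takes the hash function as
an argument; passing NULL keeps the default (SHA256 of the compressed shared
point), e.g.:

    #include <secp256k1.h>
    #include <secp256k1_ecdh.h>

    /* Sketch: ECDH with the new optional hash-function argument;
     * NULL, NULL selects the default behaviour. */
    static int example_ecdh(const secp256k1_context *ctx,
                            unsigned char output[32],
                            const secp256k1_pubkey *point,
                            const unsigned char privkey[32])
    {
            return secp256k1_ecdh(ctx, output, point, privkey, NULL, NULL);
    }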
Fixes: #2563
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
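The new type is essentially just the 33 compressed-encoding bytes carried
around as-is, roughly:

    #include <stdint.h>

    /* Sketch: a node id is simply the SEC1 compressed pubkey bytes.
     * No parsed secp256k1 state, so no pack/unpack on every use; we
     * only convert to a full pubkey where validity actually matters
     * (db load, JSON input, signature checks). */
    struct node_id_sketch {
            uint8_t k[33];   /* 0x02/0x03 prefix + 32-byte x coordinate */
    };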
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded; as Pieter Wuille corrected
me, it's the encoding documented in SEC 1.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is what all of this has been working towards: ripping out the handwoven
transaction handling. By removing the custom parsing we can finally switch
over to using `wally_tx` as the sole representation of transactions in
memory. The commit is a bit larger, but it's mostly removing setters and old
references to the input and output fields.
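Code that used to walk the old tx->input/tx->output arrays now reads straight
off the embedded wally_tx, e.g. (sketch):

    #include <stddef.h>
    #include <stdint.h>
    #include <wally_transaction.h>

    /* Sketch: summing output values directly from the wally_tx instead
     * of the old hand-rolled output array. */
    static uint64_t total_out_sat(const struct wally_tx *wtx)
    {
            uint64_t total = 0;
            for (size_t i = 0; i < wtx->num_outputs; i++)
                    total += wtx->outputs[i].satoshi;
            return total;
    }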
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the second-to-last time I'm touching this file; we just need to remove
the `tx->input` and `tx->output` swapping once those fields are removed.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The `wally_tx_input`s do not keep track of their input value, which means we
need to track the values ourselves if we want to sign these transactions at a later
point in time.
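So the surrounding transaction wrapper remembers the input amounts alongside
the wally_tx, along these lines (field names illustrative):

    #include <stdint.h>
    #include <wally_transaction.h>

    /* Sketch: wally_tx_input has no value field, so we keep each input's
     * satoshi amount ourselves, indexed like wtx->inputs[], for use when
     * signing later. */
    struct tx_with_amounts {
            struct wally_tx *wtx;
            uint64_t *input_sat;   /* input_sat[i] pairs with wtx->inputs[i] */
    };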
Signed-off-by: Christian Decker <decker.christian@gmail.com>