It would always return bech32; fix that, and don't bother printing
it if they use the (new) 'all' parameter.
This API was introduced in 3e67c09d5e, which means it wasn't in a
release, so no CHANGELOG entry is necessary.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(Or, if we crashed before we got to write out the channel_update).
It's a corner case, but one reported by @darosior and reproduced
on my test node (both with bad gossip_store due to previous iterations
of this patchset!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Triggered by a previous variant of this PR, but it's a good idea to simply
discard the store in general when we get a duplicate entry.
We crash trying to delete old ones, which means writing to the store.
But they should have already been deleted.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This clarifies things a fair bit: we simply add and remove from the
gossip_store directly.
Before this series: (--disable-developer, -Og)
store_load_msec:20669-20902(20822.2+/-82)
vsz_kb:439704-439712(439706+/-3.2)
listnodes_sec:0.890000-1.000000(0.92+/-0.04)
listchannels_sec:11.960000-13.380000(12.576+/-0.49)
routing_sec:3.070000-5.970000(4.814+/-1.2)
peer_write_all_sec:28.490000-30.580000(29.532+/-0.78)
After: (--disable-developer, -Og)
store_load_msec:19722-20124(19921.6+/-1.4e+02)
vsz_kb:288320
listnodes_sec:0.860000-0.980000(0.912+/-0.056)
listchannels_sec:10.790000-12.260000(11.65+/-0.5)
routing_sec:2.540000-4.950000(4.262+/-0.88)
peer_write_all_sec:17.570000-19.500000(18.048+/-0.73)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a problem: if we get halfway through writing the compacted store
and run out of disk space, we've already changed half the indexes.
This changes it so we do nothing until writing is finished: then we
iterate through and update indexes. It also weans us off broadcast
ordering, which we can now eliminate.
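Roughly, the compaction now looks like this (a sketch with made-up names,
not the actual gossipd code): copy everything into the new file first,
remember where each entry landed, and only rewrite the in-memory indexes
once the copy has fully succeeded.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical types/callbacks, for illustration only. */
    struct entry {
        uint64_t index;     /* offset of this entry in the store file */
    };

    static bool compact_store(struct entry **entries, size_t n,
                              uint64_t (*append_to_new)(const struct entry *),
                              uint64_t *newindex)
    {
        /* Phase 1: write the compacted file, but don't touch the
         * in-memory indexes.  If we run out of disk space here, the
         * old store and its indexes are still consistent. */
        for (size_t i = 0; i < n; i++) {
            newindex[i] = append_to_new(entries[i]);
            if (newindex[i] == 0)
                return false;
        }
        /* Phase 2: only now update every index to point into the
         * new file. */
        for (size_t i = 0; i < n; i++)
            entries[i]->index = newindex[i];
        return true;
    }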
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We didn't count some records before, so that we could compare the two
counters. This is much simpler, and avoids reliance on the broadcast
state (bs).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
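The filter test itself is the easy part; a sketch of the BOLT 7 rule the
per-peer daemon has to apply (spec field names, not necessarily the
code's variables):

    #include <stdbool.h>
    #include <stdint.h>

    /* gossip_timestamp_filter: relay a message iff its timestamp lies in
     * [first_timestamp, first_timestamp + timestamp_range).  Subtracting
     * first avoids overflow when the peer asks for "everything". */
    static bool timestamp_in_filter(uint32_t timestamp,
                                    uint32_t first_timestamp,
                                    uint32_t timestamp_range)
    {
        if (timestamp < first_timestamp)
            return false;
        return timestamp - first_timestamp < timestamp_range;
    }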
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need the timestamp for channel_announcement, but this is simplified
because MCP (the million-channels project) always follows the
channel_announcement with a channel_update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).
This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.
This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.
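For orientation, the record header is along these lines (a sketch: field
widths and names are illustrative, not the exact definition):

    #include <stdint.h>

    /* Each gossip_store record: header followed by the raw gossip message.
     * All fields are stored big-endian on disk. */
    struct gossip_hdr {
        uint32_t len;       /* length of the message that follows; high
                             * bit(s) reserved for flags such as "deleted" */
        uint32_t crc;       /* checksum over the message contents */
        uint32_t timestamp; /* broadcast timestamp; for channel_announcement
                             * this is derived from its first channel_update */
    };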
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete
entries from the store, we can have the per-peer daemons simply read it
sequentially and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
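A sketch of what the per-peer streaming loop boils down to (illustrative
names and a simplified two-word header; real code also tracks its offset
so it can pick up newly appended records):

    #include <arpa/inet.h>          /* ntohl */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DELETED_BIT 0x80000000U /* assumed "deleted" flag in the length */

    static void stream_gossip(FILE *store,
                              void (*send_to_peer)(const uint8_t *, uint32_t))
    {
        uint32_t hdr[2];            /* big-endian length (with flags), crc */

        while (fread(hdr, sizeof(hdr), 1, store) == 1) {
            uint32_t len = ntohl(hdr[0]);
            bool deleted = (len & DELETED_BIT) != 0;
            uint8_t *msg;

            len &= ~DELETED_BIT;
            msg = malloc(len);
            if (!msg || fread(msg, len, 1, store) != 1) {
                free(msg);
                break;              /* truncated store */
            }
            if (!deleted)
                send_to_peer(msg, len);
            free(msg);
        }
    }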
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to generate this in the caller, then save it in case we needed
to retry. We're about to change the message we send to lightningd, so
we'll need to regenerate it every time; just hand all the extra args
into peer_connected() and we can generate the `connect_peer_connected`
msg there.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a corner case where otherwise a reader could see the header and
not the body of a message. The reader could handle that in various ways,
but the simplest (and most efficient) is to avoid it happening at all.
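One way to get that guarantee (a sketch, not the actual store code) is to
hand the kernel the header and body in a single call, so they hit the
file together:

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Append header and body with one writev() to an append-only fd, so a
     * reader scanning the file never sees a header without its body. */
    static bool append_record(int fd, const void *hdr, size_t hdrlen,
                              const void *msg, size_t msglen)
    {
        struct iovec iov[2] = {
            { .iov_base = (void *)hdr, .iov_len = hdrlen },
            { .iov_base = (void *)msg, .iov_len = msglen },
        };

        return writev(fd, iov, 2) == (ssize_t)(hdrlen + msglen);
    }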
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They already send *us* gossip messages, so they have to be distinct anyway.
Why make us both do extra work?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ask gossipd for the channel_update for the outgoing channel; any other
messages it sends us get queued for later processing.
But this is overzealous: we can shunt those msgs to the peer while
we're waiting. This fixes a nasty case where we have to handle
WIRE_GOSSIPD_NEW_STORE_FD messages by queuing the fd for later.
This then means that WIRE_GOSSIPD_NEW_STORE_FD can be handled
internally inside handle_gossip_msg(), since it's always dealt with
the same way, simplifying all callers.
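The shunting itself is simple; in outline (hypothetical helper names, not
the actual daemon code), the waiting loop just forwards anything that
isn't the reply it wants:

    #include <stdbool.h>

    struct msg;                     /* opaque for this sketch */

    /* Sketch: while waiting for gossipd's reply about the outgoing
     * channel_update, shunt every other gossip message straight to the
     * peer instead of queuing it. */
    static struct msg *await_reply(struct msg *(*read_from_gossipd)(void),
                                   bool (*is_awaited_reply)(const struct msg *),
                                   void (*forward_to_peer)(struct msg *))
    {
        for (;;) {
            struct msg *m = read_from_gossipd();
            if (!m || is_awaited_reply(m))
                return m;
            forward_to_peer(m);
        }
    }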
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted entries.
Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast. Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.
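In sketch form (illustrative constant and offsets, not the real code),
deleting a record is a single in-place rewrite of its length word; since
the checksum is assumed to cover only the message body, it still verifies
afterwards:

    #include <arpa/inet.h>          /* htonl, ntohl */
    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define DELETED_BIT 0x80000000U /* assumed: high bit of the length word */

    /* Mark the record whose header starts at record_off as deleted by
     * setting the high bit of its (big-endian) length field in place. */
    static bool mark_deleted(int fd, off_t record_off, uint32_t be_len)
    {
        uint32_t be_deleted = htonl(ntohl(be_len) | DELETED_BIT);

        return pwrite(fd, &be_deleted, sizeof(be_deleted), record_off)
                == sizeof(be_deleted);
    }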
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to bump version again, and the code to upgrade it was
quite hairy (and buggy!). It's not worthwhile for such a
poorly-tested path: instead, I will just add code to limit how much
incoming gossip we get, to avoid flooding when we upgrade.
I also use a modern gossip_store version in our test_gossip_store_load
test, instead of relying on the upgrade path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're really gossipd-internal, and we don't want per-peer daemons
to confuse them with normal updates.
I don't bump the gossip_store version; that's coming with another update
anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we have more or less given up on the separation between response
callback and deserialization, we can also just have the individual parts
returned.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Rusty Russell <@rustyrussell>
It disables the error when attempting a state transition from
`RCVD_ADD_ACK_REVOCATION` to `RCVD_ADD_ACK_REVOCATION`, which was done before
getting to this point.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we might soon be changing the payload, it is a good idea to expose
not just the v0 payload, but also the raw payload for the plugin to
interpret. This might also include payloads that `lightningd` itself cannot
understand, but the plugin might.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Corné Plooy <@bitonic-cjp>
This is a rather simple hook that allows a plugin to take control over
HTLCs that were accepted, but weren't resolved as part of an invoice
or forwarded to the next hop yet.
The goal is to allow plugins to terminate a route early, perform
intermediate checks before the payment is accepted (for example, check
inventory or service delivery before accepting, in order to avoid a
refund), or handle an onion differently if it has a different realm
(cross-chain atomic swaps).
This doesn't implement serializing the payload or deserializing it,
instead just passes the full context along. The details for
serializing and deserializing will be implemented in a future commit.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Now that we handle node_announcements properly, we have a failure case
where we try to move them when a channel is deleted while loading the store.
We're going to remove this soon, in favor of in-place delete, so
work around this for now to avoid an assert() when we try to write to
the store while loading.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we first receive a channel_update, we write both the
channel_announcement and that channel_update to the store: we need
that first update so we can set the channel_announcement timestamp.
However, the channel_update can be replaced later. This means we can
have a channel_announcement, a node_announcement which relies on it, then
the channel_update later.
So move the "this applies to a pending announcement" check lower, where
gossip_store can use it too. This has the nice side-effect of avoiding
one lookup of the node id.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, we should have a channel_update so we actually do some compaction!
(Reported-by @SimonVrouwe). But we should also handle the cases where:
1. A channel_announcement is *not* directly followed by a
channel_update (happens when the channel_update is replaced).
2. A node_announcement predates a channel_update for the peer
(again, can happen once a channel_update is replaced).
3. A local/private channel_creation is not directly followed by an
update.
In addition, we might as well check that we can *load* such a store,
before compaction.
This checks the corner cases which occur in real gossip stores.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's possible that it hasn't got the node_announcement messages;
it will still list the nodes, however (the channel_announcement tells
it the nodes exist). Check for the alias field instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically, any "Bad" message from gossipd is something we should look
at. This covers failures loading the gossip_store, too!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our "-O3" CI check wasn't doing what I thought. But building with -flto
is hard with older gccs, so remove that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>