This makes it clear we're dealing with a message that is a wrapped error
reply (needing unwrap_onionreply), not an already-unwrapped one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us do more flexible filtering in the next patch. But it also
keeps some weird logic out of gossipd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We've been sending peers errors for invalid replies; instead, this works
around the problem.
Changelog-Added: Workaround LND's reply_channel_range issues instead of sending error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is ignored in subdaemons which are per-peer, but very useful for
multi-peer daemons like connectd and gossipd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It really, really doesn't matter. But we were dramatically reducing
our view of the network:
In my gossip_store (mainnet):
channel_announcement: 30349
channel_update: 55119
node_announcement: 1783
Changelog-Fixed: No longer discard most node_announcements (fixes #3194)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The flat feature PR changes the rules so these are OK to propagate.
That makes sense: unsupported features mean there's something
unsupported about the *node* or *channel*, not the msg itself
(for that we'd use a different message type).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This prevents a gratuitous lookup if we get a late channel_announce,
but even better, it suppresses the "bad gossip" messages in case of
a late channel_update, which have plagued Travis (especially since we
got aggressive in pushing our own updates).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a better fix than doing it manually, which turned out
to do it in the wrong order (node_announcement followed by
channel_announcement) anyway.
Should fix many "Bad gossip" messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I was wondering why TAGS was missing some functions, and finally
tracked it down: PRINTF_FMT() confuses etags if it's at the start
of a function, causing it to ignore the rest of the file.
So we put PRINTF_FMT at the end, but that doesn't work for
*definitions*, only *declarations*. So we remove it from definitions
and add gratuitous declarations in the few static places.
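The resulting pattern looks like this (a standalone sketch; status_fmt
is a stand-in name, and the enum is trimmed for illustration):

    #include <stdarg.h>
    #include <stdio.h>
    #include <ccan/compiler/compiler.h>  /* for PRINTF_FMT */

    enum log_level { LOG_DBG, LOG_INFORM };

    /* Declaration: PRINTF_FMT trails, where etags doesn't trip over it. */
    static void status_fmt(enum log_level level, const char *fmt, ...)
            PRINTF_FMT(2, 3);

    /* Definition: bare, since trailing attributes aren't allowed here. */
    static void status_fmt(enum log_level level, const char *fmt, ...)
    {
            va_list ap;
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
    }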
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Asking for the last few blocks was logical, but my node is missing
most gossip in practice.
For the moment, simply ask a peer for every channel it knows, once
we're started up.
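In rough shape (a sketch; query_channel_range here is a stand-in for
gossipd's actual sending helper):

    #include <stdint.h>

    struct peer;                          /* opaque stand-in */
    void query_channel_range(struct peer *peer,
                             uint32_t first_blocknum,
                             uint32_t number_of_blocks);  /* hypothetical */

    /* Startup full sync: request every block the peer could know about. */
    static void probe_all_channels(struct peer *peer)
    {
            query_channel_range(peer, 0, UINT32_MAX);
    }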
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When probing, there's no point probing for blocks from before lightning
became cool. Current logic means we often probe below block 500,000, and
think things are OK because there are no short_channel_ids.
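So clamp the probe's starting block (sketch; the exact cutoff constant
is illustrative, not the one used):

    #include <stdint.h>

    /* No mainnet channels existed before roughly this height (early
     * 2018), so probing below it proves nothing. */
    #define FIRST_CHANNEL_BLOCKHEIGHT 504500

    static uint32_t clamp_probe_start(uint32_t start)
    {
            return start < FIRST_CHANNEL_BLOCKHEIGHT
                   ? FIRST_CHANNEL_BLOCKHEIGHT : start;
    }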
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always ended up sending an empty query when we had stale scids!
And it turns out we consider such a query invalid:

    Bad query_short_channel_ids query_flags 010506226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f000100010100
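The fix boils down to a guard like this (sketch; tal_count is ccan's
real helper, the rest is simplified):

    #include <ccan/tal/tal.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* scids as raw tal-allocated u64s for the sketch (real code uses
     * struct short_channel_id). */
    static bool worth_querying(const uint64_t *scids)
    {
            /* An empty query_short_channel_ids is invalid: if
             * filtering left nothing, send nothing. */
            return tal_count(scids) != 0;
    }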
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only chose 3 peers to gossip with us (down from 8 last release).
There's no justification for this number, or reason to believe that
it is sufficient to keep us in sync.
Be more conservative for now; we can always decrease it later once
we have more data.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Polling the last 32 nodes is fairly useless in practice, since they tend
to be recently-added ones; it won't detect long-forgotten nodes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we got a channel_update while we were still verifying the
channel_announcement, we didn't set the peer pointer, so the peer didn't
get credit. As a result, the
seeker tended to think we were done gossiping sooner than we were.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a report of a 0.7.2 user whose node hadn't appeared on 1ml. Their
node_announcement wasn't visible to my node, either.
I suspect this is a consequence of recent versions reducing the amount of
gossip they send, as well as large nodes increasingly turning off gossip
altogether from some peers (as we do). We should ignore timestamp filters
for our own channels: the easiest way to do this is to push them out
directly from gossipd (other messages are sent via the store).
We change channeld to wrap local channel_announcements: previously we
just handed them to gossipd as we would any other gossip message received
from our peer. Now gossipd knows to push them out, as they're local.
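Shape of the push (a sketch; field and helper names are illustrative,
not gossipd's actual ones):

    #include <ccan/list/list.h>

    struct peer {
            struct list_node list;
            /* ... */
    };

    struct daemon {
            struct list_head peers;
            /* ... */
    };

    void queue_peer_msg(struct peer *peer, const unsigned char *msg); /* stand-in */

    /* Local gossip goes straight to every peer, bypassing the store
     * (and therefore the peer's timestamp filter). */
    static void push_local_gossip(struct daemon *daemon,
                                  const unsigned char *msg)
    {
            struct peer *peer;

            list_for_each(&daemon->peers, peer, list)
                    queue_peer_msg(peer, msg);
    }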
This interferes with the logic in tests/test_misc.py::test_htlc_send_timeout
which expects the node_announcement message last, so we generalize
that too.
[ Thanks to @trueptolmy for bugfix! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of next release) will offer global features, and also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
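The fold itself is just a bitwise OR with the two sets aligned at their
least-significant end (a standalone sketch, not the actual helper we
use):

    #include <stddef.h>
    #include <stdlib.h>

    /* BOLT feature bitfields are big-endian byte arrays: bit 0 is the
     * bottom bit of the *last* byte, so shorter sets align at the tail. */
    static unsigned char *features_fold(const unsigned char *global, size_t glen,
                                        const unsigned char *local, size_t llen)
    {
            size_t len = glen > llen ? glen : llen;
            unsigned char *out = calloc(len ? len : 1, 1);

            for (size_t i = 0; i < glen; i++)
                    out[len - glen + i] |= global[i];
            for (size_t i = 0; i < llen; i++)
                    out[len - llen + i] |= local[i];
            return out;
    }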
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I thought LND had a bug, but it turns out it doesn't like out-of-order
short_channel_ids: in fact, the spec says they have to be in order!
This means we use uintmap instead of a htable for unknown_scids and
stale_scids so they're nicely ordered.
But our nodes-missing-announcements probe is harder, since the scids it
gathers can also contain duplicates: we switch it to iterate through
channels rather than nodes.
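For the record, ccan's uintmap iterates keys in ascending numeric order,
which is exactly the order the spec wants (sketch with placeholder
values):

    #include <ccan/intmap/intmap.h>
    #include <inttypes.h>
    #include <stdio.h>

    typedef UINTMAP(char *) scid_map_t;

    /* Iteration yields scids lowest-first, so queries built from this
     * map are already in spec order. */
    static void walk_scids_in_order(scid_map_t *map)
    {
            uint64_t scid;

            for (char *v = uintmap_first(map, &scid);
                 v != NULL;
                 v = uintmap_after(map, &scid))
                    printf("scid %"PRIu64"\n", scid);
    }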
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weren't supposed to do any gossiping until we were synced (and thus
knew blockheight), but our seeker_check() didn't wait for it! Move all
the waiting into seeker.c, so it can handle it consistently.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
On testing, I found a node which would hang up every time we asked it
for query_short_channel_ids (despite it offering features 0x81, meaning
it should handle this message).
Then it would reconnect, and we'd choose it again as our
PROBING_NANNOUNCES peer!
Instead, leave finding another peer to the once-a-minute
seeker_check() function.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
By combining set_state() with selected_peer() we can give a single log
line describing what we're asking for, from whom.
We also add more verbosity to a few key areas, such as gossip rotation
and when gossipd tells a peer to send an error. And move a comment which
was above the wrong function (due to rebase?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It usually means we're missing something, but there's no way to ask what.
Simply start a broad scid probe.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This should give more reliable results, though it risks us getting
suckered into always consulting the same peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We assume that the time for gossip propagation is < 10 minutes, so by
going back that far from the last gossip we won't miss anything.
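So the filter start is simply (sketch):

    #include <stdint.h>

    #define GOSSIP_SLACK 600  /* 10 minutes, our assumed propagation bound */

    /* Start the timestamp filter slightly before the last gossip we
     * saw, so slow-propagating messages are still covered. */
    static uint32_t filter_start(uint32_t last_gossip_timestamp)
    {
            return last_gossip_timestamp < GOSSIP_SLACK
                   ? 0 : last_gossip_timestamp - GOSSIP_SLACK;
    }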
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's simple: if we wouldn't accept the timestamp we see, don't put
the channel in the stale_scid_map.
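i.e. a pre-check along these lines (sketch; the acceptance window shown
is an assumption, not the exact one routing uses):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed window: prune-age limit of two weeks, one day of
     * tolerated future clock skew. */
    static bool timestamp_acceptable(uint32_t now, uint32_t timestamp)
    {
            if (now > 1209600 && timestamp < now - 1209600)
                    return false;  /* older than two weeks: we'd prune it */
            if (timestamp > now + 86400)
                    return false;  /* too far in the future: we'd reject it */
            return true;
    }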
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Just try to choose another peer. Under Travis, this causes many failures due
to slowness (they only get 10 seconds in -dev mode).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We eliminate the "need peer" states and instead check if the
random_peer_softref has been cleared.
We can also unify our restart handlers for all these cases; even the
probe_scids case, by giving gossip credit for the scids as they come
in (at a discount, since scids are 8 bytes vs the ~200 bytes for
normal gossip messages).
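The discount is just that byte ratio (sketch):

    #include <stddef.h>

    /* An scid is 8 bytes vs ~200 bytes for a typical gossip message,
     * so credit incoming scids at 8/200ths of a message each. */
    static size_t scid_gossip_credit(size_t num_scids)
    {
            return num_scids * 8 / 200;
    }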
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Build up a map of short_channel_ids which we have old info for (only
if peer supports gossip_query_ex).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This asks the peer to append timestamps or checksums: if it has
gossip_query_ex support it will; otherwise the request is ignored.
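Concretely, that's two bits in query_channel_range's query_option_flags
per the extended-queries spec (constant names here are illustrative):

    #include <stdint.h>

    /* gossip_query_ex: query_option_flags bits in query_channel_range. */
    #define QUERY_ADD_TIMESTAMPS 0x1
    #define QUERY_ADD_CHECKSUMS  0x2

    /* Ask for both; a peer without gossip_query_ex simply ignores it. */
    static const uint8_t query_option_flags =
            QUERY_ADD_TIMESTAMPS | QUERY_ADD_CHECKSUMS;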
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>