This is a best-effort attempt to skip connection attempts if we detect a broken
ISP resolver, i.e. a resolver that replaces NXDOMAIN replies with a dummy
response. It is best effort in that it only detects a single fixed dummy reply,
it checks only on startup, and it won't notice if we switch networks. That
should be good enough for most cases, and in the worst case it results in a
connection attempt that does not complete.
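A minimal sketch of such a startup probe, with hypothetical names
(`check_resolver`, the probe hostname) rather than the daemon's actual wiring:
resolve a name that cannot exist; NXDOMAIN means the resolver is fine, while an
answer means it is broken, so we remember the dummy address and can later treat
lookups returning that same address as NXDOMAIN.
```
#include <netdb.h>
#include <stdbool.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* The single fixed dummy address a broken resolver handed back, if any. */
static struct sockaddr_storage dummy_reply;
static socklen_t dummy_reply_len;
static bool resolver_is_broken;

static void check_resolver(void)
{
	struct addrinfo *ai, hints;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;
	hints.ai_socktype = SOCK_STREAM;

	/* A well-behaved resolver fails this lookup with NXDOMAIN. */
	if (getaddrinfo("nxdomain-test.invalid", "80", &hints, &ai) != 0)
		return;

	/* Broken: remember the dummy so later lookups returning the
	 * same address can be treated as NXDOMAIN. */
	memcpy(&dummy_reply, ai->ai_addr, ai->ai_addrlen);
	dummy_reply_len = ai->ai_addrlen;
	resolver_is_broken = true;
	freeaddrinfo(ai);
}
```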
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Glenn Willen <@gwillen>
A cut & paste error meant we sometimes sent NULL:
```
2018-06-15T00:13:51.908Z lightningd(23653): lightning_closingd-03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f chan #436: Gossipd gave us bad send_gossip message 0bc80000
```
Fixes: #1581
Reported-by: @Xian001
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In this case, local and remote are *both* NULL, so if someone tries to
send a packet with take(), we need to free it.
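A minimal sketch of that path, with hypothetical type and function names;
`taken()` and `tal_free()` are the real ccan primitives:
```
#include <ccan/short_types/short_types.h>
#include <ccan/take/take.h>
#include <ccan/tal/tal.h>

/* Hypothetical stand-ins for the daemon's own types. */
struct daemon_conn;
struct peer {
	struct daemon_conn *local, *remote;
};

static void queue_peer_msg(struct peer *peer, const u8 *msg)
{
	/* Nowhere to queue the message... */
	if (!peer->local && !peer->remote) {
		/* ...so if the caller used take(), ownership is ours
		 * and we must free msg rather than leak it. */
		if (taken(msg))
			tal_free(msg);
		return;
	}
	/* ...otherwise queue on whichever side exists. */
}
```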
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I think this is what is causing #1536: getting disconnected causes gossipd to
attempt to reach the peer again, unconditionally setting the flag that tells
the master. At the same time the master also issues a reaching command (which
is allowed, since it is its first), but it then clashes on the already-set
flag. Setting this flag only when the master actually needs to be told should
fix this.
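A sketch of the shape of the fix, all names assumed: the flag is set only from
the parameter the master's own request path passes in, never unconditionally
on gossipd's internal reconnects.
```
#include <stdbool.h>

/* Hypothetical reach-attempt state. */
struct reaching {
	/* Is the master waiting for a reply to this attempt? */
	bool master_needs_response;
};

static void try_reach_peer(struct reaching *reach,
			   bool master_needs_response)
{
	/* Internal reconnects pass false; only the master's own reach
	 * command passes true, so the two paths no longer clash. */
	reach->master_needs_response = master_needs_response;
	/* ...initiate the connection attempt... */
}
```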
Signed-off-by: Christian Decker <decker.christian@gmail.com>
A failed compaction shouldn't be fatal, but neither should we attempt one on
every subsequent gossip message once the first has failed.
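A sketch of the intended behavior, with assumed field and function names:
```
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical bookkeeping on the store. */
struct gossip_store {
	size_t count, compact_threshold;
	bool disable_compaction;
};

bool gossip_store_compact(struct gossip_store *gs);

static void maybe_compact(struct gossip_store *gs)
{
	if (gs->disable_compaction || gs->count < gs->compact_threshold)
		return;
	/* One failure disables further attempts: neither fatal, nor
	 * retried on every subsequent gossip message. */
	if (!gossip_store_compact(gs))
		gs->disable_compaction = true;
}
```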
Signed-off-by: Christian Decker <decker.christian@gmail.com>
`gossip_store_add` is the entry point for messages from the network, so it
should do the bookkeeping and disable the store on failure. `gossip_store_append`
is the shared function that wraps messages and writes them to the given file.
It is shared between the from-network path and the compaction path, so it
operates on `fd`s rather than the `gossip_store` instance directly.
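A sketch of the resulting split, signatures assumed:
```
#include <ccan/short_types/short_types.h>
#include <stdbool.h>

/* Hypothetical shape of the store. */
struct gossip_store {
	int fd;
	bool disabled;
};

/* Shared writer: wraps one message and writes it to the given fd;
 * used by both the from-network path and the compaction path. */
static bool gossip_store_append(int fd, const u8 *msg);

/* Network entry point: does the bookkeeping and disables the store
 * when a write fails. */
void gossip_store_add(struct gossip_store *gs, const u8 *msg)
{
	if (gs->disabled)
		return;
	if (!gossip_store_append(gs->fd, msg))
		gs->disabled = true;
}
```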
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We write both when messages come from outside and when compacting, so we
extract the write functionality for use in both cases.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This makes the exposed interface much smaller and cleaner, and will allow us
to simply replay gossip messages from the broadcast.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Two cases (sketched below):
1. Node no longer has any public channels: remove node_announcement.
2. Node's node_announcement now precedes all the channel_announcements:
   move node_announcement to the end.
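A sketch of both cases, with minimal stand-in structures and assumed broadcast
primitives:
```
#include <ccan/short_types/short_types.h>
#include <stddef.h>

/* Hypothetical, minimal stand-ins for the routing structures. */
struct broadcast_state;
struct node {
	u8 *node_announcement;		/* raw message, NULL if none */
	u64 announce_idx;		/* its position in the broadcast */
	size_t num_public_channels;
	u64 first_channel_idx;		/* earliest channel_announcement */
};

void broadcast_del(struct broadcast_state *bstate, u64 idx);
u64 broadcast_add(struct broadcast_state *bstate, const u8 *msg);

static void maybe_fix_node_announcement(struct broadcast_state *bstate,
					struct node *node)
{
	if (!node->node_announcement)
		return;

	/* Case 1: no public channels left, so the node_announcement
	 * goes too. */
	if (node->num_public_channels == 0) {
		broadcast_del(bstate, node->announce_idx);
		return;
	}

	/* Case 2: the node_announcement now precedes every
	 * channel_announcement; re-queue it at the end so peers
	 * always see a channel for the node first. */
	if (node->announce_idx < node->first_channel_idx) {
		broadcast_del(bstate, node->announce_idx);
		node->announce_idx = broadcast_add(bstate,
						   node->node_announcement);
	}
}
```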
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us detect whether a node_announce precedes a channel_announce once
we delete the node announcement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We *accept* a node_announce if we have a channel_announce, but we
can't queue it until we queue the channel_announce, which we only do
once we have received a channel_update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we currently only (ab)use it to send everything, we need a way to
generate boutique queries for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a function called 'wake_pkt_out' which is really 'start
gossiping', so rename it to 'wake_gossip_out'.
In addition, it's fired both on a timer, and in response to our first
gossip_timestamp_filter, which leads to very confusing (though,
technically, not incorrect) behavior.
Keep a single timer at all times, which now doubles as the flag
indicating we're syncing right now. Set it once we're done syncing
gossip.
Technically this means we go from once-every-60-seconds to
quiet-for-60-seconds-between-gossip, but that's OK.
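A sketch of the single-timer scheme, with assumed names and a
`new_reltimer()`-style helper:
```
#include <ccan/tal/tal.h>
#include <ccan/time/time.h>

/* Hypothetical stand-ins. */
struct timers;
struct oneshot;
struct daemon { struct timers *timers; };
struct peer {
	struct daemon *daemon;
	/* Non-NULL while idle between syncs; NULL while syncing. */
	struct oneshot *gossip_timer;
};

/* Assumed helper, à la the daemon's new_reltimer(). */
struct oneshot *new_reltimer(struct timers *timers, const void *ctx,
			     struct timerel expire,
			     void (*cb)(struct peer *), struct peer *peer);

static void gossip_sync_done(struct peer *peer);

static void wake_gossip_out(struct peer *peer)
{
	/* The timer fired (or the first gossip_timestamp_filter
	 * arrived): drop the timer to mark that we're syncing now. */
	peer->gossip_timer = tal_free(peer->gossip_timer);
	/* ...flush gossip, then gossip_sync_done() when finished... */
}

static void gossip_sync_done(struct peer *peer)
{
	/* Quiet for 60 seconds *between* syncs, then wake again. */
	peer->gossip_timer = new_reltimer(peer->daemon->timers, peer,
					  time_from_sec(60),
					  wake_gossip_out, peer);
}
```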
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And initialize the filter (to "never") when we negotiate LOCAL_GOSSIP_QUERIES,
and send the initial filter message.
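A sketch of the "never" initialization, field names assumed; the empty window
relays nothing until the peer sends a real gossip_timestamp_filter:
```
#include <ccan/short_types/short_types.h>
#include <stdint.h>

/* Hypothetical filter state on the peer. */
struct peer {
	u32 filter_first;	/* lowest timestamp we relay */
	u32 filter_range;	/* width of the window */
};

static void init_timestamp_filter(struct peer *peer)
{
	/* "Never": an empty window at the far end of time. */
	peer->filter_first = UINT32_MAX;
	peer->filter_range = 0;
}
```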
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is kind of orthogonal to the other changes, but makes sense: if we
would instantly or never prune the message, don't accept it.
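A sketch of the check, assuming the two-week prune window BOLT 7 suggests for
stale channel_updates:
```
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define PRUNE_SECONDS (14 * 24 * 3600)

/* Reject timestamps we would prune immediately (too old) or could
 * never prune (too far in the future). */
static bool timestamp_reasonable(uint32_t timestamp)
{
	uint32_t now = (uint32_t)time(NULL);

	if (timestamp < now - PRUNE_SECONDS)
		return false;	/* instantly pruned */
	if (timestamp > now + PRUNE_SECONDS)
		return false;	/* never pruned */
	return true;
}
```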
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the same system as for gossip: we trickle out replies when we're
otherwise idle.
As we trickle out replies to query_short_channel_ids, we remember the
pubkeys of nodes we mention. At the end, we sort and uniquify, and
then send any node_announcements we have for those.
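A sketch of the sort-and-uniquify step using ccan's asort(); the pubkey layout
is a stand-in:
```
#include <ccan/asort/asort.h>
#include <ccan/tal/tal.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical pubkey stand-in. */
struct pubkey { unsigned char data[33]; };

static int cmp_pubkey(const struct pubkey *a, const struct pubkey *b,
		      void *unused)
{
	return memcmp(a->data, b->data, sizeof(a->data));
}

/* Sort the node ids gathered while trickling out replies, then drop
 * duplicates so each node_announcement is sent at most once. */
static void uniquify_node_ids(struct pubkey **ids)
{
	size_t dst = 0, src;

	asort(*ids, tal_count(*ids), cmp_pubkey, NULL);
	for (src = 0; src < tal_count(*ids); src++) {
		if (dst == 0
		    || cmp_pubkey(&(*ids)[src], &(*ids)[dst - 1], NULL) != 0)
			(*ids)[dst++] = (*ids)[src];
	}
	tal_resize(ids, dst);
}
```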
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the same system as for gossip: we trickle out replies when we're
otherwise idle.
This is minimal infrastructure: we don't actually process the
query_short_channel_ids message yet, nor do we append node
announcements.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In general, we need to only publish node announcements after
publishing channel announcements, though we can accept node
announcements as soon as we see channel announcements. So we keep a
flag for those node_announcements which haven't been broadcast yet.
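A sketch of the flag and the point where the deferred announcement is flushed,
names assumed:
```
#include <ccan/short_types/short_types.h>
#include <stdbool.h>

struct broadcast_state;
u64 broadcast_add(struct broadcast_state *bstate, const u8 *msg);

/* Hypothetical node fields. */
struct node {
	u8 *node_announcement;		/* raw message; NULL if none yet */
	/* Accepted, but has it been queued for broadcast? */
	bool node_announcement_public;
};

/* Called once the node's first channel_announcement is queued: the
 * deferred node_announcement may now follow it. */
static void maybe_publish_node_announcement(struct broadcast_state *bstate,
					    struct node *node)
{
	if (node->node_announcement && !node->node_announcement_public) {
		node->node_announcement_public = true;
		broadcast_add(bstate, node->node_announcement);
	}
}
```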
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
handle_pending_cannouncement might not actually add the announcement,
as it could be waiting for a channel_update. We need to wait for
the actual announcement before considering announcing our node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>