Whenever we have multi-connected nodes, out-of-order gossip is possible.
In particular, if a node_announcement is 1 second fresher than the
channel_announcement, a peer's timestamp_filter might let through one
and not the other.
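
As a rough standalone sketch (not lightningd code; the BOLT #7 window check
is simplified and the timestamps are made up), a filter whose first_timestamp
lands between the two timestamps passes the fresher message and drops the
older one:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool in_filter(uint32_t ts, uint32_t first_timestamp,
                          uint32_t timestamp_range)
    {
            /* Simplified window test: [first, first + range). */
            return ts >= first_timestamp
                    && ts - first_timestamp < timestamp_range;
    }

    int main(void)
    {
            uint32_t chan_ts = 1000000;     /* channel gossip timestamp */
            uint32_t node_ts = chan_ts + 1; /* node_announcement, 1s fresher */
            uint32_t first = node_ts;       /* peer's filter starts here */

            printf("channel gossip passes:    %d\n",
                   in_filter(chan_ts, first, UINT32_MAX));
            printf("node_announcement passes: %d\n",
                   in_filter(node_ts, first, UINT32_MAX));
            return 0;
    }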
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The case where this is needed is when the wallet had a forwarded payment
somewhere between commits 66a47d2 (which started tracking forwardings) and
d901304 (which added the `received_time` column). This just emulates the
behavior of sqlite3 for postgres as well.
Signed-off-by: Christian Decker <@cdecker>
Checking whether we access a null field is ok, but should we crash right
away? Probably not. This reduces the access to a warning on sqlite3 and lets
it continue. We can look for occurrences and fix them as they come up, then
re-arm the asserts once we've addressed all cases.
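
A minimal sketch of the change (db_warn() and db_check_column_access() are
hypothetical names, not the real accessors): the null-access check now
reports and continues instead of asserting.

    /* Previously: assert(!db_column_is_null(stmt, col)).  Now we warn
     * and keep going, so we can collect the occurrences in the logs. */
    static void db_check_column_access(struct db_stmt *stmt, int col)
    {
            if (db_column_is_null(stmt, col))
                    db_warn(stmt, "Accessing null column %d", col);
    }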
Asking for the last few blocks was logical, but my node is missing
most gossip in practice.
For the moment, simply ask a peer for every channel it knows, once
we're started up.
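
Conceptually (a sketch with a hypothetical send helper, not the real seeker
code), that just means one query_channel_range covering every block:

    /* Once started up, ask this peer about the full block range rather
     * than only the last few blocks.  send_query_channel_range() is a
     * hypothetical wrapper around the BOLT #7 message. */
    static void seek_everything(struct peer *peer)
    {
            send_query_channel_range(peer, 0 /* first_blocknum */,
                                     UINT32_MAX /* number_of_blocks */);
    }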
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When probing, there's no point probing for blocks from before lightning
became cool. The current logic means we often probe below block 500,000,
and think things are OK because there are no short_channel_ids there.
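
Something like the following clamp is all that's meant here (the constant is
illustrative, roughly the end of 2017 when mainnet channels first appeared,
not necessarily the value the code uses):

    #include <stdint.h>

    /* Never probe for short_channel_ids below the first block that
     * could plausibly contain a lightning channel. */
    #define FIRST_LIGHTNING_BLOCK 500000 /* illustrative constant */

    static uint32_t clamp_probe_start(uint32_t blocknum)
    {
            return blocknum < FIRST_LIGHTNING_BLOCK
                    ? FIRST_LIGHTNING_BLOCK : blocknum;
    }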
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were using `i` as the index variable in two nested loops. This works as
long as the DNS seed resolves to a single address, but will crash if the
node has both an A and an AAAA entry, at which point we index into the
hostname array at a position that has no matching entry.
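
Boiled down to a standalone example (names are illustrative, not the
wireaddr code), the inner loop needs its own index so it doesn't clobber the
hostname index:

    #include <stdio.h>

    int main(void)
    {
            const char *hostnames[] = { "seed.example.org" };
            const char *addrs[] = { "192.0.2.1", "2001:db8::1" }; /* A + AAAA */

            /* The buggy version reused `i` for both loops, so after the
             * inner loop i == 2 and hostnames[i] is out of bounds.  Fix:
             * use a separate index for the resolved addresses. */
            for (size_t i = 0; i < sizeof(hostnames) / sizeof(hostnames[0]); i++)
                    for (size_t j = 0; j < sizeof(addrs) / sizeof(addrs[0]); j++)
                            printf("%s -> %s\n", hostnames[i], addrs[j]);
            return 0;
    }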
Signed-off-by: Christian Decker <@cdecker>
We were implicitly relying on sqlite3 behavior that returns the zero-value
for nulled fields when accessing them. This adds the same behavior explicitly
to the DB abstraction in order to reduce `db_column_is_null` checks in the
logic, while still making it evident what is happening here.
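
In sketch form (db_column_u64_raw() is a hypothetical backend accessor; the
real abstraction differs), each typed getter now spells out the sqlite3-style
rule that NULL reads as the zero value:

    /* Make the implicit sqlite3 behavior explicit so postgres agrees:
     * a NULL column reads as 0. */
    static u64 db_column_u64(struct db_stmt *stmt, int col)
    {
            if (db_column_is_null(stmt, col))
                    return 0;
            return db_column_u64_raw(stmt, col);
    }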
Fixes https://github.com/fiatjaf/mcldsp/issues/1
Signed-off-by: Christian Decker <@cdecker>
Check behavior for user supplied upfront_shutdown_script via close_to
Header from folded patch 'fix__return__not__iff_well_close_to_the_provided_addr.patch':
fix: return not iff we'll close to the provided addr
Takes advantage of upfront-shutdown-script to permit users to
specify the close-to address for a channel at open, by adding
a `close_to` field to `fundchannel_start`.
Note that this only takes effect if `fundchannel_start` returns with
`close_to` set; otherwise, the peer doesn't support
`option_upfront_shutdown_script`.
We always ended up sending an empty query when we had stale scids!
And it turns out we consider such a query invalid:
Bad query_short_channel_ids query_flags 010506226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f000100010100
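
The fix amounts to a guard like this sketch (simplified, hypothetical
signature): never send query_short_channel_ids when there is nothing left to
ask about.

    /* All the scids turned out to be stale, so the array is empty;
     * sending the query anyway produces the invalid message above. */
    static void maybe_query_scids(struct peer *peer,
                                  const struct short_channel_id *scids)
    {
            if (tal_count(scids) == 0)
                    return;
            query_short_channel_ids(peer, scids);
    }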
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only chose 3 peers to gossip with us (down from 8 last release).
There's no justification for this number, or reason to believe that
it is sufficient to keep us in sync.
Be more conservative for now; we can always decrease it later once
we have more data.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Polling the last 32 is fairly useless in practice, since they tend to
be recent nodes; it won't detect long-forgotten ones.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we got a channel_update while we were still verifying the
channel_announcement, we didn't set the peer pointer, so the peer didn't get
credit for it. As a result, the seeker tended to think we were done gossiping
sooner than we were.
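
A sketch of the fix (structure and field names are illustrative): when we
stash a channel_update against a still-pending channel_announcement, remember
which peer sent it so the seeker can credit that peer once verification
completes.

    struct pending_update {
            const u8 *update;
            struct peer *source_peer; /* previously left unset */
    };

    static void stash_pending_update(struct pending_update *pu,
                                     const u8 *update, struct peer *peer)
    {
            pu->update = update;
            pu->source_peer = peer; /* so the seeker credits this peer */
    }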
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're a little subtle, so let's spell out exactly what checkmessage
means. In particular, we avoid discussing key derivation from a
signature, as it tends to get people (like me!) into trouble: we
describe it instead as iterating through every known node.
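
A standalone sketch of that description using libsecp256k1 (it assumes the
zbase32 signature has already been decoded into 64 bytes plus a recovery id,
and the double-SHA256 of "Lightning Signed Message:" plus the message has
been computed): try the signature against every known node key.

    #include <secp256k1.h>
    #include <secp256k1_recovery.h>
    #include <stddef.h>

    /* Returns the index of the known node whose key signed msghash32,
     * or -1 if none did. */
    static int find_signer(const unsigned char sig64[64], int recid,
                           const unsigned char msghash32[32],
                           const secp256k1_pubkey *known_nodes, size_t n)
    {
            secp256k1_context *ctx =
                    secp256k1_context_create(SECP256K1_CONTEXT_VERIFY);
            secp256k1_ecdsa_recoverable_signature rsig;
            secp256k1_ecdsa_signature sig;
            int found = -1;

            if (secp256k1_ecdsa_recoverable_signature_parse_compact(ctx, &rsig,
                                                                    sig64, recid)
                && secp256k1_ecdsa_recoverable_signature_convert(ctx, &sig,
                                                                 &rsig)) {
                    for (size_t i = 0; i < n; i++) {
                            if (secp256k1_ecdsa_verify(ctx, &sig, msghash32,
                                                       &known_nodes[i])) {
                                    found = (int)i;
                                    break;
                            }
                    }
            }
            secp256k1_context_destroy(ctx);
            return found;
    }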
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
*If* we know the key has signed something else (as is the case for
channel_announcement) then we can effectively trust the key derivation.
This matches how LND's VerifyMessage works, though in the next patch
we will document it exactly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks Twitter helpers @duck1123 and @jochemin for tests!
And @bitconner for the initial test vector.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I wanted to call it verifymessage, but then I read the LND API for that
and wanted nothing to do with it!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a report of a 0.7.2 user whose node hadn't appeared on 1ml. Their
node_announcement wasn't visible to my node, either.
I suspect this is a consequence of recent versions reducing the amount of
gossip they send, as well as large nodes increasingly turning off gossip
altogether from some peers (as we do). We should ignore timestamp filters
for our own channels: the easiest way to do this is to push them out
directly from gossipd (other messages are sent via the store).
We change channeld to wrap the local channel_announcements: previously we
just handed them to gossipd as we would any other gossip message received
from our peer. Now gossipd knows to push them out, as they're local.
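
Conceptually (send_to_peer() and add_to_gossip_store() are hypothetical
helpers, not gossipd's actual functions): local gossip skips the peer's
timestamp filter and goes straight out, while everything else still flows
through the store.

    static void relay_gossip(struct peer *peer, const u8 *msg, bool is_local)
    {
            if (is_local) {
                    /* Our own gossip: push it now, regardless of the
                     * peer's gossip_timestamp_filter. */
                    send_to_peer(peer, msg);
                    return;
            }
            /* Remote gossip still goes via the store, where the
             * timestamp filter applies on the way out. */
            add_to_gossip_store(msg);
    }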
This interferes with the logic in tests/test_misc.py::test_htlc_send_timeout
which expects the node_announcement message last, so we generalize
that too.
[ Thanks to @trueptolmy for the bugfix! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>