This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of the next release) will offer global features, and will also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
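The fold itself is just a bitwise OR of the two bitsets. A minimal sketch,
assuming they arrive as plain byte arrays (features_or and its parameters are
illustrative, not the in-tree helper):

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: BOLT #9 feature bitfields are big-endian, so the
     * two arrays are aligned at their ends before OR-ing.  outlen must be
     * at least as large as both alen and blen. */
    static void features_or(const uint8_t *a, size_t alen,
                            const uint8_t *b, size_t blen,
                            uint8_t *out, size_t outlen)
    {
        for (size_t i = 0; i < outlen; i++)
            out[i] = 0;
        for (size_t i = 0; i < alen; i++)
            out[outlen - alen + i] |= a[i];
        for (size_t i = 0; i < blen; i++)
            out[outlen - blen + i] |= b[i];
    }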
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We completely rework test_node_reannounce: it assumes we always ask for
all gossip, and that assumption will be broken by future patches too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It relies on the fact that nodes don't do their own gossip queries.
Use devtools instead.
This revealed that the entire logic was broken; it had just happened to work.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I was seeing some accidental pruning under load on Travis; in particular,
we stopped accepting channel_updates because they were 103 seconds old.
But making the prune timeout too long makes the prune test untenable,
so restore a separate flag that this test can use.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really
variable.
Interestingly, our pruning test was always kinda broken: we have to wait
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing: cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If you send a message which simply changes timestamp and signature, we
drop it. You shouldn't be doing that, and the door to ignoring such
updates was opened by option_gossip_query_ex, which would allow clients to
ignore updates with the same checksum.
This is more aggressive at reducing spam messages, but we allow refreshes
(to be conservative, we allow them even when 1/2 of the way through the
refresh period).
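A minimal sketch of that acceptance rule (the function and its parameters are
illustrative, not the in-tree names):

    #include <stdbool.h>
    #include <stdint.h>

    /* An update whose content (everything except timestamp and signature)
     * is unchanged is only accepted as a refresh, and we accept refreshes
     * once at least half the refresh period has elapsed. */
    static bool accept_unchanged_update(uint32_t prev_timestamp,
                                        uint32_t new_timestamp,
                                        uint32_t refresh_period)
    {
        if (new_timestamp < prev_timestamp)
            return false;
        return new_timestamp - prev_timestamp >= refresh_period / 2;
    }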
I dropped the now-unnecessary sleep from test_gossip_pruning, too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossipd: clean up local channel updates.
Make update_local_channel use a timer if it's too soon to make another
update.
1. Implement cupdate_different() which compares two updates (a sketch of
the idea follows the list).
2. Make update_local_channel() take a single arg for timer usage.
3. Set timestamp of non-disable update back 5 minutes, so we can
always generate a disable update if we need to.
4. Make update_local_channel() itself do the "unchanged update" suppression.
5. Keep pointer to the current timer so we override any old updates with
a new one, to avoid a race.
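One way the comparison behind cupdate_different() could look; a sketch rather
than the in-tree code, assuming the previous channel_update is kept in
serialized form with the 2-byte message type stripped:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Sketch only.  Offsets per BOLT #7: signature is bytes 0-63,
     * chain_hash and short_channel_id are bytes 64-103, timestamp is
     * bytes 104-107, and flags/fees/htlc limits follow from byte 108. */
    static bool cupdate_content_differs(const uint8_t *a, size_t alen,
                                        const uint8_t *b, size_t blen)
    {
        if (alen != blen || alen < 108)
            return true;
        /* Same channel?  (chain_hash + short_channel_id) */
        if (memcmp(a + 64, b + 64, 104 - 64) != 0)
            return true;
        /* Skip the timestamp; compare the actual content. */
        return memcmp(a + 108, b + 108, alen - 108) != 0;
    }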
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During sync it is highly likely that we can coalesce multiple calls and share
results among them. We also report back failures for non-existent blocks early
on, so we don't run into issues with blocks that our bitcoind doesn't have
yet.
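A sketch of the coalescing structure (names and types are illustrative, not
the actual wallet/bitcoind plumbing): one pending entry per block height, and
later callers for the same height just attach their callback instead of
issuing another request.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct pending_cb {
        void (*cb)(const void *block, void *arg);
        void *arg;
        struct pending_cb *next;
    };

    struct pending_getblock {
        uint32_t height;
        struct pending_cb *cbs;
        struct pending_getblock *next;
    };

    /* Returns true if a request for this height was already in flight,
     * meaning the caller must not ask bitcoind again.  (Allocation
     * failure handling elided for brevity.) */
    static bool coalesce_getblock(struct pending_getblock **head,
                                  uint32_t height,
                                  void (*cb)(const void *, void *),
                                  void *arg)
    {
        struct pending_cb *c = malloc(sizeof(*c));
        c->cb = cb;
        c->arg = arg;
        c->next = NULL;

        for (struct pending_getblock *p = *head; p; p = p->next) {
            if (p->height == height) {
                c->next = p->cbs;
                p->cbs = c;
                return true;
            }
        }

        struct pending_getblock *p = malloc(sizeof(*p));
        p->height = height;
        p->cbs = c;
        p->next = *head;
        *head = p;
        return false;
    }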
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was caused by us not checking against the max_blockheight, but rather the
min_blockheight, which can be negative with a newly created node. This is still
safe since we check for duplicates anyway in `wallet_filteredblock_add`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I don't remember ever seeing a bug which only showed up in VALGRIND=1 with developer
mode disabled, so don't test that, and spread out the other tests more evenly.
In addition, disable the worst-performing tests in DEVELOPER=0 mode.
Here are timings from my build machine: the worst 6 (- DEVELOPER=0 VALGRIND=0),
with the same tests (+ DEVELOPER=1 VALGRIND=1):
-452.42s call tests/test_pay.py::test_channel_spendable
+87.69s call tests/test_pay.py::test_channel_spendable
-335.66s call tests/test_gossip.py::test_gossip_store_compact_on_load
+47.41s call tests/test_gossip.py::test_gossip_store_compact_on_load
-332.07s call tests/test_connection.py::test_opening_tiny_channel
+89.71s call tests/test_connection.py::test_opening_tiny_channel
-331.97s call tests/test_pay.py::test_channel_spendable_large
+56.23s call tests/test_pay.py::test_channel_spendable_large
-305.28s call tests/test_invoices.py::test_invoice_routeboost
+37.57s call tests/test_invoices.py::test_invoice_routeboost
-284.28s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
+49.12s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And clean up some dev ones which actually happen (mainly by calling
channel_fail_permanent which logs UNUSUAL, rather than
channel_internal_error which logs BROKEN).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rewriting the gossip_store is much simpler when we don't have
any pointers into it, so add some simple offline compaction code
and disable the automatic compaction code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The crashes in #2750 are mostly caused by us trying to partially truncate
the store. The simplest fix for release is to discard the whole thing if
we detect a problem.
This is a workaround: it'd be far nicer to try to recover.
Fixes: #2750
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It wasn't invalid due to a missing channel_update: it in fact had a
bad checksum due to a cut & paste bug. Fix that, and assert it's not
actually truncating.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If something went wrong and there was an old one, we were
appending to it!
Reported-by: @SimonVrouwe
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We might have channel_announcements which have no channel_update: normally
these don't get written into the store until there is one, but if the
store was truncated it can happen. We then get upset on compaction, since
we don't have an in-memory representation of the channel_announcement.
Similarly, we leave the node_announcement pending until after that
channel_announcement, leading to a similar case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I decided to try a faster implementation, only to find our crc32c was
not correct! Ouch.
I removed the crc32c functions from ccan/crc, and added a new crc32c
module which has the Mark Adler x86-64-optimized variants.
We bump gossip_store version again, since csums have changed.
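For reference, here is the plain bitwise form of CRC-32C (reflected
Castagnoli polynomial) that any optimized variant must agree with; a sketch
for illustration, not the new ccan module itself:

    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise CRC-32C: reflected polynomial 0x82F63B78, initial value and
     * final XOR of 0xFFFFFFFF. */
    static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }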
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is where payment tests should go. Also mark it xfail for the moment,
and remove the developer-only tag (propagating gossip takes only 60 seconds, which
is OK).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There were several gossip breakages in master; bumping the version means
upgrades get a clean store (not just those upgrading from the stable version).
Fixes: #2719
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We didn't count some records before, so that we could compare the two counters.
This is much simpler, and avoids reliance on the broadcast state (bs).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
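The per-record check the per-peer daemon applies is small; a sketch following
the BOLT #7 gossip_timestamp_filter fields (the function name is
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* A record passes the peer's filter if its timestamp lies in
     * [first_timestamp, first_timestamp + timestamp_range), computed so
     * that the addition cannot overflow. */
    static bool timestamp_in_filter(uint32_t timestamp,
                                    uint32_t first_timestamp,
                                    uint32_t timestamp_range)
    {
        if (timestamp < first_timestamp)
            return false;
        return timestamp - first_timestamp < timestamp_range;
    }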
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).
This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.
This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.
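The record header ends up roughly this shape; a sketch using plain stdint
types, with all fields stored big-endian on disk:

    #include <stdint.h>

    /* Sketch of the gossip_store record header. */
    struct gossip_hdr {
        uint32_t len;       /* Length of the message after this header;
                             * the top bit is reserved as a flag. */
        uint32_t crc;       /* crc32c covering the timestamp and message. */
        uint32_t timestamp; /* Timestamp of the message (derived for
                             * channel_announcement). */
    };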
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read it sequentially
and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
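A sketch of the per-peer read loop under this scheme, assuming a 12-byte
record header (big-endian length, crc, timestamp), a deleted flag in the
length's high bit, and that the store's version byte has already been
consumed; send_gossip() is a stand-in for writing the message out to the peer:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DELETED_BIT 0x80000000U /* illustrative flag value */

    static void stream_gossip(FILE *store,
                              void (*send_gossip)(const uint8_t *msg,
                                                  uint32_t len))
    {
        uint8_t hdr[12], buf[65535];

        while (fread(hdr, sizeof(hdr), 1, store) == 1) {
            uint32_t len = ((uint32_t)hdr[0] << 24) | (hdr[1] << 16)
                           | (hdr[2] << 8) | hdr[3];
            bool deleted = (len & DELETED_BIT) != 0;

            len &= ~DELETED_BIT;
            if (len > sizeof(buf) || fread(buf, 1, len, store) != len)
                break;
            if (!deleted)
                send_gossip(buf, len);
        }
    }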
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted records.
Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast. Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.
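Marking a record deleted is then just an in-place flip of that bit; a sketch
(flag value and I/O details are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define DELETED_BIT 0x80000000U /* illustrative flag value */

    /* Set the high bit of the big-endian length field of the record whose
     * header starts at `off`.  The crc covers the timestamp and message,
     * not the length, so it stays verifiable afterwards. */
    static int mark_deleted(FILE *store, long off)
    {
        uint8_t lenbuf[4];

        if (fseek(store, off, SEEK_SET) != 0
            || fread(lenbuf, sizeof(lenbuf), 1, store) != 1)
            return -1;
        lenbuf[0] |= DELETED_BIT >> 24;
        if (fseek(store, off, SEEK_SET) != 0
            || fwrite(lenbuf, sizeof(lenbuf), 1, store) != 1)
            return -1;
        return fflush(store);
    }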
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to bump the version again, and the code to upgrade it was
quite hairy (and buggy!). It's not worthwhile for such a
poorly-tested path: instead, I will just add code to limit how much incoming
gossip we get, to avoid flooding when we upgrade.
I also use a modern gossip_store version in our test_gossip_store_load
test, instead of relying on the upgrade path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we first receive a channel_update, we write both the
channel_announcement and that channel_update to the store: we need
that first update so we can set the channel_announcement timestamp.
However, the channel_update can be replaced later. This means we can
have a channel_announcement, a node_announcement which relies on it, then
the channel_update later.
So move the "this applies to a pending announcement" check lower, where
gossip_store can use it too. Has a nice side-effect of avoiding
one lookup of the node id.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, we should have a channel_update so we actually do some compaction!
(Reported-by @SimonVrouwe). But we should also handle the cases where:
1. A channel_announcement is *not* directly followed by a
channel_update (happens when the channel_update is replaced).
2. A node_announcement predates a channel_update for the peer
(again, can happen once a channel_update is replaced).
3. A local/private channel_creation is not directly followed by an
update.
In addition, we might as well check that we can *load* such a store,
before compaction.
This checks the corner cases which occur in real gossip stores.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's possible that it hasn't got the node_announcement messages;
it will still list the nodes, however (the channel_announcement tells
it the nodes exist). Check for the alias field instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>