If we were to just insert filtered blocks in the range that we will scan
later, we'd hit the uniqueness constraints during that later scan.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Instead of allowing all calls to `getfilteredblock` to be scheduled on the
`bitcoind` queue right away, we add them to a separate queue and process a
single call at a time. This limits the concurrency and avoids thrashing
`bitcoind`. At the same time we dispatch incoming results back to all calls
that were queued for that particular blockheight, reducing the overall number
of calls and increasing the overall speed.
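A rough Python sketch of the coalescing idea (the real implementation is in
lightningd's C code; names here are illustrative only):

    import collections

    class FilteredBlockFetcher:
        """Illustrative per-height coalescing; not the actual C code."""
        def __init__(self, fetch_fn):
            self.fetch_fn = fetch_fn        # performs the real bitcoind roundtrips
            self.pending = collections.OrderedDict()  # height -> queued callbacks
            self.in_progress = False

        def getfilteredblock(self, height, cb):
            # Queue the callback; only one lookup per height is ever dispatched.
            self.pending.setdefault(height, []).append(cb)
            self._process_next()

        def _process_next(self):
            # A single call in flight at a time avoids thrashing bitcoind.
            if self.in_progress or not self.pending:
                return
            self.in_progress = True
            height, callbacks = self.pending.popitem(last=False)
            result = self.fetch_fn(height)
            self.in_progress = False
            # Fan the single result out to every caller queued for this height.
            for cb in callbacks:
                cb(result)
            self._process_next()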
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will be calling the callbacks out of order once we fan out the results of a
single lookup to multiple calls, so we need to make sure that everything is
allocated ahead of time.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we now check all P2WSH outputs in a block, this is becoming quite a
common occurrence, so logging it just produces lots of noise.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This will eventually replace the multi-step `getblockhash` + `getblock` +
`gettxout` mechanism, returning entire filtered blocks which can be added to
the DB and which represent the full set of P2WSH UTXOs.
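For reference, a hedged Python sketch of roughly what the multi-step flow
being replaced looks like; `rpc` stands in for any bitcoind JSON-RPC client,
and the field names follow Bitcoin Core's verbose `getblock` output:

    def filtered_block(rpc, height):
        blockhash = rpc("getblockhash", height)
        block = rpc("getblock", blockhash, 2)   # verbosity 2: full tx details
        outputs = []
        for tx in block["tx"]:
            for vout in tx["vout"]:
                spk = vout["scriptPubKey"]
                if spk.get("type") == "witness_v0_scripthash":   # P2WSH output
                    outputs.append({
                        "txid": tx["txid"],
                        "vout": vout["n"],
                        "amount": vout["value"],
                        "scriptPubKey": spk["hex"],
                    })
        # Checking whether each output is still unspent would additionally
        # require a gettxout call per output.
        return {"height": height, "hash": blockhash, "outputs": outputs}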
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is just the test we use to verify that block backfilling below the wallet
birth height works correctly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
@renepickhardt has a shell script we broke. While we still produce
perfectly valid JSON, we should not gratuitously change tool output.
Plus, I prefer the missing space before the ':'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The helpme plugin is more comprehensive, but this at least connects to a
few random nodes, and doesn't require python libraries in path or anything.
I selected the nodes from helpme.py, eliminating ones I couldn't reach.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This test is spawning 100 nodes concurrently, which is a lot even when not
running with `valgrind`, especially when executing tests in parallel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was causing `--help` to fail if we already had a `lightningd` running
with the same `--lightning-dir`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is a followup to #2892. Since we now attempt to lock the PID file before
starting plugins, we need to make sure that we actually use a unique lightning
directory for anything that attempts to call `--help`. If not, we may conflict
with a `lightningd` that is running against that directory.
Notice that this still means that we will be unable to call `--help` on
`lightningd` if we have a running instance, but isolation is good in this
case; otherwise we'd be reading the default config anyway.
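To illustrate why a second process against the same directory conflicts, here
is a minimal sketch of pid-file locking, assuming POSIX advisory locks (the
real check is implemented in C in lightningd):

    import fcntl, os, sys

    def lock_pid_file(path):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            # Non-blocking exclusive lock: fails if another instance holds it.
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            sys.exit("pid file is locked: another instance is running here")
        os.ftruncate(fd, 0)
        os.write(fd, ("%d\n" % os.getpid()).encode())
        return fd   # keep the fd open for the lifetime of the process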
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We recently noticed that the way we unpack the call arguments for hooks and
notifications in pylightning breaks pretty quickly once you start changing the
hook and notification params. If you add params, they will not get mapped
correctly, causing the plugin to error out.
This can be fixed by adding a `VAR_KEYWORD` argument to the callbacks, i.e., by
adding a single `**kwargs` argument at the end of the signature. This commit
adds a check that such a catch-all argument exists, and emits a warning if it
doesn't.
It also fixes up the plugins that we ship ourselves.
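A minimal sketch of the pattern (the `lightning` module is the pylightning
package; the hook name, payload fields, and the warning helper here are
illustrative, not the exact pyln internals):

    import inspect
    import sys
    from lightning import Plugin

    plugin = Plugin()

    @plugin.hook("peer_connected")
    def on_peer_connected(peer, plugin, **kwargs):
        # New or renamed params land in **kwargs instead of breaking the call.
        return {"result": "continue"}

    def warn_if_no_catchall(fn):
        params = inspect.signature(fn).parameters.values()
        if not any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params):
            print("Warning: %s should accept **kwargs" % fn.__name__,
                  file=sys.stderr)

    warn_if_no_catchall(on_peer_connected)
    plugin.run()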
Signed-off-by: Christian Decker <decker.christian@gmail.com>
1. Now checking the pid file really does precede touching the db and
starting plugins, which is far safer.
2. Crashlog is now activated just after daemon parent release, and just
before the main loop, which means no "crash" on startup if we call fatal().
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Dumb programs which have a --daemon option call fork() early. This is
terrible UX since startup errors get lost: the program exits with
"success" immediately then you discover via the logs that it didn't
start at all.
However, forking late introduced a heap of problems with changing
pids. Instead, fork early but keep stderr and the parent around: if
we fail early on, the parent fails with us. We release our parent
with an explicit action just before the main loop.
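A rough Python rendering of the pattern, assuming POSIX (the real code is C in
lightningd):

    import os, sys

    def daemonize_early():
        r, w = os.pipe()
        if os.fork() > 0:
            # Parent: stick around until the child signals it survived startup.
            os.close(w)
            ok = os.read(r, 1)
            sys.exit(0 if ok == b"1" else 1)
        # Child: keep stderr (and the pipe) so startup errors stay visible.
        os.close(r)
        os.setsid()
        return w

    def release_parent(w):
        # Called just before entering the main loop.
        os.write(w, b"1")
        os.close(w)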
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We create our children and then fork, so we're not their parent. I noticed this
because 'lightning-cli stop' takes a long time: this is because it tries to
wait for them and they don't respond.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we are walking the entire allocation tree and accessing the tal
metadata anyway, we can just as well also track the size of the memory
allocations to simplify debugging of memory use.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We always know the length, so we don't need it. It causes much extra work
when we want to delete a record, which I suspect may be behind the
gossip_store corruption some users have been seeing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Otherwise it creates the lightning-dir. This can't be helped for --help
(at least, if plugins are present), but --version simply prints and exits.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note that we move adding the plugin to the plugins list to the end, otherwise
the hook from logging can examine the (uninitialized) plugin.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is easy since we did the option parsing cleanup, but it has the
effect that plugins are launched from the lightning-dir. Now that
we have dynamic plugins, this means startup and post-startup plugins
experience the same environment.
This is absolutely a desirable thing: they can just drop files in
their cwd rather than having to move (including, I might note, core
files!).
We also highlight the change in various places (and a drive-by update
of PLUGINS.md, which says you have to use --plugin).
The next patch adds a backwards compatibility wedge for old users of
relative plugin paths.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test machine started failing on dynamic plugin tests, unable to
find the lightning module:
ModuleNotFoundError: No module named 'lightning'
This is because plugins at startup are run from whatever directory
you're in, but on refresh are run from the lightning-dir.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Naturally, it's a struct pubkey. However, those are large, and take
time to marshal, so gossipd treats them as node_id which is a simple
array. It adds explicit checks at the right points to make sure
they're valid pubkeys.
However, the next commit adds TLV test vectors, which assume we treat
node_id as a point (and thus catch invalid values when parsing). The best
solution is to restrict our types here to exactly those we've
optimized for.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently send channel_announcement as soon as we and our
peer agree it's 6 blocks deep. In theory, our other peers might
not have seen that block yet though, so delay a little.
This is mitigated by two factors:
1. lnd will stash any "not ready yet" channel_announcements anyway.
2. c-lightning doesn't enforce the 6 depth minimum at all.
We should not rely on other nodes' generosity or laxity, however!
Next release, we can start enforcing the depth limit, and maybe stashing
ones which don't quite make it (or simply enforce depth 5, not 6).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>