Implements an EWMA for fee estimation. The newer fee reaches 90% influence
after 5 minutes, and the smoothing adjusts to the configured polling rate.
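A minimal sketch of that smoothing, assuming the 90%-in-five-minutes target is
met by deriving the weight from the poll interval (names here are illustrative,
not the actual chaintopology code):

```c
#include <math.h>

/* Sketch: derive the EWMA weight from the configured poll interval so that
 * after 5 minutes (300s) the newest fees carry 90% of the estimate,
 * i.e. (1 - alpha)^(300 / poll_seconds) = 0.1. */
static double smoothed_feerate(double old_estimate, double new_fee,
                               double poll_seconds)
{
        double alpha = 1.0 - pow(0.1, poll_seconds / 300.0);

        return alpha * new_fee + (1.0 - alpha) * old_estimate;
}
```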
Until now, `command_fail()` reported an error code of -1 for all uses.
This PR adds an `int code` parameter to `command_fail()`, requiring the
caller to explicitly include the error code.
This is part of #1464.
The majority of the calls are used during parameter validation and
their error code is now JSONRPC2_INVALID_PARAMS.
The rest of the calls report an error code of LIGHTNINGD, which I defined as
-1 in `jsonrpc_errors.h`. The intention here is that as we improve our error
reporting, all occurrences of LIGHTNINGD will go away and we can eventually
remove it.
I also converted calls to `command_fail_detailed()` that took a `NULL` `data`
parameter to use the new `command_fail()`.
The only difference from an end user perspective is that bad-input errors that
used to be -1 will now be -32602 (JSONRPC2_INVALID_PARAMS).
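As a sketch, the constants and a typical call site now look roughly like this
(the example function, messages and `struct command` plumbing are illustrative;
the constants and values are the ones described above):

```c
#include <stdbool.h>

/* jsonrpc_errors.h (values as described above) */
#define JSONRPC2_INVALID_PARAMS -32602
/* Catch-all until our error reporting improves; intended to disappear. */
#define LIGHTNINGD              -1

/* Illustrative call sites; previously both were command_fail(cmd, "..."). */
static void example(struct command *cmd, bool params_ok, bool peer_found)
{
        if (!params_ok) {
                command_fail(cmd, JSONRPC2_INVALID_PARAMS, "Expected an id");
                return;
        }
        if (!peer_found) {
                /* Non-validation failures keep the old -1 for now. */
                command_fail(cmd, LIGHTNINGD, "Could not find peer");
                return;
        }
}
```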
Make --override-fee-rates a dev option. We use default-fee-rate in
its place, which (since bitcoind won't give fee estimates in regtest
mode for short chains) gives an effective feerate of 15000/7500/3750.
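The three numbers are successive halvings, so one plausible reading (purely
illustrative arithmetic, not the actual option handling) is a single configured
rate scaled per urgency:

```c
#include <ccan/short_types/short_types.h>

/* Illustrative only: 15000 / 7500 / 3750 as halvings of a single rate. */
static void effective_feerates(u32 default_fee_rate, u32 rates[3])
{
        rates[0] = default_fee_rate;            /* urgent: 15000 */
        rates[1] = default_fee_rate / 2;        /* normal:  7500 */
        rates[2] = default_fee_rate / 4;        /* slow:    3750 */
}
```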
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We never hit the guess_feerate() path, because we turned a 0 ("can't
estimate fee") into 253.
This also revealed that we weren't initializing topo->feerate, and
that we were giving spurious updates even if we were using override-fee-rates.
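A sketch of the fix's shape, assuming 0 keeps meaning "no estimate" until after
the fallback check (names other than `guess_feerate()` are illustrative, and
its signature is simplified here):

```c
#include <ccan/short_types/short_types.h>

#define FEERATE_FLOOR 253

/* Sketch: leave 0 alone so the guess_feerate() fallback is reachable,
 * and only clamp genuine estimates to the floor. */
static u32 pick_feerate(u32 estimate)
{
        if (estimate == 0)
                return guess_feerate();  /* previously never reached */
        return estimate < FEERATE_FLOOR ? FEERATE_FLOOR : estimate;
}
```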
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're getting spurious closures, even on mainnet. Using --ignore-fee-limits
is dangerous; it's slightly less so to lower the minimum (which is the
usual cause of problems).
So let's halve it, but beware the floor.
This is a workaround, until we get independent feerates in the spec.
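A sketch of the workaround (names illustrative; 253 per kiloweight is the floor
to beware of):

```c
#include <ccan/short_types/short_types.h>

#define FEERATE_FLOOR 253

/* Sketch: accept feerates down to half the estimated minimum, but never
 * below the floor that can still be relayed. */
static u32 feerate_min(u32 estimated_min)
{
        u32 half = estimated_min / 2;

        return half < FEERATE_FLOOR ? FEERATE_FLOOR : half;
}
```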
Fixes: #613
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Simplify the offset calculation to use the rescan parameter, and rename
`wallet_first_blocknum`. We now use either a relative rescan from our last
known location, or an absolute one if a negative rescan was given. It's all
handled in a single location (except the case in which the blockcount is below
our precomputed offset), so this should reduce surprises.
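A sketch of that single-location computation, assuming negative rescan values
mean an absolute height and positive ones mean "this many blocks back from our
last known location" (names are illustrative):

```c
#include <ccan/short_types/short_types.h>

/* Sketch: compute the first block to fetch in one place. */
static u32 rescan_start(s32 rescan, u32 last_known, u32 blockcount)
{
        u32 start;

        if (rescan < 0)
                /* Absolute height requested. */
                start = (u32)-rescan;
        else
                /* Relative: back up `rescan` blocks from where we left off. */
                start = last_known > (u32)rescan ? last_known - rescan : 0;

        /* The remaining special case: the chain is shorter than our
         * precomputed offset. */
        return start > blockcount ? blockcount : start;
}
```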
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The only use for these was to compute their txids so we could notify depth
in case of reorgs.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We are slowly hollowing out the in-memory blockchain representation to make
restarts easier.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This will later allow us to determine the transaction confirmation count, and
recover transactions for rebroadcasts.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
There are two very hard problems in software engineering:
1. Off-by-one errors
In this case we were rolling back further than needed and starting the
catchup one block further than expected.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
But only if we're actually going to change the feerate, otherwise we'd
log every time.
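A minimal sketch of the guard (all names illustrative):

```c
/* Sketch: only mention the estimate when it actually changes. */
static void maybe_log_feerate(struct log *log, u32 old_feerate, u32 new_feerate)
{
        if (new_feerate == old_feerate)
                return;
        log_debug(log, "Feerate estimate changed from %u to %u",
                  old_feerate, new_feerate);
}
```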
Suggested-by: @ZmnSCPxj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Naively, this would be 250 satoshi per sipa, but it's not, since bitcoind's
fee calculation was not rewritten to deal with weight but was instead bolted
on using vbytes.
The resulting calculations made me cry; I dried my tears on the thorns
of BUILD_ASSERT (I know that makes no sense, but bear with me here as I'm
trying not to swear at my bitcoind colleagues right now).
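A sketch of the unit juggling, assuming bitcoind quotes fees per 1000 vbytes
and rounds a transaction's vbytes up from its weight; the 3-satoshi bump over
the naive 250 covers that rounding (names illustrative):

```c
#include <ccan/short_types/short_types.h>

/* bitcoind quotes fees per 1000 vbytes, and a transaction's vbytes are its
 * weight rounded *up* to a multiple of 4, then divided by 4. */
static u64 vbytes_from_weight(u64 weight)
{
        return (weight + 3) / 4;
}

/* Sketch: the 1000 sat/kvbyte relay minimum would naively be 250 sat/kw,
 * but with the round-up above a fee computed at 250 sat/kw can land just
 * below the minimum, hence the nudge to 253. */
#define FEERATE_FLOOR 253

static u32 apply_feerate_floor(u32 feerate_perkw)
{
        return feerate_perkw < FEERATE_FLOOR ? FEERATE_FLOOR : feerate_perkw;
}
```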
Fixes: #1194
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We would `block_map_add` inside `add_tip`, but never `block_map_del` inside
`remove_tip`, which is dangerous since we actually `tal_free` the block
inside `remove_tip`.
Our CI did not reliably trap this problem: block hashes are random, so
rerunning `test_blockchaintrack` often passed spuriously.
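A sketch of the fix, with the removal mirroring the addition (struct fields
here are illustrative):

```c
/* Sketch: remove_tip() now mirrors add_tip(): take the block out of the
 * map *before* freeing it, so the map never holds a dangling pointer. */
static void remove_tip(struct chain_topology *topo)
{
        struct block *b = topo->tip;

        topo->tip = b->prev;
        block_map_del(&topo->block_map, b);
        tal_free(b);
}
```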
Our testing also reveals a bug: we start lightningd and shut it down
before fully processing the blockchain, so we don't set
last_processed_block. Fix that by setting it immediately once we have
a block: worst case it goes backwards a little.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do a complicated dance because we don't know the current block
height before setting up the topology.
If we're starting at a particular block, we want to go back 100 blocks
before that to cover any reorgs.
If we're not (fresh startup), we still want to go back 100 blocks
because we don't bother handling a reorg which removes all the blocks
we know.
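A sketch of the resulting start-height choice (illustrative helper; 100 is the
reorg cushion described above):

```c
#include <ccan/short_types/short_types.h>

/* Sketch: anchor_height is the requested start block, or the current tip
 * on a fresh startup; either way we rewind 100 blocks so a reorg can never
 * strip away every block we know about. */
static u32 scan_start(u32 anchor_height)
{
        return anchor_height > 100 ? anchor_height - 100 : 0;
}
```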
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
With fallback depending on chainparams: this means the first upgrade
will be slow, but after that it'll be fast.
Fixes: #990
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This interacts badly with --daemon (next patch), which then tries to reap a
child it didn't create; that took me a couple of hours to figure out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We usually named destructors destroy_<type> already, but sometimes they were
named after what they did rather than what they cleaned up.
There are still a few exceptions:
1. I didn't bother creating destroy_xxx wrappers for htable routines
which already existed.
2. Sometimes destructors really are used for side-effects (e.g. to simply
mark that something was freed): these are clearer with boutique names.
3. Generally destructors are static, but they don't need to be: in some
cases we attach a destructor and then remove it later, or only attach it
in *some* cases. These are best with qualifiers in the destroy_<type>
name.
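For example, in the common case the destructor is simply named for its type
and attached with tal (the struct here is illustrative):

```c
#include <ccan/list/list.h>
#include <ccan/tal/tal.h>

struct subscriber {
        struct list_node list;
};

/* Named for what it cleans up (destroy_<type>), not for what it does. */
static void destroy_subscriber(struct subscriber *sub)
{
        list_del(&sub->list);
}

struct subscriber *new_subscriber(const tal_t *ctx, struct list_head *all)
{
        struct subscriber *sub = tal(ctx, struct subscriber);

        list_add_tail(all, &sub->list);
        tal_add_destructor(sub, destroy_subscriber);
        return sub;
}
```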
Suggested-by: @ZmnSCPxj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We move it into jsonrpc where it belongs, and make it fail the command.
This means it can tell us exactly what was wrong.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>