We initialize it to 30 seconds, but it's *always* overridden by the
gossip_init message (and usually to 60 seconds, so it's doubly
misleading).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Gossipd provided a generic "get endpoints of this scid" and we only
use it in one place: to look up htlc forwards. But lightningd just
assumed that one would be us.
Instead, provide a simpler API which only returns the peer node,
if any, and now we handle it much more gracefully.
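A rough sketch of the lightningd side of the new shape (names here are invented for illustration, not the actual API): ask gossipd only for the peer at the far end of the scid, and cope with the answer being "nobody".

```
/* Sketch only: invented names, not the real gossipd/lightningd API. */
struct node_id;

/* Assumed reply callback: `peer` is NULL if gossipd doesn't know the scid
 * or if we're not one of its endpoints. */
static void scid_peer_reply(const struct node_id *peer,
			    void (*forward_to)(const struct node_id *),
			    void (*fail_htlc)(const char *why))
{
	if (!peer) {
		/* Previously lightningd assumed one endpoint was us; now we
		 * fail the forward gracefully instead. */
		fail_htlc("unknown or unowned short_channel_id");
		return;
	}
	forward_to(peer);
}
```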
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Right now, the `config` file is read *after* the software has already changed into the configuration working directory. However, one configuration option, `lightning-dir`, settable in the `config` file, determines that very working directory. Since the directory (defaulting to `$HOME/.lightning`) has already been opened before the configuration is read, a directory configured there will not be used.
This patch parses the configuration file before opening the working directory, fixing this bug.
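A minimal sketch of the reordering, with invented helper names:

```
/* Sketch only; helper names are illustrative. */
const char *default_dir(void);			/* "$HOME/.lightning" */
const char *config_file_lightning_dir(const char *fallback);
void open_and_chdir(const char *dir);
void parse_remaining_options(void);

static void startup_order(void)
{
	const char *dir = default_dir();

	/* Old (buggy) order: open_and_chdir(dir); then read the config,
	 * by which point `lightning-dir` can no longer take effect.
	 * New order: peek at the config first. */
	dir = config_file_lightning_dir(dir);	/* may override the default */
	open_and_chdir(dir);
	parse_remaining_options();
}
```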
[ Update CHANGELOG.md and man pages -- RR ]
It went something like:
niftynei: Hey, cppcheck complains this might be NULL, so I put in a check.
rusty: cppcheck is dumb. Make it an assert("Rusty always right!").
niftynei: You seem certain of this so I shall do that.
https://github.com/ElementsProject/lightning/pull/1994
...
renepickhardt: I asked fiatjaf to run
`lightning-cli sendpay "[{'id':'02db8f487fcc0a'}]" 4efe0ba89b`
and his node crashed!
rusty: grep Assertion logs/*
lightningd/jsonrpc.c:326: connection_complete_error: Assertion `Rusty is always right!' failed.
It turns out that in the 'can't parse' error case, we hand a NULL cmd to
connection_complete_error.
Next time, less asserting, more grepping!
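The fix amounts to tolerating the NULL; roughly (sketch only, not the exact jsonrpc.c code):

```
/* Sketch: in the "can't parse" case there is no struct command yet, so
 * check for NULL instead of asserting. */
struct json_connection;
struct command;
void reply_with_command_error(struct command *cmd, const char *err);
void reply_with_parse_error(struct json_connection *jcon, const char *err);

static void complete_error(struct json_connection *jcon,
			   struct command *cmd, const char *err)
{
	if (!cmd) {
		/* Parse failure: no command to attach the error to. */
		reply_with_parse_error(jcon, err);
		return;
	}
	reply_with_command_error(cmd, err);
}
```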
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a bit of overkill now that we simply accumulate the entire
JSON response in the buffer before flushing, but when we move to
streamed responses it allows us to have a single command that has
exclusive access to the out direction of the JSON-RPC connection.
This is the source of failure in the test_restart_many_payments stress
test: we don't commit the outgoing HTLC immediately, instead waiting for
gossip to tell us the peer for the outgoing channel, then waiting for
that channeld to tell us it's committed. The result was incoming HTLCs
with no outgoing.
I initially pushed the HTLCs through that same path, but of course
(since peers are not connected yet!) the only result was that we failed
these HTLCs immediately. So I chose the far simpler course of just
failing them directly.
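Roughly, with invented helper names, the restart-time cleanup looks like this:

```
/* Sketch only (invented names): on db load, an incoming HTLC whose outgoing
 * half was never committed can simply be failed back; retrying the forward
 * now would fail anyway since peers aren't connected yet. */
#define WIRE_TEMPORARY_NODE_FAILURE 0x2002	/* BOLT 4: NODE|2 */

struct htlc_in;
struct htlc_in *first_unmatched_htlc_in(void);
struct htlc_in *next_unmatched_htlc_in(struct htlc_in *hin);
void fail_htlc_in(struct htlc_in *hin, unsigned int failcode);

static void fail_unforwarded_htlcs(void)
{
	struct htlc_in *hin;

	for (hin = first_unmatched_htlc_in(); hin;
	     hin = next_unmatched_htlc_in(hin))
		fail_htlc_in(hin, WIRE_TEMPORARY_NODE_FAILURE);
}
```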
To reproduce this, I had to increase the test_restart_many_payments
num to 10, and run it with nice -20 taskset -c 0.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During tests, this is half our log! And Travis truncates it if we get
a failure in test_restart_many_payments.
Interestingly, test_logging had a bug which relied on this spam :)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of two code paths that return different help objects, simplify things by
always returning the full help object. This not only includes description and
the command name, but the verbose description as well.
Signed-off-by: William Casarin <jb55@jb55.com>
If another channel has set the optional `htlc_maximum_msat` field,
we should correctly parse that field and respect it when drawing up
routes for payments.
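In route selection that boils down to a per-direction capacity check along these lines (field names approximate; sketch only):

```
/* Sketch: a channel half advertising htlc_maximum_msat must not be asked
 * to carry a larger HTLC than that. */
struct half_chan_sketch {
	unsigned long long htlc_minimum_msat;
	unsigned long long htlc_maximum_msat;	/* 0 == not advertised */
};

static int hc_can_carry(const struct half_chan_sketch *hc,
			unsigned long long amount_msat)
{
	if (amount_msat < hc->htlc_minimum_msat)
		return 0;
	if (hc->htlc_maximum_msat && amount_msat > hc->htlc_maximum_msat)
		return 0;
	return 1;
}
```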
Noted by @cdecker, the term 'local' is grossly overused, and the hout
preimage is basically only used as a sanity check (though I've just put
a FIXME there for now).
Also eliminated spurious blank line which crept into wallet.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't expect payment or payment->route_channels to be NULL without an
old db, but putting an assert there reveals that we try to fail an HTLC
which has already succeeded in 'test_onchain_unwatch'.
Obviously we only want to fail an HTLC which goes onchain if we don't
already have the preimage!
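The guard is essentially this (names illustrative):

```
/* Sketch: only fail the HTLC upstream if we never learned the preimage;
 * if we have it, the HTLC already effectively succeeded. */
struct htlc_out;
int htlc_out_has_preimage(const struct htlc_out *hout);
void fail_upstream(struct htlc_out *hout);

static void onchain_htlc_gone(struct htlc_out *hout)
{
	if (htlc_out_has_preimage(hout))
		return;		/* already fulfilled, nothing to fail */
	fail_upstream(hout);
}
```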
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
failoutchannel tells us which channel to send an update for (specifically
for temporary_channel_failure); but we don't save it into the db. It's
not even clear we should, since it's a corner case and the channel might
not even exist when we come back.
So on db restore, change such errors to WIRE_TEMPORARY_NODE_FAILURE
which doesn't need an update.
We also don't memset it to 0 in the normal case (we only access it if
failcode has the UPDATE bit set) so valgrind will trigger if we're
wrong.
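The restore-time conversion is just a check on the UPDATE bit; a sketch (constants per BOLT 4):

```
/* Sketch: failcodes with the UPDATE bit need the failing channel's
 * channel_update, which we didn't persist, so degrade them to a plain
 * temporary node failure on db restore. */
#define UPDATE 0x1000				/* BOLT 4 flag */
#define WIRE_TEMPORARY_NODE_FAILURE 0x2002	/* BOLT 4: NODE|2 */

static unsigned int restore_failcode(unsigned int failcode)
{
	if (failcode & UPDATE)
		return WIRE_TEMPORARY_NODE_FAILURE;
	return failcode;
}
```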
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't save them to the database, so fix things up as we load them.
Next patch will actually save them into the db, and this will become
COMPAT code.
Also: call htlc_in_check() with NULL on db load, as otherwise it aborts
internally.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we need to check when we've altered the state, so the checks
are moved to the callers of htlc_in_update_state and htlc_out_update_state.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
globalfeatures should not be accessed if we haven't received a
channel_update. Treat it like the other fields which are only
initialized and marshalled/unmarshalled if the timestamp is positive.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And use ARRAY_SIZE() everywhere, which will break the build if it's not a
literal array, plus add assertions that the lengths match.
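For reference, the plain form of the macro is below; ccan's ARRAY_SIZE (which c-lightning uses) additionally fails to compile when handed a pointer instead of an actual array.

```
/* Plain form; ccan's version adds a compile-time check that `arr` really
 * is an array, so passing a pointer breaks the build. */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

/* Typical "same length" assertion between an enum and its name table: */
static const char *const state_name[] = { "INIT", "NORMAL", "SHUTDOWN" };
_Static_assert(ARRAY_SIZE(state_name) == 3, "state name table out of sync");
```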
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We split json_invoice(), as it now needs to round-trip to the gossipd,
and uniqueness checks need to happen *after* gossipd replies to avoid
a race.
For every candidate channel gossipd gives us, we check that it's in
state NORMAL (not shutting down, not still waiting for lockin), that
it's connected, and that it has capacity. We then choose one with
probability weighted by excess capacity, so larger channels are more
likely.
As a side effect of this, we can tell if an invoice is unpayable (no
channels have sufficient incoming capacity) or difficult (no *online*
channels have sufficient capacity), so we add those warnings.
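The weighted pick is straightforward; a sketch under assumed types (not the actual invoice code):

```
/* Sketch: choose one candidate with probability proportional to its excess
 * incoming capacity (capacity beyond the invoice amount). */
#include <stddef.h>
#include <stdlib.h>

struct route_candidate {
	unsigned long long excess_msat;
};

/* Returns the chosen index, or -1 if no candidate has any excess. */
static int choose_candidate(const struct route_candidate *c, size_t n)
{
	unsigned long long total = 0, r;
	size_t i;

	for (i = 0; i < n; i++)
		total += c[i].excess_msat;
	if (total == 0)
		return -1;

	/* Good enough randomness for a sketch. */
	r = (((unsigned long long)rand() << 31) | (unsigned long long)rand()) % total;
	for (i = 0; i < n; i++) {
		if (r < c[i].excess_msat)
			return (int)i;
		r -= c[i].excess_msat;
	}
	return (int)(n - 1);	/* unreachable in exact arithmetic */
}
```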
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For routeboost, we want to select from all our enabled channels with
sufficient incoming capacity. Gossipd knows which are enabled (ie. we
have received a `channel_update` from the peer), but doesn't know the
current incoming capacity.
So we get gossipd to give us all the candidates, and lightningd
selects from those.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this a lot, and had boutique helpers in various places. So add
a more generic one; for convenience it returns a pointer to the new
end element.
I prefer the name tal_arr_expand to tal_arr_append, since it's up to
the caller to populate the new array entry.
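With ccan/tal, such a helper is roughly the following (the exact macro in the tree may differ in detail):

```
#include <ccan/tal/tal.h>

/* Grow the tal array *p by one element and return a pointer to the new,
 * still-uninitialized last slot for the caller to fill in. */
#define tal_arr_expand(p)						\
	(tal_resize((p), tal_count(*(p)) + 1), &(*(p))[tal_count(*(p)) - 1])
```

Typical use is then `*tal_arr_expand(&arr) = value;`, with the caller filling the fresh slot.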
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The help command now adds command usage to its output by calling each
command handler in CMD_USAGE mode.
Instead of seeing, for example:
decodepay
Decode {bolt11}, using {description} if necessary
we see:
decodepay bolt11 [description]
Decode {bolt11}, using {description} if necessary
Signed-off-by: Mark Beckwith <wythe@intrig.com>
Callers to param() can now optionally set a flag to see if command_fail was
called.
This is necessary because the `cmd` is freed in case of failure.
I spent a bit of time trying to extend the lifetime of the `cmd` to the end
of parse_request(), but the destructors still needed to be called when they
were, and it was getting ugly. So I took this minimal approach.
Signed-off-by: Mark Beckwith <wythe@intrig.com>
Now call param() even for commands that don't accept any parameters.
This is a bugfix of sorts. For example, before you could call:
lightning-cli getinfo blah
and the blah parameter would be ignored.
Now you will get an error: "too many parameters: got 1, expected 0"
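So a zero-argument command still goes through param(), roughly like this (sketch; the handler signature and types approximate that era's JSON-RPC code):

```
/* Sketch only: types and param() approximate the JSON-RPC helpers. */
#include <stdbool.h>

struct command;
typedef struct jsmntok jsmntok_t;
bool param(struct command *cmd, const char *buffer,
	   const jsmntok_t tokens[], ...);

static void json_getinfo(struct command *cmd,
			 const char *buffer, const jsmntok_t *params)
{
	/* Passing only the NULL terminator still lets param() complain
	 * about unexpected arguments. */
	if (!param(cmd, buffer, params, NULL))
		return;	/* e.g. "too many parameters: got 1, expected 0" */

	/* ... build and send the usual getinfo reply ... */
}
```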
Signed-off-by: Mark Beckwith <wythe@intrig.com>
Added the concept of a "command mode". The
behavior of param() changes based on the mode.
Added and tested the command mode of CMD_USAGE for
setting the usage of a command without running it.
Only infrastructure and test. No functional changes.
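A sketch of the idea (enum/struct names are illustrative, not the exact ones):

```
/* Sketch: in CMD_USAGE mode, param() records the usage string assembled
 * from the declared parameters and tells the handler to bail out without
 * executing anything. */
enum command_mode { CMD_NORMAL, CMD_USAGE };

struct command {
	enum command_mode mode;
	const char *usage;	/* filled in when run in CMD_USAGE mode */
};

const char *build_usage_from_params(void);	/* assumed helper */

static int param_sketch(struct command *cmd /* , declared params ... */)
{
	if (cmd->mode == CMD_USAGE) {
		cmd->usage = build_usage_from_params();
		return 0;	/* handler returns immediately */
	}
	/* ... normal argument parsing ... */
	return 1;
}
```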
Signed-off-by: Mark Beckwith <wythe@intrig.com>
BOLT 7's been updated to split the flags field in `channel_update`
into two: `channel_flags` and `message_flags`. This changeset does the
minimal necessary to get to building with the new flags.
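For reference, the new layout (bit meanings are from the updated spec; the struct itself is just illustrative):

```
/* channel_update now carries two single-byte flag fields instead of one
 * 16-bit `flags`. */
#define CHANNEL_FLAGS_DIRECTION	   (1 << 0)	/* which end sent this update */
#define CHANNEL_FLAGS_DISABLED	   (1 << 1)	/* channel temporarily disabled */
#define MESSAGE_FLAGS_OPT_HTLC_MAX (1 << 0)	/* htlc_maximum_msat present */

struct update_flags_sketch {
	unsigned char message_flags;
	unsigned char channel_flags;
};
```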
We couldn't use it before because it asserted dbid was non-zero. Remove
assert and save some code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! lightningd: use hsm_get_client_fd() helper for global daemons too.
Suggested-by: @ZmnSCPxj
That matches the other CSV names (HSM was the first, so it was written
before the pattern emerged).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current code sends hsmstatus_client_bad_request via the req fd;
this won't work, since lightningd uses that synchronously and only
expects a reply to its commands. So send it via status_conn.
We also enhance hsmstatus_client_bad_request to include details, and
create convenience functions for it. Our previous handling was ad-hoc;
we sometimes just closed on the client without telling lightningd,
and sometimes we didn't tell lightningd *which* client was broken.
Also give every handler the exact same prototype, so they all follow the
same pattern (hsmd *only* handles requests and makes replies).
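The resulting shape is roughly this (types approximate hsmd's; sketch only):

```
/* Sketch: every request handler has the same prototype, and error paths
 * funnel through one bad_req-style helper that reports the broken client
 * to lightningd over status_conn before dropping it. */
struct io_conn;
struct io_plan;
struct client;

typedef struct io_plan *(*hsm_req_handler_fn)(struct io_conn *conn,
					       struct client *c,
					       const unsigned char *msg_in);

struct io_plan *bad_req_fmt(struct io_conn *conn, struct client *c,
			    const unsigned char *msg_in,
			    const char *fmt, ...);
```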
I tested this manually by corrupting a request to hsmd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently just ignore them. This is one reason the hsm (in some places)
explicitly calls log_broken so we get some idea.
This was the only subdaemon which had a NULL msgcb and msgname, so eliminate
those checks in subd.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The `json_tok_percentage` parser is used for the `fuzzpercent` in `getroute` and
`maxfeepercent` in `pay`. In both cases it seems reasonable to allow values
larger than 100%. This has bitten users in the past when they transferred single
satoshis to things like satoshis.place over a route longer than 2 hops.
With the previous patch, we could still get stuck behind a low-prio
request. Generalize it into separate queues, and allow more than one
request in parallel.
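The shape of the generalization, roughly (illustrative only, using ccan/list):

```
/* Sketch: one pending list per priority level, plus a cap on how many
 * bitcoin-cli requests we let run at once. */
#include <ccan/list/list.h>

#define NUM_PRIO	2	/* 0 = high, 1 = low */
#define MAX_PARALLEL	4	/* illustrative limit */

struct bitcoind_sketch {
	struct list_head pending[NUM_PRIO];
	int num_running;
};

struct bcli_sketch {
	struct list_node list;
	/* ... command, args, callback ... */
};

static struct bcli_sketch *next_pending(struct bitcoind_sketch *b)
{
	int prio;

	if (b->num_running >= MAX_PARALLEL)
		return NULL;
	for (prio = 0; prio < NUM_PRIO; prio++) {
		struct bcli_sketch *bcli =
			list_pop(&b->pending[prio], struct bcli_sketch, list);
		if (bcli)
			return bcli;
	}
	return NULL;
}
```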
Worth noting that the test time for `VALGRIND=0 pytest -vx tests/ -n 10`
doesn't change measurably.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fiatjaf has a cheap VPS, connecting remotely to his home bitcoind node.
fiatjaf's latency on bitcoin-cli getblock is between 10 and 37 seconds.
fiatjaf's c-lightning node is getting one block per hour.
fiatjaf is sad.
We single-file our bitcoind requests, because bitcoind has a limited
thread pool and it *fails* rather than queueing if you upset it. We'd
probably be fine using separate queues for each command type, but simply
allowing some requests to cut in line should prove my theory that we're
getting stuck behind gossip verification requests.
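The "cut in line" itself is just front-insertion into the single pending queue; something like this (illustrative only, using ccan/list):

```
/* Sketch: high-priority requests (e.g. fetching the next block) go to the
 * front of the pending queue, gossip-verification lookups to the back. */
#include <stdbool.h>
#include <ccan/list/list.h>

struct bcli_req {
	struct list_node list;
	bool high_prio;
	/* ... command, args, callback ... */
};

static void queue_bcli(struct list_head *pending, struct bcli_req *req)
{
	if (req->high_prio)
		list_add(pending, &req->list);		/* jump the queue */
	else
		list_add_tail(pending, &req->list);	/* wait in line */
}
```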
fiatjaf now gets one block per 2 minutes.
fiatjaf is less sad.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>