It turns out we were heavily relying on the fact that after each message from
the client there'd be a flush, and that there would not be anything after the
JSON object we read. This will no longer be the case once we start streaming
things, or if a client is quick enough to issue several JSON-RPC requests
back-to-back.
This just takes one of the error paths (incomplete read) and makes it into a
successful path if we have indeed read a full root element.
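For illustration, a rough sketch of the idea on top of jsmn (which the JSON-RPC
code uses for tokenizing); the helper below and its buffer handling are
hypothetical, not the real parsing code:

    #include <stdbool.h>
    #include <stddef.h>
    #include "jsmn.h"

    /* Returns true if buf[0..len) starts with one complete JSON root
     * element; *used is how many bytes that element occupies.  Anything
     * after *used stays in the buffer for the next call. */
    static bool have_full_root(const char *buf, size_t len,
                               jsmntok_t *toks, unsigned int ntoks,
                               size_t *used)
    {
            jsmn_parser parser;
            int ret;

            if (ntoks == 0)
                    return false;
            /* Mark the root token "not closed yet". */
            toks[0].end = -1;

            jsmn_init(&parser);
            ret = jsmn_parse(&parser, buf, len, toks, ntoks);
            if (ret == JSMN_ERROR_INVAL || ret == JSMN_ERROR_NOMEM)
                    return false;   /* genuinely broken, or too many tokens */

            /* JSMN_ERROR_PART used to send us down the "incomplete read"
             * path; but the buffer may already hold one complete root
             * element followed by the start of the next request. */
            if (toks[0].end < 0)
                    return false;   /* root element really isn't finished */

            *used = toks[0].end;
            return true;
    }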
Incrementing the version number means stores created prior to the previous
commit will be removed and refreshed. It's the simplest fix, if not the most
efficient.
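As a sketch of what that means mechanically (the constant and names are
illustrative, not the real store code): check the version byte at the front
of the file, and if it doesn't match, throw the file away and rebuild.

    #include <stdbool.h>
    #include <unistd.h>
    #include <ccan/short_types/short_types.h>

    #define STORE_VERSION 4 /* illustrative: bumping this orphans old files */

    /* If this returns false, the caller truncates the file to zero and
     * repopulates it from scratch: simple, if not the most efficient. */
    static bool store_version_ok(int fd)
    {
            u8 version;

            if (pread(fd, &version, sizeof(version), 0) != sizeof(version))
                    return false;
            return version == STORE_VERSION;
    }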
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's no reason for the db to ever return non-NULL if it's spent. And there's
only one caller, for which that is definitely true.
Suggested-by: @cdecker
Fixes: #1934
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The code to regenerate the local BOLT copy was causing eternal rebuilds.
So only rebuild it if it's actually wrong.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We extract the tx from the logs, and then we wait until it hits
the mempool. This is more reliable than waiting for 'sendrawtx' in the logs,
which might catch a previous sendrawtx; it's also more explicit
that we expect exactly that tx.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
BOLT 7's been updated to split the flags field in `channel_update`
into two: `channel_flags` and `message_flags`. This changeset does the
minimum necessary to get things building with the new flags.
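A minimal sketch of the split, assuming the straightforward mapping where the
old lower-byte bits (direction, disable) become `channel_flags` and the old
upper byte becomes the new `message_flags`:

    #include <ccan/short_types/short_types.h>

    /* Split the old 16-bit `flags` into the two new one-byte fields. */
    static void split_channel_update_flags(u16 old_flags,
                                           u8 *message_flags,
                                           u8 *channel_flags)
    {
            *channel_flags = old_flags & 0xFF; /* direction + disable bits */
            *message_flags = old_flags >> 8;
    }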
We used to use it to complain about bad requests, but we use the status conn
now, so it's unused except for tests and asserts.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It offers them a DoS vector if they don't read the replies. We really want
to use raw ccan/io so we can avoid buffering for this.
It makes handing fds to new clients a bit more complex
(callback based), but it's not too bad.
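A minimal sketch of the unbuffered pattern with raw ccan/io (the `struct client`
and handler names here are hypothetical): we never queue replies; the next read
only starts once the previous reply has been fully written, so a client that
refuses to read its replies stalls only itself.

    #include <ccan/io/io.h>
    #include <ccan/short_types/short_types.h>

    struct client {
            u8 request[65536];      /* fixed sizes keep the sketch simple */
            u8 reply[65536];
            size_t reply_len;
    };

    static struct io_plan *handle_request(struct io_conn *conn, struct client *c);

    /* Step 1: read the next request. */
    static struct io_plan *read_request(struct io_conn *conn, struct client *c)
    {
            return io_read(conn, c->request, sizeof(c->request),
                           handle_request, c);
    }

    /* Step 3: only once the reply has hit the wire do we read again. */
    static struct io_plan *reply_written(struct io_conn *conn, struct client *c)
    {
            return read_request(conn, c);
    }

    /* Step 2: process the request, then write the reply. */
    static struct io_plan *handle_request(struct io_conn *conn, struct client *c)
    {
            /* ... fill c->reply and c->reply_len from c->request ... */
            return io_write(conn, c->reply, c->reply_len, reply_written, c);
    }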
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks greatly to the four people who I *know* have read this:
@wythe, @ZmnSCPxj, @SimonVrouwe, and @cdecker
Your feedback will help future developers seeking enlightenment!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We couldn't use it before because it asserted dbid was non-zero. Remove
assert and save some code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! lightningd: use hsm_get_client_fd() helper for global daemons too.
Suggested-by: @ZmnSCPxj
That matches the other CSV names (HSM was the first, so it was written
before the pattern emerged).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current code sends hsmstatus_client_bad_request via the req fd;
this won't work, since lightningd uses that synchronously and only
expects a reply to its commands. So send it via status_conn.
We also enhance hsmstatus_client_bad_request to include details, and
create convenience functions for it. Our previous handling was ad-hoc;
we sometimes just closed on the client without telling lightningd,
and sometimes we didn't tell lightningd *which* client was broken.
Also make every handler use the exact same prototype, so they now follow
the exact same pattern (hsmd *only* handles requests and makes replies).
I tested this manually by corrupting a request to hsmd.
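Roughly, the uniform shape looks like this (the typedef and names are
illustrative, not the literal hsmd symbols); bad requests get reported via
the status conn rather than the request fd:

    #include <ccan/io/io.h>
    #include <ccan/short_types/short_types.h>

    struct client;  /* hsmd's per-client state */

    /* Every request handler: same arguments, exactly one reply, so the
     * dispatcher is a plain switch on the incoming message type. */
    typedef struct io_plan *(*hsm_req_handler)(struct io_conn *conn,
                                               struct client *c,
                                               const u8 *msg_in);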
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently just ignore them. This is one reason the hsm (in some places)
explicitly calls log_broken, so we at least get some idea that something went wrong.
This was the only subdaemon which had a NULL msgcb and msgname, so eliminate
those checks in subd.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Got a spurious failure in test_no_fee_estimate: we fired too soon based on the
logs (presumably we raced in on the first response, but estimatesmartfee gets
called 3 times).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This enables addresses like --addr=autotor:127.0.0.1 or
--addr=autotor:localhost to just use the default Tor service port.
Signed-off-by: Saibato <Saibato.naga@pm.me>
- reduces the probability of a deadlock where we block on sending data because
the other peer cannot receive, because it in turn blocks on sending data, etc.
- this happens when either side sends so much data that it fills up the
kernel/network buffers
- however, sending out gossip can still block when a (malicious) peer never reads
The `json_tok_percentage` parser is used for the `fuzzpercent` in `getroute` and
`maxfeepercent` in `pay`. In both cases it seems reasonable to allow values
larger than 100%. This has bitten users in the past when they transferred single
satoshis to things like satoshis.place over a route longer than 2 hops.
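A minimal sketch of the relaxed check, with a hypothetical standalone parser
standing in for `json_tok_percentage`: negative values are still rejected,
but there is deliberately no upper bound.

    #include <stdbool.h>
    #include <stdlib.h>

    static bool parse_percent(const char *str, double *percent)
    {
            char *end;

            *percent = strtod(str, &end);
            if (end == str || *end != '\0')
                    return false;   /* not a number */
            if (*percent < 0.0)
                    return false;   /* negative percentages make no sense */
            return true;            /* note: no "<= 100" check on purpose */
    }

    /* e.g. maxfeepercent=500 is fine: a 1-satoshi payment may well cost
     * several satoshis in fees over a multi-hop route. */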
With the previous patch, we could still get stuck behind a low-prio
request. Generalize it into separate queues, and allow more than one
request in parallel.
Worth noting that the test time for `VALGRIND=0 pytest -vx tests/ -n 10`
doesn't change measurably.
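Roughly, the shape is something like this (the names, the two priorities and
the parallelism limit are illustrative, not the real bitcoind interface):

    #include <stddef.h>
    #include <ccan/list/list.h>

    enum prio { PRIO_HIGH, PRIO_LOW, NUM_PRIO };
    #define MAX_PARALLEL 4  /* illustrative cap on in-flight bitcoin-cli calls */

    struct bitcoin_cli {
            struct list_node list;
            /* ... command, args, callback ... */
    };

    struct req_queues {
            struct list_head pending[NUM_PRIO];
            size_t num_in_flight;
    };

    /* Drain high-priority requests first; never exceed the parallel limit,
     * since bitcoind fails rather than queueing when its thread pool is full. */
    static struct bitcoin_cli *next_request(struct req_queues *q)
    {
            if (q->num_in_flight >= MAX_PARALLEL)
                    return NULL;
            for (int p = PRIO_HIGH; p < NUM_PRIO; p++) {
                    struct bitcoin_cli *bcli
                            = list_pop(&q->pending[p], struct bitcoin_cli, list);
                    if (bcli) {
                            q->num_in_flight++;
                            return bcli;
                    }
            }
            return NULL;
    }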
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fiatjaf has a cheap VPS, connecting remotely to his home bitcoind node.
fiatjaf's latency on bitcoin-cli getblock is between 10 and 37 seconds.
fiatjaf's c-lightning node is getting one block per hour.
fiatjaf is sad.
We single-file our bitcoind requests, because bitcoind has a limited
thread pool and it *fails* rather than queueing if you upset it. We'd
probably be fine using separate queues for each command type, but simply
allowing some requests to cut in line should prove my theory that we're
getting stuck behind gossip verification requests.
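The "cut in line" idea, as a sketch (illustrative names, not the actual
lightningd code): there is still one pending list and one request in flight
at a time, but urgent requests go to the front.

    #include <stdbool.h>
    #include <ccan/list/list.h>

    struct bitcoin_cli {
            struct list_node list;
            bool high_prio;
    };

    static LIST_HEAD(pending);

    static void queue_request(struct bitcoin_cli *bcli)
    {
            if (bcli->high_prio)
                    list_add(&pending, &bcli->list);      /* cut in line */
            else
                    list_add_tail(&pending, &bcli->list); /* normal FIFO */
    }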
fiatjaf now gets one block per 2 minutes.
fiatjaf is less sad.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>