`openchannel_signed` and `openchannel_update`, which allow a user to
continue an openchannel or kick off the completion of an openchannel.
`openchannel_update` should be called until it returns with
`commitments_secured`.
The `openchannel_signed` command spans the openingd/channeld
boundary -- we don't return until we've successfully broadcast the
transaction (or timed out waiting for the peer to send tx_sigs back).
We need the PSBT in order to create the finalized tx once the peer's
tx_signatures are received. Since we're passing the PSBT, we no longer
need to pass the secondary message, as it was derived from the PSBT.
Also removes now-unused witness serialization code.
There are 3 commands for opening a channel with dualfunding.
`openchannel_init` is the first of these.
It initializes the open-channel dialog, and stops once we've run out of
updates (inputs/outputs) to send to the peer.
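Roughly, the three commands are meant to be driven like this (a minimal
sketch using pyln-client's generic `call`; the parameter and result field
names -- `initialpsbt`, `channel_id`, `psbt`, `commitments_secured`,
`signed_psbt` -- and the socket path are assumptions based on the
descriptions above, not a definitive interface):

```python3
# Sketch of driving the dualfunding open flow over the JSON-RPC socket.
# Field/parameter names and the socket path are assumptions.
from pyln.client import LightningRpc

rpc = LightningRpc("/tmp/l1/lightning-rpc")  # hypothetical rpc socket path

def open_dual_funded(node_id, amount, initial_psbt, sign_psbt):
    # openchannel_init: start the open-channel dialog, contributing our
    # funding inputs/outputs as a PSBT.
    resp = rpc.call("openchannel_init", {
        "id": node_id,
        "amount": amount,
        "initialpsbt": initial_psbt,
    })

    # openchannel_update: keep exchanging input/output updates until the
    # commitments are secured.
    while not resp.get("commitments_secured"):
        resp = rpc.call("openchannel_update", {
            "channel_id": resp["channel_id"],
            "psbt": resp["psbt"],
        })

    # openchannel_signed: hand back our signed PSBT; this doesn't return
    # until the funding tx is broadcast (or we time out waiting for the
    # peer's tx_signatures).
    signed = sign_psbt(resp["psbt"])  # sign_psbt is supplied by the caller
    return rpc.call("openchannel_signed", {
        "channel_id": resp["channel_id"],
        "signed_psbt": signed,
    })
```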
We need this for dual funding, since the interactive tx construction
protocol requires sending the full tx around. We add it to all PSBTs (if
we have it) here.
This adds a check for the environment variable $NO_PYTHON to the Makefiles so that:
- it fails with a defined error instead of some Python hiccup like
  `ModuleNotFoundError: No module named 'mako'`
- it is possible to manually export NO_PYTHON=1 to run the same testcase
  that travis runs on job 1, which causes so much pain ;)
Changelog-None
Since `fundchannel` now supports the 'close_to' argument, we can remove
all the logic needed to call fundchannel_start here.
Underneath we're still calling `fundchannel_start`; we're just one (or
two, if you count multifundchannel) call levels away from it now.
Finally, this extends the 'close_to' functionality up to the flagship 'open a
channel' command.
Changelog-Added: JSON-API `fundchannel` now accepts an optional 'close_to' param, a bitcoin address that the channel funding should be sent to on close. Requires `opt_upfront_shutdownscript`
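For illustration, using the new parameter from a pyln-client script might
look like this (the node id, address, and socket path are placeholders):

```python3
# Sketch: fund a channel whose mutual-close output is locked to an address
# we control. The peer must support `opt_upfront_shutdownscript`.
from pyln.client import LightningRpc

rpc = LightningRpc("/tmp/l1/lightning-rpc")   # hypothetical rpc socket path

result = rpc.call("fundchannel", {
    "id": "02...peer-node-id",                # placeholder peer id
    "amount": 100000,                         # channel size in satoshi
    "close_to": "bcrt1q...cold-wallet-addr",  # placeholder close address
})
print(result["txid"])
```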
`test_opening_tiny_channel` fails if EXPERIMENTAL_FEATURES is on because
we don't include the anchor in our reserve if we're the channel opener.
Seems fine to include in all cases?
We had one report of this, and then Eugene and Roasbeef of Lightning
Labs confirmed it; they saw misordered HTLCs on reconnection too.
Since we didn't enforce this when receiving HTLCs, we never noticed :(
Fixes: #3920
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Protocol: fixed retransmission order of multiple new HTLCs (causing channel close with LND)
We didn't care, but other implementations (particularly lnd) do. And it
does violate the spec.
(We need to use skip not xfail on the test which catches this, since
xfail doesn't seem to stop errors reported by cleanup)
(Includes Christian's typo fix!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
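For reference, the skip-vs-xfail distinction in pytest terms is roughly
this (the test and fixture names are made up):

```python3
import pytest

# xfail only treats a failure in the test body as expected; errors raised
# later during fixture cleanup are still reported. skip avoids running the
# test (and hence the cleanup failures) entirely.
@pytest.mark.skip(reason="peer closes the channel until this is fixed")
def test_htlc_out_of_order(node_factory):   # hypothetical test/fixture
    ...
```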
Great report from whitslack on this crash at startup:
```
2020-10-07T13:03:21.419Z **BROKEN** lightningd: FATAL SIGNAL 6 (version 0.9.1)
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: common/daemon.c:51 (crashdump) 0x559fb67bcc76
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: /var/tmp/portage/sys-libs/glibc-2.32-r2/work/glibc-2.32/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0 ((null)) 0x7f61cdca8baf
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: ../sysdeps/unix/sysv/linux/raise.c:50 (__GI_raise) 0x7f61cdca8b31
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: /var/tmp/portage/sys-libs/glibc-2.32-r2/work/glibc-2.32/stdlib/abort.c:79 (__GI_abort) 0x7f61cdc92535
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: /var/tmp/portage/sys-libs/glibc-2.32-r2/work/glibc-2.32/assert/assert.c:92 (__assert_fail_base) 0x7f61cdc9241e
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: /var/tmp/portage/sys-libs/glibc-2.32-r2/work/glibc-2.32/assert/assert.c:101 (__GI___assert_fail) 0x7f61cdca1241
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/subd.c:750 (subd_send_msg) 0x559fb67a1c31
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/subd.c:745 (subd_send_msg) 0x559fb67a1c31
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/peer_htlcs.c:252 (local_fail_in_htlc) 0x559fb6798f77
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/peer_htlcs.c:1441 (onchain_failed_our_htlc) 0x559fb6798f77
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/onchain_control.c:339 (handle_missing_htlc_output) 0x559fb6786b9d
2020-10-07T13:03:21.419Z **BROKEN** lightningd: backtrace: lightningd/onchain_control.c:455 (onchain_msg) 0x559fb6786b9d
```
The problem is that a channel with an onchaind can be in state FUNDING_SPEND_SEEN,
because onchaind has started but has not yet responded to init (which it does once it
has analyzed the commitment tx).
Channel B is onchain, and its onchaind fails the HTLC, and we try to send a msg
to channel A's onchaind as if it were channeld.
Explicitly check whether it's channeld, rather than trying to see if it's onchaind.
Fixes: #4114
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: crash: assertion fail at restart when source and destination channels of an HTLC are both onchain.
We're rarely in a hurry here, and bitcoind is aggressive with fees.
You can always spend this output if you really have to, using CPFP.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: mutual closing feerate reduced to "slow" to avoid overpaying.
We had a couple of instances where a plugin would be killed by `lightningd`
because we were returning a result or an exception twice, and it was hard to
trace down the logic error in the user plugin that caused that. This patch
records a traceback the first time we return a result/exception, and raises an
exception with a stacktrace of the first termination when a second one comes
in.
This can still terminate the plugin, but the programmer gets a clear
indication of where the result was set, and can potentially even recover from it.
Changelog-Added: pyln: Plugin method and hook requests prevent the plugin developer from accidentally setting the result multiple times, and will raise an exception detailing where the result was first set.
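As an illustration, the failure mode this guards against looks roughly like
this (the method name is made up, and the async-method/`Request.set_result`
usage is an assumption about the plugin interface):

```python3
from pyln.client import Plugin

plugin = Plugin()

@plugin.async_method("double_done")        # hypothetical method name
def double_done(request, **kwargs):
    request.set_result({"status": "ok"})   # first completion: traceback recorded
    # Logic error: completing the same request a second time. With this
    # change the call below raises, pointing at the set_result above,
    # instead of lightningd later killing the plugin over a duplicate
    # JSON-RPC response.
    request.set_result({"status": "oops"})

plugin.run()
```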
It is often pretty useful to use the builtin logging module to debug things,
including libraries that a plugin may use. This adds a simple
`PluginLogHandler` that maps the python logging levels to the `lightningd`
logging levels, and formats the record in a way that doesn't clutter up the
`lightningd` logs (no duplicate timestamps and levels).
This allows us to tweak the log level that is reported to `lightningd` simply
by using the following:
```python3
import logging
logging.basicConfig(level=logging.DEBUG)
```
Notice that in order for the logs to be displayed on the terminal or in the
logfile, both the logging level in the plugin _and_ the `--log-level`
`lightningd` is running with need to be adjusted (the python logging level only
controls which messages get forwarded to `lightningd`; it does not have the
power to overrule `lightningd` about what to actually display).
I chose `logging.INFO` as the default, since libraries have a tendency to spew
out everything in `logging.DEBUG` mode.
Changelog-Added: pyln: Plugins have been integrated with the `logging` module for easier debugging and error reporting.
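Putting it together, a plugin using the standard `logging` module might look
like this (the method name is made up; a minimal sketch, not a definitive
example):

```python3
import logging
from pyln.client import Plugin

plugin = Plugin()
log = logging.getLogger(__name__)

# Forward DEBUG and above to lightningd; what actually gets displayed still
# depends on the --log-level lightningd itself is running with.
logging.basicConfig(level=logging.DEBUG)

@plugin.method("greet")                    # hypothetical method name
def greet(plugin, name="world", **kwargs):
    log.debug("greet called with name=%r", name)
    return {"greeting": "Hello, {}".format(name)}

plugin.run()
```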