We used to have a hacky close timeout which fired immediately when we
closed because the connection was down. It's far better to have a
specific "connection lost" input, and have the state machine respond
to it just as it does to CMD_CLOSE.
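A minimal sketch of that (not the real daemon code; apart from
CMD_CLOSE the names are invented):

    #include <stdio.h>

    enum state_input {
            CMD_CLOSE,
            INPUT_CONNECTION_LOST,  /* replaces the old close timeout hack */
    };

    enum state { STATE_NORMAL, STATE_WAIT_FOR_CLOSE_ACK };

    /* Hypothetical helper: start the mutual close. */
    static enum state start_closing(void)
    {
            printf("queueing close packet\n");
            return STATE_WAIT_FOR_CLOSE_ACK;
    }

    static enum state state_update(enum state s, enum state_input in)
    {
            switch (in) {
            case CMD_CLOSE:
            case INPUT_CONNECTION_LOST:
                    /* Both inputs take exactly the same close path. */
                    return start_closing();
            }
            return s;
    }

    int main(void)
    {
            enum state s = state_update(STATE_NORMAL, INPUT_CONNECTION_LOST);
            return s == STATE_WAIT_FOR_CLOSE_ACK ? 0 : 1;
    }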
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't actually implement closing when we have HTLCs (we should
allow it, as that's what the clearing phase is for), since we'll soon
rewrite the HTLC handling to match the async HTLC protocol of BOLT #2.
Note that this folds the close paths, using a simple check for whether
we have a close transaction. That's a slight state-layer violation,
but it reduces code duplication.
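Sketching the fold (field and function names invented), the handler
just checks whether a close transaction already exists instead of
switching on a dedicated closing state:

    #include <stdbool.h>
    #include <stddef.h>

    struct bitcoin_tx;              /* opaque here */

    struct peer {
            /* Non-NULL once the close transaction has been constructed. */
            struct bitcoin_tx *close_tx;
    };

    /* Hypothetical check: one path whether or not we're closing, at
     * the cost of peeking at peer state from the state layer. */
    static bool already_closing(const struct peer *peer)
    {
            return peer->close_tx != NULL;
    }

    int main(void)
    {
            struct peer p = { .close_tx = NULL };
            return already_closing(&p) ? 1 : 0;
    }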
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Actually generating the anchor transaction in my implementation
requires interaction with bitcoind, which we want to be async. So add
a callback and a new state to wait for it.
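A toy version of the shape (names invented; the real code issues an
asynchronous request to bitcoind rather than calling the callback
inline):

    #include <stdio.h>

    enum state {
            STATE_INIT,
            STATE_WAIT_FOR_ANCHOR_CREATE,   /* new: parked until bitcoind answers */
            STATE_WAIT_FOR_ANCHOR_DEPTH,
    };

    struct anchor { const char *txid; };

    /* Callback fired once the (async) anchor creation completes. */
    static enum state anchor_created(const struct anchor *a)
    {
            printf("anchor %s created, broadcasting\n", a->txid);
            return STATE_WAIT_FOR_ANCHOR_DEPTH;
    }

    /* Kick off creation and park the state machine until the callback. */
    static enum state start_anchor_create(void (*ask_bitcoind)(void))
    {
            ask_bitcoind();         /* ask bitcoind to fund/build the anchor */
            return STATE_WAIT_FOR_ANCHOR_CREATE;
    }

    static void fake_bitcoind_request(void)
    {
            printf("asking bitcoind for anchor inputs...\n");
    }

    int main(void)
    {
            enum state s = start_anchor_create(fake_bitcoind_request);
            if (s == STATE_WAIT_FOR_ANCHOR_CREATE) {
                    struct anchor a = { "deadbeef" };
                    s = anchor_created(&a);
            }
            return s == STATE_WAIT_FOR_ANCHOR_DEPTH ? 0 : 1;
    }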
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is conceptually cleaner, especially since it means we're running
a command until we're set up (which prevents other commands, so no
special case is needed).
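A minimal sketch of the idea (names invented): setup runs as the
current command, so the ordinary one-command-at-a-time check rejects
everything else until it finishes:

    #include <stdbool.h>

    enum command { CMD_NONE, CMD_OPEN_CHANNEL, CMD_SEND_HTLC_UPDATE };

    struct peer {
            enum command current_command;   /* CMD_NONE when idle */
    };

    /* Generic entry point: refuse a new command while one is running.
     * Since opening/setup is itself a command, no special "still
     * setting up?" check is needed. */
    static bool try_command(struct peer *peer, enum command cmd)
    {
            if (peer->current_command != CMD_NONE)
                    return false;
            peer->current_command = cmd;
            return true;
    }

    int main(void)
    {
            struct peer p = { CMD_OPEN_CHANNEL };   /* setup in progress */
            return try_command(&p, CMD_SEND_HTLC_UPDATE) ? 1 : 0;
    }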
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd expect stop_commands to stop all commands, but we (ab)used
CMD_SEND_HTLC_FULFILL to send us R values even in the closing state.
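One way to picture the fix (names here are illustrative): deliver R
values via their own input instead of piggybacking on a command, so
stopping commands no longer cuts them off:

    enum state_input {
            CMD_SEND_HTLC_FULFILL,  /* a real command, stopped with the rest */
            INPUT_RVALUE,           /* R value delivery, still allowed while closing */
    };

    struct peer {
            int commands_stopped;   /* set once we enter a closing state */
    };

    /* Hypothetical dispatch: commands are gated, R values are not. */
    static int accept_input(const struct peer *peer, enum state_input in)
    {
            if (in == INPUT_RVALUE)
                    return 1;
            return !peer->commands_stopped;
    }

    int main(void)
    {
            struct peer p = { 1 };  /* closing: commands stopped */
            return accept_input(&p, INPUT_RVALUE) ? 0 : 1;
    }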
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When a unilateral close occurs, we have to watch the on-chain ("live")
HTLCs. If the other side spends their HTLC output, we need to grab
the R value. If it times out, we need to spend it back to ourselves.
If we get an R value, we need to spend our own HTLC output back to
ourselves.
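Illustratively (this is not the real resolution code; the names are
invented), the per-HTLC decision looks something like:

    #include <stdbool.h>

    enum htlc_action {
            HTLC_GRAB_R_VALUE,      /* extract R from their on-chain spend */
            HTLC_SPEND_TIMEOUT,     /* reclaim an expired HTLC to ourselves */
            HTLC_SPEND_WITH_R,      /* redeem our own HTLC output with R */
            HTLC_WAIT,
    };

    enum htlc_side { OFFERED_BY_US, OFFERED_TO_US };

    struct live_htlc {
            enum htlc_side side;
            bool spent_by_them;     /* they claimed it, revealing R */
            bool timed_out;
            bool know_r_value;      /* we learned R some other way */
    };

    static enum htlc_action resolve_htlc(const struct live_htlc *h)
    {
            if (h->side == OFFERED_BY_US) {
                    if (h->spent_by_them)
                            return HTLC_GRAB_R_VALUE;
                    if (h->timed_out)
                            return HTLC_SPEND_TIMEOUT;
            } else if (h->know_r_value) {
                    return HTLC_SPEND_WITH_R;
            }
            return HTLC_WAIT;
    }

    int main(void)
    {
            struct live_htlc h = { OFFERED_TO_US, false, false, true };
            return resolve_htlc(&h) == HTLC_SPEND_WITH_R ? 0 : 1;
    }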
Because there are multiple HTLCs, this doesn't fit very neatly into a
state machine. We divide the states into "have HTLCs" and "don't have
HTLCs", and use an INPUT_NO_MORE_HTLCS input to transition once all
HTLCs are resolved.
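A minimal sketch of that transition (the counting is invented for
illustration):

    #include <stdio.h>
    #include <stddef.h>

    enum state_input { INPUT_NO_MORE_HTLCS };

    struct peer {
            size_t num_live_htlcs;  /* on-chain HTLCs not yet resolved */
    };

    /* Hypothetical hook, called each time one live HTLC is resolved. */
    static void htlc_resolved(struct peer *peer,
                              void (*inject)(struct peer *, enum state_input))
    {
            if (--peer->num_live_htlcs == 0)
                    /* Last one: let the state machine leave the
                     * "have HTLCs" side and finish the close. */
                    inject(peer, INPUT_NO_MORE_HTLCS);
    }

    static void inject_input(struct peer *peer, enum state_input in)
    {
            (void)peer;
            if (in == INPUT_NO_MORE_HTLCS)
                    printf("all HTLCs resolved, completing close\n");
    }

    int main(void)
    {
            struct peer p = { 2 };
            htlc_resolved(&p, inject_input);        /* one left: nothing */
            htlc_resolved(&p, inject_input);        /* none left: emits input */
            return 0;
    }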
Our test harness now tracks individual HTLCs, so we refined some
inputs (in particular, it won't try to complete/timeout an HTLC before
we have any).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>