On node start we replay onchaind's transactions from the database /
our loaded htlc table. To keep things tidy, we shouldn't notify the
ledger about these, so we wrap pretty much everything in a flag that
tells us whether or not this is a replay.
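A minimal sketch of the idea, with hypothetical names (not the actual
onchaind plumbing): the ledger notification is simply skipped when the
flag says we're replaying.
```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a coin movement record. */
struct coin_mvt {
	const char *account;
	long long credit_msat;
};

static void notify_ledger(const struct coin_mvt *mvt, bool is_replay)
{
	/* On replay we already told the ledger before the restart. */
	if (is_replay)
		return;
	printf("ledger: %s +%lld msat\n", mvt->account, mvt->credit_msat);
}

int main(void)
{
	struct coin_mvt mvt = { "onchaind", 1000 };
	notify_ledger(&mvt, false); /* fresh event: notify */
	notify_ledger(&mvt, true);  /* startup replay: stay quiet */
	return 0;
}
```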
There's a very small corner case where dust transactions will get missed
if the node crashes after the htlc has been added to the database but
before we've successfully notified onchaind about it.
Notably, most of the obtrusive updates to the onchaind wrappers are
due to the fact that we record dust (ignored outputs) before we
receive notice that it has actually been confirmed.
HTLCs trigger a coin movement only when their final form (state) is
reached. This prevents us from needing to concern ourselves with
retries, and it is also the most correct answer to the question 'when
has the money irrevocably changed hands?'.
All coin movements should pass this bar, for ultimate accounting
correctness.
The current plan for coin movements involves tagging
origination/destination HTLCs with a separate tag from 'routed' HTLCs
(which pass through our node). In order to do this, we need a
persistent flag on incoming HTLCs recording whether or not we are the
final destination.
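A minimal sketch, with a hypothetical field name, of what that
persistent flag would let us do when tagging the movement:
```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical cut-down incoming HTLC: the flag would be stored in
 * the htlc table so it survives restarts. */
struct htlc_in_stub {
	unsigned long long id;
	bool we_are_destination;
};

static const char *mvt_tag(const struct htlc_in_stub *hin)
{
	return hin->we_are_destination ? "destination" : "routed";
}

int main(void)
{
	struct htlc_in_stub hin = { 1, true };
	printf("htlc %llu tagged as %s\n", hin.id, mvt_tag(&hin));
	return 0;
}
```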
I noticed the following in logs for tests/test_connection.py::test_feerate_stress:
```
DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Failing HTLC 18446744073709551615 due to peer death
DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: local_routing_failure: 8194 (WIRE_TEMPORARY_NODE_FAILURE)
```
This is because we report the (transient) node_failure error when our
channel_failure message is incomplete. Fix this wart up.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Previously we've used the term 'funder' to refer to the peer
paying the fees for a transaction; v2 of openchannel will make
this no longer true. Instead we rename this to 'opener', or the
peer sending the 'open_channel' message, since this will be universally
true in a dual-funding world.
The plugin can basically return whatever it thinks the preimage is,
but we weren't handling the case in which it doesn't actually match
the hash. If it doesn't match, we now just return an error claiming we
don't have any matching invoice.
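A minimal sketch of the check (toy hash instead of SHA256, hypothetical
names): the claimed preimage is only accepted if it hashes to the
HTLC's payment hash, otherwise we fail as if no invoice matched.
```c
#include <stdbool.h>
#include <stdio.h>

/* Toy placeholder: the real check hashes the 32-byte preimage with SHA256. */
static unsigned char toy_hash(const unsigned char *preimage, size_t len)
{
	unsigned char h = 0;
	for (size_t i = 0; i < len; i++)
		h ^= preimage[i];
	return h;
}

static bool preimage_matches(const unsigned char *preimage, size_t len,
			     unsigned char payment_hash)
{
	return toy_hash(preimage, len) == payment_hash;
}

int main(void)
{
	unsigned char preimage[] = { 0x01, 0x02, 0x03 };
	unsigned char payment_hash = 0xFF; /* does not match */

	if (!preimage_matches(preimage, sizeof(preimage), payment_hash))
		printf("fail: no matching invoice\n");
	return 0;
}
```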
One is called on every plugin return, and tells us whether to continue;
the other is only called if every plugin says ok.
This works for things like payload replacement, where we need to process
the results from each plugin, not just the final one!
We should probably turn everything into a chained callback next
release.
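A minimal sketch of the two-callback shape (hypothetical names, not the
real plugin_hook API): one callback runs per plugin response and
decides whether the chain continues; the other runs once, only if
nothing stopped it.
```c
#include <stdbool.h>
#include <stdio.h>

struct payload { int value; };

/* Called on every plugin response; return false to stop the chain. */
typedef bool (*per_response_cb)(struct payload *p, int plugin_handled_it);
/* Called once, only if every plugin let the chain continue. */
typedef void (*final_cb)(struct payload *p);

static bool on_each(struct payload *p, int plugin_handled_it)
{
	if (plugin_handled_it)
		return false;
	p->value++;	/* e.g. payload replacement per plugin */
	return true;
}

static void on_all_ok(struct payload *p)
{
	printf("all plugins said ok, value=%d\n", p->value);
}

int main(void)
{
	int responses[] = { 0, 0, 0 };	/* nobody handled it */
	struct payload p = { 0 };
	per_response_cb each = on_each;
	final_cb done = on_all_ok;

	for (size_t i = 0; i < 3; i++)
		if (!each(&p, responses[i]))
			return 0;
	done(&p);
	return 0;
}
```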
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The callback must take ownership of the payload (almost all do, but
now it's explicit).
And since the payload and cb_arg arguments to plugin_hook_call_() are
always identical, make them a single parameter.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note that it's channeld which calculates the shared secret, too. This
minimizes the work that lightningd has to do, at the cost of passing
this through.
We also don't yet save the blinding field(s) to the database.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This requires us to call ecdh() in the corner case where the blinding seed
is in the TLV itself (which is the case for the start of a blinded route).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This happened on my testnet node because I've been failing to reconnect to
a node which created a channel and never exchanged announcement sigs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently abuse the added_htlc and failed_htlc messages to tell channeld
about existing htlcs when it restarts. It's clearer to have an explicit
'existing_htlc' type which contains all the information for this case.
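A minimal sketch of such a type (fields are illustrative, not the
actual wire message): one struct carries everything channeld needs to
relearn an HTLC on restart, including how it may already have resolved.
```c
#include <stdint.h>
#include <stdio.h>

struct existing_htlc_stub {
	uint64_t id;
	uint64_t amount_msat;
	uint32_t cltv_expiry;
	unsigned char payment_hash[32];
	/* At most one of these is set if the HTLC already resolved. */
	const unsigned char *preimage;	/* fulfilled */
	const unsigned char *failonion;	/* failed */
};

int main(void)
{
	struct existing_htlc_stub h = { .id = 7, .amount_msat = 1000,
					.cltv_expiry = 600000 };
	printf("existing htlc %llu: %s\n", (unsigned long long)h.id,
	       h.preimage ? "fulfilled" : h.failonion ? "failed" : "pending");
	return 0;
}
```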
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For messages, we use the onion, but payload lengths 0 and 1 aren't
special. Create a flag to disable that logic.
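A minimal sketch of the distinction (hypothetical helper, following
BOLT 4's payload length rules): for payments a length byte of 0 means a
legacy payload and 1 is reserved, while for messages those values are
ordinary lengths.
```c
#include <stdbool.h>
#include <stdio.h>

/* Returns true if this length byte introduces a TLV payload. */
static bool is_tlv_payload(unsigned char len_byte, bool for_message)
{
	if (for_message)
		return true;		/* 0 and 1 are not special here */
	return len_byte > 1;		/* 0 = legacy, 1 = reserved */
}

int main(void)
{
	printf("payment, len 0: %d\n", is_tlv_payload(0, false));
	printf("message, len 0: %d\n", is_tlv_payload(0, true));
	return 0;
}
```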
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Even without optimization, it's faster to walk all the channels than
ping another daemon and wait for the response.
Changelog-Changed: Forwarding messages is now much faster (less inter-daemon traffic)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of saving a stripped_update, we use the new
local_fail_in_htlc_needs_update.
One minor change: we return the more correct
towire_temporary_channel_failure when the node is still syncing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The idea is that gossipd can give us the cupdate we need for an error, and
we wire things up so that we ask for it (async) just before we send the
error to the subdaemon.
I tried many other things, but they were all too high-risk.
1. We need to ask gossipd every time, since it produces these lazily
(in particular, it doesn't actually generate an offline update unless
the channel is used).
2. We can't do async calls in random places, since we'll end up with
an HTLC in limbo. What if another path tries to fail it at the same time?
3. This allows us to use a temporary_node_failure error, and upgrade it
when gossipd replies (see the sketch after this list). This doesn't
change any existing assumptions.
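A minimal sketch of that sequencing (hypothetical names, synchronous
stand-ins for the async calls): the HTLC fails immediately with
temporary_node_failure, and the stored error is upgraded once gossipd
hands back a channel_update.
```c
#include <stdio.h>

struct htlc_fail {
	const char *failmsg;		/* what we will send back */
	int awaiting_cupdate;		/* gossipd request outstanding */
};

static void fail_htlc(struct htlc_fail *f)
{
	f->failmsg = "temporary_node_failure";
	f->awaiting_cupdate = 1;	/* ask gossipd (async in reality) */
}

static void gossipd_replied(struct htlc_fail *f, const char *cupdate)
{
	if (!f->awaiting_cupdate)
		return;
	if (cupdate)			/* upgrade to the real error */
		f->failmsg = "temporary_channel_failure (with channel_update)";
	f->awaiting_cupdate = 0;
}

int main(void)
{
	struct htlc_fail f = { 0, 0 };
	fail_htlc(&f);
	gossipd_replied(&f, "stripped channel_update");
	printf("final error: %s\n", f.failmsg);
	return 0;
}
```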
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a common thing to do, so create a macro.
Unfortunately, it still needs the type arg, because the parameter may
be const, and the return cannot be, and C doesn't have a general
"(-const)" cast.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
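A hedged sketch of the macro pattern described above (not the actual
macro; plain malloc instead of tal, error handling omitted): the
explicit type argument is what lets the macro hand back a non-const
pointer even when the source pointer is const.
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical duplication macro: the type arg names the non-const
 * return type, since C cannot just "subtract" const from the input. */
#define DUP_ARR(type, p, n) \
	((type *)memcpy(malloc((n) * sizeof(type)), (p), (n) * sizeof(type)))

int main(void)
{
	const int src[3] = { 1, 2, 3 };
	int *copy = DUP_ARR(int, src, 3);	/* non-const result */

	copy[0] = 99;
	printf("%d %d %d\n", copy[0], copy[1], copy[2]);
	free(copy);
	return 0;
}
```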
This completes the conversion; any in-flight HTLC failures get turned into temporary_node_failures.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This cleans up the "local failure" callers for incoming HTLCs to hand
an onionreply instead of making us generate it from the code inside
make_failmsg.
(The db path still needs make_failmsg, so that's next).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: Plugins: htlc_accepted_hook "failure_code" only handles simple cases now, use "failure_message".
Unfortunately the invoice_payment_hook can give us a failcode, so I simply
restrict it to the two sensible ones.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: plugins: invoice_payment_hook "failure_code" only handles simple cases now, use "failure_message".
We tell channeld that an htlc is bad by sending it a 'struct
failed_htlc'. This usually contains an onionreply to forward, but for
the case where the onion itself was bad, it contains a failure code
instead.
This makes "send a failed_htlc for a bad onion" a completely separate
code path, so we can then work on removing failcodes from the other
path.
In several places 'failcode' is now changed to 'badonion' to reflect
that it can only be a BADONION failcode.
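A minimal sketch of the two shapes a failed HTLC can now take (fields
are illustrative, not the real struct failed_htlc): either an
onionreply to forward, or a BADONION failure code plus the hash of the
offending onion.
```c
#include <stdint.h>
#include <stdio.h>

struct failed_htlc_stub {
	uint64_t id;
	/* Non-NULL: forward this already-wrapped onionreply. */
	const unsigned char *onionreply;
	/* Otherwise: a BADONION-flagged code plus sha256 of the bad onion,
	 * sent as update_fail_malformed_htlc. */
	uint16_t badonion;
	unsigned char sha256_of_onion[32];
};

int main(void)
{
	struct failed_htlc_stub f = { .id = 3, .badonion = 1 /* placeholder */ };
	printf("htlc %llu: %s\n", (unsigned long long)f.id,
	       f.onionreply ? "forward onionreply"
			    : "send update_fail_malformed_htlc");
	return 0;
}
```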
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At the moment, we store e.g. WIRE_TEMPORARY_CHANNEL_FAILURE, and then
lightningd has a large demux function which turns that into the correct
error message.
Such an enum demuxer is an anti-pattern.
Instead, store the message directly for output HTLCs; channeld now
sends us an error message rather than an error code.
For input HTLCs we will still need the failure code if the onion was
bad (since we need to prompt channeld to send a completely different
message than normal), though we can (and will!) eliminate its use in
non-BADONION failure cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to change our internal structure next, so this is preparation.
We populate existing errors with temporary node failures, for simplicity.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of making it ourselves, lightningd does it. Now we only have
two cases of failed htlcs: completely malformed (BADONION), and with
an already-wrapped onion reply to send.
This makes channeld's job much simpler.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I hadn't realized that lightningd asks gossipd every time we forward
a payment. But I'm going to abuse it here to get the latest channel_update,
otherwise (as lightningd takes over error message generation) lightningd
needs to do an async request at various painful points.
So have gossipd tell us the latest update (stripped so it's compatible
with the strange in-onion-error format).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Turn it into temporary node failure: this only happens if we restart
with a failed htlc in, but it's clearer and more robust to handle it
generically.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For incoming htlcs, we need failure details in case we need to
re-xmit them. But for outgoing htlcs, lightningd is telling us it
already knows they've failed, so we just need to flag them failed
and don't need the details.
Internally, we set the ->fail to a dummy non-NULL value; this is
cleaned up next.
This matters for the next patch, which moves onion handling into
lightningd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. forward_htlc sets hout to NULL.
2. forward_htlc passes &hout to send_htlc_out.
3. forward_htlc checks the failcode, frees hout (which is still NULL) and
sets hout to NULL (again). This in fact covers every failcode which
send_htlc_out returns.
We should ensure send_htlc_out sets *houtp to NULL on failure; in fact,
both callers pass houtp, so we can make it unconditional.
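A minimal sketch of that contract (hypothetical, heavily simplified):
the callee nulls out the caller's pointer on failure, so the caller no
longer needs the free-and-reset dance.
```c
#include <stdio.h>
#include <stdlib.h>

struct htlc_out { int id; };

/* Returns a failcode (non-zero) on failure, in which case *houtp is NULL. */
static int send_htlc_out(int peer_connected, struct htlc_out **houtp)
{
	if (!peer_connected) {
		*houtp = NULL;
		return 0x1007;		/* e.g. a temporary failure */
	}
	*houtp = malloc(sizeof(**houtp));
	(*houtp)->id = 1;
	return 0;
}

int main(void)
{
	struct htlc_out *hout;
	int failcode = send_htlc_out(0, &hout);

	/* No need to free and re-NULL hout in the caller any more. */
	printf("failcode=%#x hout=%p\n", failcode, (void *)hout);
	free(hout);
	return 0;
}
```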
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If the peer is not connected, or some other error means we don't
actually create an outgoing HTLC, we don't record the
short_channel_id. This is unhelpful!
Pass the scid down to the wallet code, and explicitly hand the
scid and amount down to the notification code rather than handing it
the htlc_out (which it doesn't need).
Changelog-Changed: JSON API: `listforwards` now shows `out_channel` even if we couldn't forward.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make the `htlc_accepted` hook the first chained hook in our repertoire. The
plugins are called one after the other in order until we have no more plugins
or the HTLC was handled by one of the plugins. If no plugin handles the HTLC
we continue to handle it internally like always.
Handling in this case means the plugin returns either `{"result": "resolve",
...}` or `{"result": "fail", ...}`.
Changelog-Changed: plugin: Multiple plugins can now register for the htlc_accepted hook.
The newly introduced type is used to determine what the call semantics of the
hook are. We have `single`, corresponding to the old behavior, as well as
`chain`, which allows multiple plugins to register for the hook; they are then
called sequentially as long as each plugin returns `{"result": "continue"}`,
and the chain exits early once a plugin handles the hook event.
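A minimal sketch of the two semantics (not the real plugin machinery):
under `chain`, responses are consumed in registration order and the
chain stops at the first plugin that doesn't say "continue"; under
`single`, only one registration is allowed in the first place.
```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Simulated responses from plugins registered on a chained hook. */
	const char *responses[] = { "continue", "continue", "resolve" };

	for (size_t i = 0; i < 3; i++) {
		if (strcmp(responses[i], "continue") != 0) {
			printf("plugin %zu handled the HTLC: %s\n",
			       i, responses[i]);
			return 0;
		}
	}
	printf("no plugin handled it; fall back to internal handling\n");
	return 0;
}
```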