Rounds out the application of `upfront_shutdown_script`, allowing
an accepting node to specify a close_to address.
Prior to this, only the opening node could specify one.
Changelog-Added: Plugins: Allow the 'accepter' to specify an upfront_shutdown_script for a channel via a `close_to` field in the openchannel hook result
The --dev-force-tmp-channel-id flag takes a 64-character hex string
to use as the temporary channel id. Useful for spec tests.
[ Fixed crash in non-DEVELOPER mode --RR ]
Changelog-None
Takes advantage of upfront-shutdown-script to permit users to
specify the close-to address for a channel at open, by adding
a `close_to` field to `fundchannel_start`.
Note that this only takes effect if `fundchannel_start` returns
with `close_to` set -- otherwise, the peer doesn't
support `option_upfront_shutdown_script`.
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of its next release) will offer global features, and also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
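A minimal, self-contained sketch of the fold (the values are illustrative,
not the daemon's actual code); LN feature bitmaps keep the lowest-numbered
bits in the *last* byte, so shorter maps are aligned at the tail:

    #include <stdio.h>
    #include <string.h>

    /* OR a (shorter or equal) feature map into dst, tail-aligned. */
    static void fold_features(unsigned char *dst, size_t dstlen,
                              const unsigned char *src, size_t srclen)
    {
            for (size_t i = 0; i < srclen; i++)
                    dst[dstlen - srclen + i] |= src[i];
    }

    int main(void)
    {
            unsigned char globalf[2] = { 0x02, 0x00 };
            unsigned char localf[2] = { 0x08, 0x82 };
            unsigned char merged[2];

            memcpy(merged, globalf, sizeof(merged));
            fold_features(merged, sizeof(merged), localf, sizeof(localf));
            /* The same merged map is sent in both init fields. */
            printf("merged: %02x%02x\n", merged[0], merged[1]);
            return 0;
    }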
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a pointer to chainparams that fails valgrind if we do anything
chain-specific before setting it.
Suggested-by: Rusty Russell <@rustyrussell>
Valgrind error file: valgrind-errors.112365
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x1105E0: main (openingd.c:1504)
==112365==
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x110604: main (openingd.c:1507)
==112365==
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x110628: main (openingd.c:1510)
==112365==
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really
variable.
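For instance, a sketch of the pattern (the name and the DEVELOPER value
are illustrative; two weeks matches the spec's gossip prune interval):

    #include <stdio.h>

    #if DEVELOPER
    /* Tests want pruning to trigger quickly. */
    #define PRUNE_TIMEOUT 30
    #else
    #define PRUNE_TIMEOUT (14 * 24 * 3600) /* two weeks */
    #endif

    int main(void)
    {
            printf("prune timeout: %d seconds\n", PRUNE_TIMEOUT);
            return 0;
    }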
Interestingly, our pruning test was always kinda broken: we have to let
two cycles pass, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing, and cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When the peer sends back an error, we return NULL, but sending
a NULL message back to lightningd causes a parsing error on its
side. negotiation_failed already calls back the peer; all we need
to do here is exit.
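A self-contained sketch of the flow after the fix (negotiation_failed
here is a stand-in for the real helper):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in: the real helper has already reported the failure. */
    static void negotiation_failed(const char *why)
    {
            fprintf(stderr, "negotiation failed: %s\n", why);
    }

    int main(void)
    {
            negotiation_failed("peer sent error message");
            /* Previously we returned NULL here, which lightningd could
             * not parse as a message; exiting is all that's needed. */
            exit(0);
    }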
The way we build transactions, serialize them, and compute fees depends on the
chain we are working on, so let's add some context to the transactions.
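A simplified, self-contained sketch of the idea (the types and the
elements fee adjustment are illustrative, not the real definitions):

    #include <stdbool.h>
    #include <stdio.h>

    struct chainparams {
            const char *network_name;
            bool is_elements; /* fees are an explicit output on elements */
    };

    struct tx {
            const struct chainparams *chainparams;
            /* ... inputs, outputs ... */
    };

    /* Fee computation consults the tx's chain, not a global. */
    static unsigned long fee_for_weight(const struct tx *tx,
                                        unsigned long feerate_per_kw,
                                        unsigned long weight)
    {
            if (tx->chainparams->is_elements)
                    weight += 4; /* illustrative extra weight */
            return feerate_per_kw * weight / 1000;
    }

    int main(void)
    {
            struct chainparams bitcoin = { "bitcoin", false };
            struct tx tx = { &bitcoin };

            printf("fee: %lu\n", fee_for_weight(&tx, 253, 724));
            return 0;
    }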
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It probably doesn't matter to "fundchannel_cancel" exactly why the
fundchannel didn't work (though it can read the error msg), and we
should always fail any pending fundchannel_complete command.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Big wiring re-org for funding-continue
In openingd, we move the 'persistent' state (their basepoints,
pubkey, and the minimum_depth requirement for the opening tx) into
the state object. We also look to keep code-reuse between
'continue' and normal 'fundchannel' as high as possible. Both
of these call the same 'fundchannel_reply' at the end.
In opening_control.c, we remap fundchannel_reply such that it is
now aware of the difference between an externally and an internally
funded channel open. It's the same return path, with the difference
that one finishes making and broadcasting the funding transaction;
the other skips this.
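A sketch of the shape this takes (stand-in types; the fields follow
the commit message, not the real struct):

    #include <stdint.h>

    /* Simplified stand-ins for the real c-lightning types. */
    struct basepoints { unsigned char raw[4][33]; };
    struct pubkey { unsigned char raw[33]; };

    /* Persistent per-open state, shared by the normal fundchannel
     * path and the 'continue' path. */
    struct state {
            struct basepoints their_basepoints;
            struct pubkey their_funding_pubkey;
            uint32_t minimum_depth; /* required depth of the opening tx */
            /* ... */
    };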
Add an RPC method (not working at the moment) called
`fundchannel_continue` that takes as its parameters a
node_id and a txid for a transaction (that ostensibly has an output
for a channel).
For `fundchannel_cancel` we're going to want to 'successfully'
fail a channel funding operation. This allows us to report a
failure back as an RPC success, instead of automatically failing
the RPC request.
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete
entries from the store, we can have the per-peer daemons simply read it
sequentially and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
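A sketch of such a field (enumerator names are illustrative, not the
daemon's actual ones):

    /* Where we are in streaming gossip to this peer. */
    enum gossip_state {
            GOSSIP_NOT_STREAMING,   /* not started / no filter yet */
            GOSSIP_STREAMING_STORE, /* reading the store sequentially */
            GOSSIP_LIVE,            /* caught up; forwarding new gossip */
    };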
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ask gossipd for the channel_update for the outgoing channel; any other
messages it sends us get queued for later processing.
But this is overzealous: we can shunt those msgs to the peer while
we're waiting. This fixes a nasty case where we have to handle
WIRE_GOSSIPD_NEW_STORE_FD messages by queuing the fd for later.
This then means that WIRE_GOSSIPD_NEW_STORE_FD can be handled
internally inside handle_gossip_msg(), since it's always dealt with
the same way, simplifying all callers.
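A simplified sketch of the resulting shape (types and helpers are
stand-ins):

    /* Stand-ins for the real daemon context and helpers. */
    struct daemon;
    static void swap_gossip_store_fd(struct daemon *d, int fd)
    {
            (void)d; (void)fd;
    }
    static void forward_gossip(struct daemon *d, const void *msg)
    {
            (void)d; (void)msg;
    }

    enum gossip_wire_type {
            WIRE_GOSSIPD_NEW_STORE_FD,
            WIRE_OTHER_GOSSIP,
    };

    static void handle_gossip_msg(struct daemon *d,
                                  enum gossip_wire_type type,
                                  const void *msg, int fd)
    {
            switch (type) {
            case WIRE_GOSSIPD_NEW_STORE_FD:
                    /* Always dealt with the same way, so absorb it
                     * here: callers never see the fd. */
                    swap_gossip_store_fd(d, fd);
                    return;
            case WIRE_OTHER_GOSSIP:
                    forward_gossip(d, msg);
                    return;
            }
    }

    int main(void)
    {
            handle_gossip_msg(0, WIRE_GOSSIPD_NEW_STORE_FD, 0, 5);
            return 0;
    }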
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My raspberry pi node hung up on my other node:
lightning_openingd-... chan #1: Got bad message from gossipd: 0db1
This is because we didn't handle that message in one path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of reading the store ourselves, we can just send them an
offset. This saves gossipd a lot of work, putting it where it belongs
(in the daemon responsible for the specific peer).
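A self-contained sketch of the per-peer side (real store records have
framing and deleted flags, which this ignores):

    #include <stdio.h>
    #include <unistd.h>

    /* Stream the gossip store to the peer from the offset gossipd
     * handed us; error handling mostly elided. */
    static void stream_store_from(int store_fd, off_t offset, int peer_fd)
    {
            char buf[4096];
            ssize_t r;

            if (lseek(store_fd, offset, SEEK_SET) < 0)
                    return;
            while ((r = read(store_fd, buf, sizeof(buf))) > 0)
                    if (write(peer_fd, buf, r) != r)
                            return;
    }

    int main(void)
    {
            /* Demo: stream stdin to stdout from offset 0. */
            stream_store_from(0, 0, 1);
            return 0;
    }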
MCP bench results:
store_load_msec:28509-31001(29206.6+/-9.4e+02)
vsz_kb:580004-580016(580006+/-4.8)
store_rewrite_sec:11.640000-12.730000(11.908+/-0.41)
listnodes_sec:1.790000-1.880000(1.83+/-0.032)
listchannels_sec:21.180000-21.950000(21.476+/-0.27)
routing_sec:2.210000-11.160000(7.126+/-3.1)
peer_write_all_sec:36.270000-41.200000(38.168+/-1.9)
Significant savings in streaming gossip:
-peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
+peer_write_all_sec:35.780000-37.980000(36.43+/-0.81)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth to struct channel in common/initial_channel.h and
change the corresponding init function, new_initial_channel().
3. Add confirmation_needed to struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum_depth, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth.
6. channeld ignores channel_funding_depth unless it's > minimum_depth
(except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth); see the sketch
after this list.
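A simplified sketch, following the list above literally (types and the
billboard printout are stand-ins):

    #include <stdint.h>
    #include <stdio.h>

    struct peer {
            uint32_t minimum_depth;
            uint32_t confirmation_needed;
            int funding_locked;
    };

    static void handle_funding_depth(struct peer *peer, uint32_t depth)
    {
            if (depth > peer->minimum_depth) {
                    peer->confirmation_needed = 0;
                    peer->funding_locked = 1; /* lockin_complete */
                    return;
            }
            /* Otherwise only the billboard is updated. */
            peer->confirmation_needed = peer->minimum_depth - depth;
            printf("Need %u more confirmation(s)\n",
                   peer->confirmation_needed);
    }

    int main(void)
    {
            struct peer peer = { .minimum_depth = 3 };
            handle_funding_depth(&peer, 1); /* too shallow */
            handle_funding_depth(&peer, 4); /* locked in */
            return 0;
    }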
We need to do it in various places, but we shouldn't do it lightly:
the primitives are there to help us get overflow handling correct.
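For instance, a self-contained sketch in the style of the helpers in
common/amount.h (the real amount_msat_add is the one to use; this only
shows the shape):

    #include <stdbool.h>
    #include <stdint.h>

    struct amount_msat { uint64_t millisatoshis; };

    /* Checked addition: returns false on overflow, forcing the
     * caller to consider the failure case. */
    static bool amount_msat_add_sketch(struct amount_msat *val,
                                       struct amount_msat a,
                                       struct amount_msat b)
    {
            if (a.millisatoshis > UINT64_MAX - b.millisatoshis)
                    return false;
            val->millisatoshis = a.millisatoshis + b.millisatoshis;
            return true;
    }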
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically we tell it that every field ending in '_msat' is a struct
amount_msat, and 'satoshis' is an amount_sat. The exceptions are
channel_update's fee_base_msat, which is a u32, and
final_incorrect_htlc_amount's incoming_htlc_amt, which is also a
'struct amount_msat'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>