We don't compile with NDEBUG defined, but if we did, this code would
vanish. I did a quick audit, inspired by @ZmnSCPxj.
I actually hacked up something to compile with NDEBUG (many unused vars
resulted, and of course unit tests are allowed to rely on assert()), and
after this the testsuite still passes.
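For illustration (none of this is from the patch itself): the hazard such
an audit looks for is an assert() with a side effect, since the whole
expression vanishes when NDEBUG is defined.

    #include <assert.h>
    #include <stdio.h>

    /* Does some work; returns nonzero on success. */
    static int refill(int *count)
    {
        *count = 10;
        return 1;
    }

    int main(void)
    {
        int count = 0;

        /* BUG under NDEBUG: the call disappears and count stays 0. */
        assert(refill(&count));

        /* Safe pattern: do the work unconditionally, assert the result.
         * (Under NDEBUG, `ok` becomes set-but-unused: the source of the
         * "many unused vars" mentioned above.) */
        int ok = refill(&count);
        assert(ok);

        printf("count = %d\n", count);
        return 0;
    }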
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These messages may be exchanged between the master and any daemon. For now
that means just the daemons a peer may be attached to at any time, since
the first user of this is the custommsg infrastructure.
This makes it clear we're dealing with a message which is a wrapped error
reply (needing unwrap_onionreply), not an already-unwrapped one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We still close the channel if we *send* an error, but we seem to have hit
another case where LND sends an error which seems transient, so this will
make a best-effort attempt to preserve our channel in that case.
Some tests have to be modified, since they no longer terminate the way
they did previously :(
Changelog-Changed: quirks: We'll now reconnect and retry if we get an error on an established channel. This works around lnd sending error messages that may be non-fatal.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is an intermediary step: we still don't save it to the database,
but we do use the fee_states struct to track it internally.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This uses the same state machine as HTLCs, but they're only
ever added, not removed. Since we can only have one in each
state, we use a simple array; mostly NULL.
We could make this more space-efficient by folding everything into the
first 5 states, but that would be more complex than just using the
identical state machine.
One subtlety: we don't send uncommitted fee_states over the wire.
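A minimal sketch of the shape of this, with the state count and names
assumed rather than taken from the actual patch:

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_FEE_STATES 10 /* assumption: mirrors the HTLC state count */

    struct fee_states {
        /* feerate[s] is NULL unless a feerate currently sits in state s;
         * at most one feerate per state, hence a plain array. */
        uint32_t *feerate[NUM_FEE_STATES];
    };

    /* Illustrative helper: find whichever state holds a feerate. */
    static const uint32_t *first_feerate(const struct fee_states *fs)
    {
        for (size_t i = 0; i < NUM_FEE_STATES; i++)
            if (fs->feerate[i])
                return fs->feerate[i];
        return NULL;
    }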
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
test_openchannel_hook_1:
    MEMLEAK: 0x557593c164e8
    label=wire/fromwire.c:320:char[]
    backtrace:
        ccan/ccan/tal/tal.c:437 (tal_alloc_)
        ccan/ccan/tal/tal.c:466 (tal_alloc_arr_)
        wire/fromwire.c:320 (fromwire_wirestring)
        openingd/gen_opening_wire.c:205 (fromwire_opening_got_offer_reply)
        openingd/openingd.c:1067 (fundee_channel)
        openingd/openingd.c:1279 (handle_peer_in)
        openingd/openingd.c:1535 (main)
    parents:
fromwire_opening_got_offer_reply() allocates two fields off NULL:
err_reason and our_upfront_shutdown_script. err_reason is used
immediately afterwards (and was the leak detected here), so fixing
that is easy.
To fix the leak of our_upfront_shutdown_script, it makes sense to simply
make it a member of 'state'.
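A sketch of that second fix; the field name is assumed for illustration.
The point is that a tal allocation parented to NULL has no owner, so
reparenting it onto the long-lived state object lets tal (and the leak
checker) see who frees it:

    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>

    /* Minimal stand-in for openingd's state object. */
    struct state {
        u8 *upfront_shutdown_script;
    };

    static void adopt_script(struct state *state, u8 *script)
    {
        /* After tal_steal(), `script` is a child of `state` and is
         * freed along with it, instead of leaking off NULL. */
        state->upfront_shutdown_script = tal_steal(state, script);
    }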
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rounds out the application of `upfront_shutdown_script`, allowing
an accepting node to specify a close_to address.
Prior to this, only the opening node could specify one.
Changelog-Added: Plugins: Allow the 'accepter' to specify an upfront_shutdown_script for a channel via a `close_to` field in the openchannel hook result
The --dev-force-tmp-channel-id flag takes a 64-character hex string
to use as the temporary channel id. Useful for spec tests.
[ Fixed crash in non-DEVELOPER mode --RR ]
Changelog-None
Takes advantage of upfront-shutdown-script to permit users to
specify the close-to address for a channel at open, by adding
a `close_to` field to `fundchannel_start`.
Note that this only takes effect if `fundchannel_start` returns
with `close_to` set -- otherwise the peer doesn't
support `option_upfront_shutdown_script`.
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of next release) will offer global features, and also
expect option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
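Folding is just an OR of the two bitsets, remembering that BOLT feature
bitfields are big-endian (bit 0 lives in the last byte), so the shorter
set must be right-aligned. A sketch, with the helper name assumed:

    #include <string.h>
    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>

    static u8 *features_or(const tal_t *ctx, const u8 *a, const u8 *b)
    {
        size_t alen = tal_count(a), blen = tal_count(b);
        size_t len = alen > blen ? alen : blen;
        u8 *merged = tal_arrz(ctx, u8, len);

        /* Right-align both inputs, then OR them together. */
        memcpy(merged + (len - alen), a, alen);
        for (size_t i = 0; i < blen; i++)
            merged[len - blen + i] |= b[i];
        return merged;
    }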
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a pointer to chainparams, and valgrind complains if we do
anything chain-specific before setting it.
Suggested-by: Rusty Russell <@rustyrussell>
Valgrind error file: valgrind-errors.112365
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x1105E0: main (openingd.c:1504)
==112365==
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x110604: main (openingd.c:1507)
==112365==
==112365== Conditional jump or move depends on uninitialised value(s)
==112365== at 0x110628: main (openingd.c:1510)
==112365==
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really
variable.
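To illustrate the pattern (the name and value here are made up, not from
the patch):

    #if DEVELOPER
    /* Developer builds can override this at runtime, e.g. from a test. */
    static u32 gossip_round_secs = 60;
    #else
    /* Release builds get the plain number; no pretend-variable. */
    #define gossip_round_secs 60
    #endif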
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing: cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When the peer sends back an error, we return NULL, but sending
a NULL message back to lightningd causes a parsing error on its
side. negotiation_failed already calls back the peer, so all we need
to do here is exit.
The way we build transactions, serialize them, and compute fees depends on the
chain we are working on, so let's add some context to the transactions.
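A simplified sketch of the shape of the change (fields trimmed for
illustration): each transaction carries a pointer to its chain's
parameters, so serialization and fee code can consult them instead of
relying on global state.

    /* Which chain we're on: genesis hash, fee defaults, address prefixes... */
    struct chainparams;

    struct bitcoin_tx {
        /* Chain context, consulted when serializing and computing fees. */
        const struct chainparams *chainparams;
        /* ... inputs, outputs, etc. ... */
    };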
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It probably doesn't matter to "fundchannel_cancel" exactly why the
fundchannel didn't work (though it can read the error msg), and we
should always fail any pending fundchannel_complete command.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Big wiring re-org for funding-continue
In openingd, we move the 'persistent' state (their basepoints,
pubkey, and the minimum_depth requirement for the opening tx) into
the state object. We also look to keep code-reuse between
'continue' and normal 'fundchannel' as high as possible. Both
of these call the same 'fundchannel_reply' at the end.
In opening_control.c, we remap fundchannel_reply such that it is
now aware of the difference between an externally and an internally
funded channel open. It's the same return path, with the difference
that one finishes making and broadcasting the funding transaction
while the other skips this.
Add an RPC method (not working at the moment) called
`fundchannel_continue` that takes as its parameters a
node_id and a txid for a transaction (that ostensibly has an output
for a channel)
For `fundchannel_cancel` we're going to want
to 'successfully' fail a funding channel operation. This allows
us to report the failure back as an RPC success, instead of
automatically failing the RPC request.
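A sketch of what "successfully" failing looks like at the JSON-RPC layer
(headers omitted; these are lightningd's JSON-RPC helpers, and the body
is illustrative, not the actual handler):

    /* The user asked us to cancel, so the funding flow being dead is
     * exactly what they wanted: report success, not command_fail(). */
    static struct command_result *cancel_done(struct command *cmd)
    {
        struct json_stream *response = json_stream_success(cmd);
        json_add_string(response, "cancelled", "Channel open canceled");
        return command_success(cmd, response);
    }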
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
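The filter check itself is simple; a sketch, assuming a struct that
mirrors the first_timestamp/timestamp_range fields of BOLT 7's
gossip_timestamp_filter message:

    #include <stdbool.h>
    #include <stdint.h>

    struct gossip_filter {
        uint32_t first_timestamp;  /* from gossip_timestamp_filter */
        uint32_t timestamp_range;
    };

    /* Should we relay a gossip message with this timestamp to the peer? */
    static bool timestamp_in_range(const struct gossip_filter *f, uint32_t ts)
    {
        /* Compare offsets so first_timestamp + range can't overflow. */
        return ts >= f->first_timestamp
            && ts - f->first_timestamp < f->timestamp_range;
    }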
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete
entries from the store, we can have the per-peer daemons simply read it
sequentially and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
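A hypothetical sketch of the streaming loop (the real gossip_store record
format differs; the point is that each per-peer daemon just keeps a file
offset and reads forward):

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    struct gossip_stream {
        int store_fd; /* read-only fd onto the gossip store */
        off_t off;    /* how far this peer has streamed so far */
    };

    /* Read the next length-prefixed record, or NULL if we're caught up. */
    static uint8_t *stream_next(struct gossip_stream *gs, uint32_t *len)
    {
        uint32_t hdr;
        uint8_t *msg;

        if (pread(gs->store_fd, &hdr, sizeof(hdr), gs->off)
            != (ssize_t)sizeof(hdr))
            return NULL;
        msg = malloc(hdr);
        if (!msg)
            return NULL;
        if (pread(gs->store_fd, msg, hdr, gs->off + sizeof(hdr))
            != (ssize_t)hdr) {
            free(msg);
            return NULL;
        }
        gs->off += sizeof(hdr) + hdr;
        *len = hdr;
        return msg;
    }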
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ask gossipd for the channel_update for the outgoing channel; any other
messages it sends us get queued for later processing.
But this is overzealous: we can shunt those msgs to the peer while
we're waiting. This fixes a nasty case where we have to handle
WIRE_GOSSIPD_NEW_STORE_FD messages by queuing the fd for later.
This then means that WIRE_GOSSIPD_NEW_STORE_FD can be handled
internally inside handle_gossip_msg(), since it's always dealt with in
the same way, simplifying all callers.
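A sketch of the resulting shape (signature and helper names simplified,
not verbatim from the patch):

    /* Callers never see WIRE_GOSSIPD_NEW_STORE_FD any more: it is
     * consumed right here, the same way every time. */
    void handle_gossip_msg(struct per_peer_state *pps, const u8 *msg)
    {
        switch (fromwire_peektype(msg)) {
        case WIRE_GOSSIPD_NEW_STORE_FD:
            /* Swap in the new store fd passed over the gossipd socket. */
            gossip_store_switch_fd(pps, fdpass_recv(pps->gossip_fd));
            break;
        default:
            /* Real gossip: relay it on to the peer. */
            sync_crypto_write(pps, msg);
        }
    }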
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>