memleak can't see into htables, since htable overloads unused pointer bits.
And it can't see into intmap, since intmap uses malloc (memleak only looks
for tal pointers).
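For illustration, here's a minimal sketch (not the actual htable code) of the
pointer-bit trick: once spare bits are stolen, the stored words no longer match
any tal pointer a scanner might be looking for.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: pack flag bits into the unused low bits of an
     * aligned pointer.  A leak scanner comparing raw words against known
     * tal pointers will no longer find a match. */
    static uintptr_t pack(const void *p, uintptr_t bits)
    {
        return (uintptr_t)p | bits;               /* low bits free if p is aligned */
    }

    static const void *unpack(uintptr_t w)
    {
        return (const void *)(w & ~(uintptr_t)3); /* mask off the stolen bits */
    }

    int main(void)
    {
        static int obj;                           /* stands in for a tal allocation */
        uintptr_t w = pack(&obj, 3);              /* what actually sits in the table */
        printf("stored word %#lx, real pointer %p, recovered %p\n",
               (unsigned long)w, (void *)&obj, (void *)unpack(w));
        return 0;
    }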
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use feerate in several places, and each one really should react
differently when it's not available (such as when bitcoind is still
catching up); see the sketch after this list:
1. For general fee-enforcement, we use the broadest possible limits.
2. For closingd, we use it as our opening negotiation point: just use half
the last tx feerate.
3. For onchaind, we can use the last tx feerate as a guide for our own txs;
it might be too high, but at least we know it was sufficient to be mined.
4. For withdraw and fund_channel, we can simply refuse.
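Roughly, the fallback policy looks like this (a sketch only; the enum and
helper are hypothetical, the real logic lives in the respective subdaemons):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical names, illustrating the four cases above. */
    enum feerate_user {
        FEE_ENFORCEMENT,        /* 1. general fee enforcement */
        FEE_CLOSING,            /* 2. closingd negotiation start */
        FEE_ONCHAIN,            /* 3. onchaind's own transactions */
        FEE_SPEND,              /* 4. withdraw / fund_channel */
    };

    /* Returns false if the operation must simply be refused. */
    static bool feerate_fallback(enum feerate_user user,
                                 uint32_t last_tx_feerate, uint32_t *rate)
    {
        switch (user) {
        case FEE_ENFORCEMENT:
            *rate = 0;                      /* caller uses broadest limits instead */
            return true;
        case FEE_CLOSING:
            *rate = last_tx_feerate / 2;    /* open negotiation at half last feerate */
            return true;
        case FEE_ONCHAIN:
            *rate = last_tx_feerate;        /* maybe high, but known to be mineable */
            return true;
        case FEE_SPEND:
            return false;                   /* no estimate: refuse */
        }
        return false;
    }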
Fixes: #1836
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Manipulate fees via fake-bitcoin-cli. It's not quite the same, as these
values are pre-smoothing, so we need a restart to override that where we
really need an exact change. Or we can wait until the feerate reaches a
certain value in cases where we don't care about the exact amount.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Not just during startup: bitcoind could fail to give estimates until later,
but we don't want to smooth with zero.
The test changes in the next patch trigger this, so I didn't write a
separate test for this patch.
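The idea, as a minimal sketch (the weighting constant and names are
illustrative, not the actual smoothing code):

    #include <stdint.h>

    /* Never fold a zero ("no estimate yet") sample into the smoothed
     * feerate, otherwise early zeros drag the average towards nothing. */
    static uint32_t smooth_feerate(uint32_t smoothed, uint32_t sample)
    {
        if (sample == 0)
            return smoothed;                 /* bitcoind gave no estimate: skip it */
        if (smoothed == 0)
            return sample;                   /* first real sample: take it as-is */
        return (smoothed * 9 + sample) / 10; /* 90/10 EMA; weight is an assumption */
    }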
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't respond to fee changes until we're locked in: make sure we catch
up at that point.
Note that we use the NORMAL feerate during opening, but IMMEDIATE after, so
this often sends a fee update. For the tests this breaks, we set those two
feerates to be equal.
This (sometimes) changes the behavior of test_permfail, as we now get an
immediate commitment, so that test is also fixed to always wait for the
commitment to complete.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a noop if we're opening a new channel (channel_fees_can_change(channel)
is false until the funding is locked in), but it's important if we're restarting.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When in this state, we send a canned error "Awaiting unilateral close".
We enter this both when we drop to chain, and when we're trying to get
them to drop to chain due to option_data_loss_protect.
As this state (unlike channel errors) is saved to the database, we will
*never* talk to the peer again while in this state, so they can't
confuse us.
Since we set this state in channel_fail_permanent() (which is the only
place we call drop_to_chain for a unilateral close), we don't need to
save to the db: channel_set_state() does that for us.
This state change has a subtle effect: we return WIRE_UNKNOWN_NEXT_PEER
instead of WIRE_TEMPORARY_CHANNEL_FAILURE as soon as we get a failure
with a peer. To provoke a temporary failure in test_pay_disconnect we
take the node offline.
Reported-by: Christian Decker @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we don't try to unilaterally close after a restart, *and*
we can tell onchaind to try to use the point to recover funds when the
peer unilaterally closes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For option_data_loss_protect, the peer can prove to us that it's ahead;
it gives us the (hopefully honest!) per_commitment_point it will use,
and we make sure we don't broadcast the commitment transaction we have.
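A rough sketch of the reestablish-time check (field and helper names are
assumptions, not the actual channeld code):

    #include <stdbool.h>
    #include <stdint.h>

    struct pubkey { unsigned char der[33]; };   /* placeholder for illustration */

    /* If the peer has seen more of our revocations than we remember sending,
     * we have lost state: remember their per_commitment_point (so onchaind
     * can recover funds later) and never broadcast our stale commitment. */
    static bool check_data_loss(uint64_t their_next_revocation_number,
                                uint64_t our_next_revocation_number,
                                const struct pubkey *their_point,
                                struct pubkey *remembered_point,
                                bool *may_broadcast_own_commitment)
    {
        if (their_next_revocation_number <= our_next_revocation_number)
            return false;                       /* they're not ahead: nothing to do */
        *remembered_point = *their_point;       /* hopefully honest! */
        *may_broadcast_own_commitment = false;  /* our commitment is stale/revoked */
        return true;                            /* wait for their unilateral close */
    }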
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ignore the incoming fields for now, but this means we advertise the
option and send the required fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Peer features are only kept for connected peers (as they can change),
but we didn't update them on reconnect. The main effect was that
after a restart we displayed the features as empty, even after
reconnecting.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. l1 update_fee -> l2
2. l1 commitment_signed -> l2 (using new feerate)
3. l1 <- revoke_and_ack l2
4. l1 <- commitment_signed l2 (using new feerate)
5. l1 -> revoke_and_ack l2
When we break the connection after #3, the reconnection causes #4 to
be retransmitted. But it turns out l1 wasn't telling the master to set
the local feerate until it received the commitment_signed, so on
reconnect it uses the old feerate, with predictable results (a bad
signature).
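The shape of the fix, as a hedged sketch (hypothetical names): record the
pending local feerate with the master as soon as the fee change is in flight,
rather than waiting for the peer's commitment_signed, so a retransmission
after reconnect is verified against the right rate.

    #include <stdint.h>

    /* Illustrative only; in the real code this state is persisted via the
     * master (lightningd), so it survives a disconnect. */
    struct channel_fees { uint32_t local_feerate; };

    /* Record the new local feerate as soon as the fee change is committed
     * to, rather than when the peer's commitment_signed arrives. */
    static void note_feerate_change(struct channel_fees *fees, uint32_t feerate)
    {
        fees->local_feerate = feerate;
    }

    /* On reconnect, the retransmitted commitment_signed is then checked
     * against this rate, not the stale pre-update_fee one. */
    static uint32_t feerate_for_retransmit(const struct channel_fees *fees)
    {
        return fees->local_feerate;
    }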
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It introduces imprecision (it took 1 satoshi off the results in the coming
tests), and we already have a helper for this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only did this when we were first creating a wallet, or when we
asked for a relative rescan, not in the normal case!
Fixes: #1843
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Normal wallet txs get reconfirmed as blocks come in, but those which need
closeinfo are more fragile, so we reconfirm them manually using txwatch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to use the txwatch facility for UTXOs, where there's no channel,
so allow that to be NULL, and hand in the struct lightningd which callers
want anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was a very simple change and allowed us to remove the special
`json_opt_tok` macro.
Moved the callback out of `common/json.c` to `lightningd/json.c` because the new
callbacks are dependent on `struct command` etc.
(I already started on `json_tok_number`)
My plan is to:
1. Upgrade json_tok_X one at a time, maybe a PR for each one.
2. When done, rename the macros (i.e., remove "_tal").
3. Remove all vestiges of the old callbacks.
4. Add new callbacks so that we no longer need json_tok_tok!
   (e.g., json_tok_label, json_tok_str, json_tok_msat)
Signed-off-by: Mark Beckwith <wythe@intrig.com>
1. A connect convenience variable for improved readability.
2. A comment explaining that the timer is on the channel, not the HTLC.
3. Use modern Python style in test_htlc_send_timeout.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've fixed the bug where nodes receiving a connection would
try to reconnect to the source IP/port of that connection, we expose an
issue mentioned by other implementers: reconnections can continually
cross over unless we add some fuzz. One second should be sufficient.
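The change amounts to something like this (a sketch; the actual randomness
helper and constant in the source may differ):

    #include <stdint.h>
    #include <stdlib.h>

    /* Add up to one second of random fuzz so two nodes that both decide to
     * reconnect don't keep crossing over each other forever. */
    static uint32_t reconnect_delay_msec(uint32_t base_msec)
    {
        return base_msec + (uint32_t)(rand() % 1000);
    }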
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
[ Squashed into single commit --RR ]
This adds two new macros, `p_req_tal()` and `p_opt_tal()`. These support
callbacks that take a `struct command *` context. Example:
    static bool json_tok_label_x(struct command *cmd,
                                 const char *name,
                                 const char *buffer,
                                 const jsmntok_t *tok,
                                 struct json_escaped **label)
The above is taken from the run-param unit test (near the bottom of the diff).
The return value is true on success, or false on failure (in which case the
callback has already called command_fail itself).
We can pretty much remove all remaining usage of `json_tok_tok` in the codebase
with this type of callback.
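For context, a caller would then look roughly like this (usage sketch only;
it follows the example above and the existing param() convention, not
necessarily the final API):

    static void json_set_label(struct command *cmd,
                               const char *buffer, const jsmntok_t *params)
    {
        struct json_escaped *label;

        /* p_req_tal() passes cmd through to json_tok_label_x(); on failure
         * the callback has already called command_fail() itself. */
        if (!param(cmd, buffer, params,
                   p_req_tal("label", json_tok_label_x, &label),
                   NULL))
            return;
        /* ... use label ... */
    }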
We currently hand the error back to the master, which stores it for
future connections and hands it back to another openingd to send before
exiting. Just send it directly; it's simpler and more reliable.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Include it as an optional field in the connect_to_peer message (that message
was added before we had optional fields).
The only issue is that reconnects want it too, so again connectd hands
it back to the master in connectctl_connect_failed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
connectd tells the master about every disconnection, and the master knows
whether it's important to reconnect. Just get the master to invoke a new
connect command if it considers the peer important!
The only twist is timeouts: we don't want to immediately reconnect if
we've failed to connect. To solve this, connectd passes a 'delaytime'
to the master when a connection fails, and the master passes it back
when it asks for a connection.
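A sketch of the round trip (names are assumptions, not the actual wire
messages):

    #include <stdbool.h>
    #include <stdint.h>

    struct peer_info { bool important; };       /* illustrative only */

    /* In the real code this issues a new connect request to connectd after
     * the given delay (e.g. via a timer). */
    static void schedule_connect(struct peer_info *peer, uint32_t delay_sec)
    {
        (void)peer;
        (void)delay_sec;
    }

    /* Master's view: connectd reported a failed attempt, suggesting how
     * long to back off.  Only important peers get an automatic retry. */
    static void connect_failed(struct peer_info *peer, uint32_t suggested_delay_sec)
    {
        if (!peer->important)
            return;
        schedule_connect(peer, suggested_delay_sec);   /* echo the delay back */
    }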
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to separate implicit connection requests (i.e. timed retries
for important peers) from explicit ones, and send a
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT for the latter.
In the success case, that's now redundant, since we hand the connected
peer to the master using WIRE_CONNECT_PEER_CONNECTED; we just need a
message for the failure case. And we might as well tell the master
every failure, so we don't have to distinguish internally.
This also solves a race we had before: connectd would send
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT which completes the incoming
JSON connect command, then send WIRE_CONNECT_PEER_CONNECTED. So
there's a window where the JSON command can return, but the peer isn't
known to lightningd yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>