In one case, the channel_update which we expected to activate the channel
from l2 was suppressed as redundant. This is certainly valid, so just
check the results.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was a very simple change and allowed us to remove the special
`json_opt_tok` macro.
Moved the callback from `common/json.c` to `lightningd/json.c` because the new
callbacks are dependent on `struct command` etc.
(I already started on `json_tok_number`)
My plan is to:
1. Upgrade json_tok_X one at a time, maybe a PR for each one.
2. When done, rename macros (i.e., remove "_tal").
3. Remove all vestiges of the old callbacks
4. Add new callbacks so that we no longer need json_tok_tok!
(e.g., json_tok_label, json_tok_str, json_tok_msat)
Signed-off-by: Mark Beckwith <wythe@intrig.com>
Waiting for three node_announcements isn't always enough, since l2 can
publish two of them (an independent bug). Do the more Right Thing and
just wait for 30 seconds of no input...
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. a `connect` convenience variable for improved readability.
2. a comment explaining that timer is on channel, not HTLC.
3. use modern Python style in test_htlc_send_timeout
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now sending a ping makes sense: it should force the other end to send
a reply, unblocking the commitment process.
Note that rather than waiting for a reply, we're actually spinning on
a 100ms loop in this case. But it's simple and it works.
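As a rough sketch of that loop (the peer_* helpers and types here are
hypothetical, not the actual daemon code):

    #include <stdbool.h>
    #include <unistd.h>

    /* Hypothetical helper: the ping forces the peer to reply, so a
     * dead TCP connection surfaces now, while we can still fail the
     * incoming HTLC. */
    static bool wait_for_peer_traffic(struct peer *peer, int max_loops)
    {
        send_ping(peer, 1 /* num_pong_bytes */);
        while (!peer_received_bytes(peer)) {
            if (max_loops-- <= 0)
                return false;
            usleep(100 * 1000);    /* the 100ms loop */
        }
        return true;
    }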
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This doesn't do much (though we might get an error before we send the
commitment_signed), but it's infrastructure for the next patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Currently, if we don't realize a TCP connection is down, we almost
certainly don't find out until *after* we've sent the
commitment_signed message, in which case we cannot fail the incoming
HTLC.
This test demonstrates that. Note the 30-second sleep: we should really
run Travis tests in parallel!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Avoid that 200ms loss. We don't want to disable Nagle generally,
since it's great for gossip and other traffic; we just want to push at
critical times.
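A minimal sketch of the push, assuming a Linux TCP socket (the helper
name is illustrative):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Briefly enabling TCP_NODELAY flushes any partial segment Nagle
     * was holding back; turning it off again keeps coalescing for
     * gossip and other bulk traffic. */
    static void tcp_push_pending(int fd)
    {
        int val = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
        val = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
    }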
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've fixed the bug where nodes receiving a connection would
try to reconnect to the source IP/port of that connection, we expose
an issue mentioned by other implementers: reconnections can continually
cross over unless we add some fuzz. One second should be sufficient.
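For illustration, the fuzz can be as simple as this (the function name
is hypothetical, and plain rand() stands in for whatever randomness
source the daemon uses):

    #include <stdlib.h>

    /* Add 0-999ms of jitter so two peers that keep dialling each
     * other at the same instant eventually miss. */
    static unsigned int reconnect_delay_ms(unsigned int base_ms)
    {
        return base_ms + (unsigned int)(rand() % 1000);
    }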
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have an address hint, we start with that, but we'll use
node_announcement information if required.
Note: we (ab)use the address hint when restoring from the database
or reconnecting, even if the connection was *incoming*. That meant
that the recipient of a connection would *never* manage to connect out.
We still don't take multiple addresses from the DNS seeds: I assume we
should, since there could be IPv4 and IPv6.
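The lookup order above, as a hypothetical sketch (the helper and the
fields are illustrative, not the real structures):

    /* Prefer the stored hint; otherwise fall back to an address from
     * the peer's node_announcement. */
    static const struct wireaddr *choose_address(const struct peer *peer)
    {
        if (peer->addr_hint)
            return peer->addr_hint;
        return first_announced_address(peer);
    }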
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're currently overriding fatal() with something that actually
returns, which conflicts with its NORETURN declaration.
This breaks in the next patch, which wants a real fatal() in wallet.h.
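The conflict in miniature (a sketch, not the actual headers):

    /* Declared as never returning: */
    void fatal(const char *fmt, ...) NORETURN;

    /* Override that *does* return -- undefined behaviour for any
     * caller compiled against the NORETURN declaration, such as the
     * new one in wallet.h: */
    void fatal(const char *fmt, ...)
    {
        /* ...record the error, then fall off the end... */
    }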
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
[ Squashed into single commit --RR ]
This adds two new macros, `p_req_tal()` and `p_opt_tal()`. These support
callbacks that take a `struct command *` context. Example:
    static bool json_tok_label_x(struct command *cmd,
                                 const char *name,
                                 const char *buffer,
                                 const jsmntok_t *tok,
                                 struct json_escaped **label)
The above is taken from the run-param unit test (near the bottom of the diff).
The callback returns true on success, or false (in which case it has already called command_fail itself).
We can pretty much remove all remaining usage of `json_tok_tok` in the codebase
with this type of callback.
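A hypothetical usage sketch (the handler name and parameter wiring are
illustrative):

    static void json_example(struct command *cmd,
                             const char *buffer, const jsmntok_t *params)
    {
        struct json_escaped *label;

        if (!param(cmd, buffer, params,
                   p_req_tal("label", json_tok_label_x, &label),
                   NULL))
            return; /* json_tok_label_x already called command_fail */
        /* ... use label ... */
    }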
So much code for so little noticeable difference, but note that
`state` (an accidental legacy from before we moved the field into
`channels` long ago) no longer exists.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently hand the error back to the master, who then stores it for
future connections and hands it back to another openingd to send and exit.
Just send directly; it's more reliable and simpler.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Include it as an optional field in the connect_to_peer message (it was
added before we had optional fields).
The only issue is that reconnects want it too, so again connectd hands
it back to master in connectctl_connect_failed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
connectd tells master about every disconnection, and master knows
whether it's important to reconnect. Just get the master to invoke a new
connect command if it considers the peer important!
The only twist is timeouts: we don't want to immediately reconnect if
we've failed to connect. To solve this, connectd passes a 'delaytime'
to the master when a connection fails, and the master passes it back
when it asks for a connection.
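In outline, the master side looks something like this (the handler and
helper names are illustrative):

    /* On connectctl_connect_failed from connectd: if the peer is
     * important, retry after the delay connectd suggested. */
    static void handle_connect_failed(struct lightningd *ld,
                                      const struct pubkey *id,
                                      u32 delaytime)
    {
        if (important_peer(ld, id))
            schedule_reconnect(ld, id, delaytime);
    }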
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to separate implicit connection requests (i.e. timed retries
for important peers) and explicit ones, and send a
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT for the latter.
In the success case, that's now redundant, since we hand the connected
peer to the master using WIRE_CONNECT_PEER_CONNECTED; we just need a
message for the failure case. And we might as well tell the master
every failure, so we don't have to distinguish internally.
This also solves a race we had before: connectd would send
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT which completes the incoming
JSON connect command, then send WIRE_CONNECT_PEER_CONNECTED. So
there's a window where the JSON command can return, but the peer isn't
known to lightningd yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>