Add towire_int() and fromwire_int() functions to "(de)serialize"
"int". This will only work as long as both the caller of towire_int()
and the caller of fromwire_int() use the same in-memory representation
of signed integers and have the same sizeof(int).
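A minimal sketch of the idea, assuming a simple cursor-over-buffer
scheme rather than the real towire()/fromwire() plumbing:

    #include <string.h>

    /* Sketch only: raw memcpy of the native representation, which is
     * exactly why both ends must agree on sizeof(int) and on the
     * in-memory representation of signed integers. */
    static void towire_int(unsigned char **cursor, int v)
    {
            memcpy(*cursor, &v, sizeof(v));
            *cursor += sizeof(v);
    }

    static int fromwire_int(const unsigned char **cursor)
    {
            int v;
            memcpy(&v, *cursor, sizeof(v));
            *cursor += sizeof(v);
            return v;
    }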
Changelog-None
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of its next release) will offer global features, and
will also expect option_static_remotekey to be a *global* feature. So
we send our (merged) feature bitset as both global and local in init,
and fold those bitsets together when we get an init msg.
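Folding is just an OR of the two bitmaps; a hedged sketch (the helper
name is illustrative, and we right-align because BOLT #1 bitmaps are
big-endian, with bit 0 in the last byte):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch: merge globalfeatures and localfeatures into one bitmap,
     * right-aligning the shorter one so matching bits line up. */
    static unsigned char *fold_features(const unsigned char *gf, size_t gflen,
                                        const unsigned char *lf, size_t lflen)
    {
            size_t len = gflen > lflen ? gflen : lflen;
            unsigned char *out = calloc(len ? len : 1, 1);

            memcpy(out + (len - gflen), gf, gflen);
            for (size_t i = 0; i < lflen; i++)
                    out[len - lflen + i] |= lf[i];
            return out;
    }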
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
With enable-autotor-v2 given on the command line, the Tor service call
creates v2 onions instead of the default v3 onions.
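For example (hedged: the option spelling follows this message, and the
autotor target is your Tor control socket, commonly 127.0.0.1:9051):

    lightningd --enable-autotor-v2 --addr=autotor:127.0.0.1:9051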
Signed-off-by: Saibato <saibato.naga@pm.me>
Keeping the uintmap that orders all the broadcastable messages is
expensive: 130MB for the million-channels project. But now that we
delete obsolete entries from the store, we can have the per-peer
daemons simply read the store sequentially and stream the gossip
themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may initialize it itself
when the first filter is sent.
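The streaming itself can then be as dumb as a sequential read; a
sketch, with the record framing assumed (length-prefixed) rather than
the actual gossip_store layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: forward each store record to the peer in file order;
     * no in-memory ordering map required. */
    static void stream_store(FILE *store,
                             void (*send_to_peer)(const uint8_t *, uint16_t))
    {
            uint8_t lenbuf[2], msg[65535];

            while (fread(lenbuf, sizeof(lenbuf), 1, store) == 1) {
                    uint16_t len = (lenbuf[0] << 8) | lenbuf[1];
                    if (len == 0 || fread(msg, len, 1, store) != 1)
                            break;
                    send_to_peer(msg, len);
            }
    }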
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently hand the feature set over from lightningd, but that would
be confusing if the two sets were ever different.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need several notleak() annotations here (see the sketch after this list):
1. The temporary structure which is handed to retry_peer_connected().
It's waiting for the master to respond to our connect_reconnected
message.
2. We don't keep a pointer to the io_conn for a peer, so we need to
mark those as not being a leak.
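Roughly the pattern (the types and callback below are hypothetical
stand-ins, not the real connectd code):

    #include <ccan/io/io.h>
    #include <ccan/tal/tal.h>
    #include <common/memleak.h>

    struct daemon;                              /* stand-in */
    struct retrying_state { int attempts; };    /* stand-in */
    static struct io_plan *peer_init(struct io_conn *conn, void *rs);

    static void handle_connect(struct daemon *daemon, int fd)
    {
            /* Case 1: temporary state that outlives this function while
             * we wait for the master's reply, so the detector must
             * skip it. */
            struct retrying_state *rs
                    = notleak(tal(daemon, struct retrying_state));

            /* Case 2: we never store the io_conn pointer anywhere. */
            notleak(io_new_conn(daemon, fd, peer_init, rs));
    }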
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Include it as an optional field in the connect_to_peer message (it was
added before we had optional fields).
The only issue is that reconnects want it too, so connectd again hands
it back to the master in connectctl_connect_failed.
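One simple encoding for an optional trailing field is a presence flag
followed by the value; a sketch using the existing helpers (the
generated code may differ in detail):

    #include <wire/wire.h>
    #include <common/wireaddr.h>

    /* Sketch: NULL means "no address hint on the wire". */
    static void towire_opt_addrhint(u8 **pptr, const struct wireaddr *addr)
    {
            towire_bool(pptr, addr != NULL);
            if (addr)
                    towire_wireaddr(pptr, addr);
    }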
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
connectd tells master about every disconnection, and master knows
whether it's important to reconnect. Just get the master to invoke a new
connect command if it considers the peer important!
The only twist is timeouts: we don't want to immediately reconnect if
we've failed to connect. To solve this, connectd passes a 'delaytime'
to the master when a connection fails, and the master passes it back
when it asks for a connection.
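Illustratively (the constants and helper name here are made up), the
master can grow the delay on each round trip:

    /* Sketch: start at 1 second, double on each failed attempt, and
     * cap so important peers are still retried regularly. */
    static unsigned int next_delaytime(unsigned int last_delaytime)
    {
            unsigned int d = last_delaytime ? last_delaytime * 2 : 1;
            return d > 300 ? 300 : d;
    }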
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to separate implicit connection requests (ie. timed retries
for important peers) and explicit ones, and send a
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT for the latter.
In the success case, that's now redundant, since we hand the connected
peer to the master using WIRE_CONNECT_PEER_CONNECTED; we just need a
message for the failure case. And we might as well tell the master
every failure, so we don't have to distinguish internally.
This also solves a race we had before: connectd would send
WIRE_CONNECTCTL_CONNECT_TO_PEER_RESULT which completes the incoming
JSON connect command, then send WIRE_CONNECT_PEER_CONNECTED. So
there's a window where the JSON command can return, but the peer isn't
known to lightningd yet.
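The failure path then collapses to a single notification; sketched
below, with the generated towire helper's exact signature assumed:

    /* Sketch: every failure, implicit retry or explicit command,
     * becomes one message to the master; success is already covered
     * by WIRE_CONNECT_PEER_CONNECTED. */
    static void tell_master_failed(struct daemon *daemon,
                                   const struct pubkey *id,
                                   unsigned int delaytime)
    {
            u8 *msg = towire_connectctl_connect_failed(NULL, id, delaytime);
            daemon_conn_send(&daemon->master, take(msg));
    }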
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now simply maintain a pubkey set for connected peers (we only care
if there's a reconnect), not the entire peer structure.
lightningd no longer queries us for getpeers: it knows more than we do
already.
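Illustratively, the per-peer state shrinks to set membership (a linear
scan here for brevity; the real structure is more efficient):

    #include <stdbool.h>
    #include <string.h>

    /* Sketch: all connectd needs to remember per connected peer is
     * "is this key already connected?", i.e. a set of 33-byte pubkeys. */
    struct connected_set {
            unsigned char (*keys)[33];
            size_t n;
    };

    static bool is_connected(const struct connected_set *s,
                             const unsigned char key[33])
    {
            for (size_t i = 0; i < s->n; i++)
                    if (memcmp(s->keys[i], key, 33) == 0)
                            return true;
            return false;
    }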
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossip_getnodes_entry was used by gossipd for reporting nodes, and for
reporting peers. But the local_features field is only available for peers,
and most other fields are only available from node_announcement.
Note that the connectd change actually means we get less information
about peers: gossipd used to do the node lookup for peers and include the
node_announcement information if it had it.
Since generate_wire.py can't create arrays-of-arrays, we add a 'struct
peer_features' to encapsulate the two feature arrays for each peer, and
for convenience we add it to lightningd/gossip_msg.
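The shape of the encapsulation, sketched with the usual towire
conventions (treat field order and details as assumptions):

    #include <ccan/tal/tal.h>
    #include <wire/wire.h>

    /* Sketch: wrap the two per-peer feature arrays so the wire
     * generator only ever sees an array of peer_features, never an
     * array of arrays. */
    struct peer_features {
            u8 *localfeatures;
            u8 *globalfeatures;
    };

    static void towire_peer_features(u8 **pptr, const struct peer_features *pf)
    {
            towire_u16(pptr, tal_count(pf->localfeatures));
            towire_u8_array(pptr, pf->localfeatures,
                            tal_count(pf->localfeatures));
            towire_u16(pptr, tal_count(pf->globalfeatures));
            towire_u8_array(pptr, pf->globalfeatures,
                            tal_count(pf->globalfeatures));
    }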
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossipd does a two-step initialization: it only tries to create the
listening sockets when it's activated. This means it doesn't know the
addresses to announce until this point.
Now that connectd is doing this, we'd have to tell gossipd later ("oh,
BTW, here are your addresses"), since we need to start gossipd before
connectd activation.
So make connectd do all the setup, but only actually listen on the fds
once we activate it. We clone the gossip_init message into
connect_wire.csv. The master daemon still waits for a reply from
connectd for the activate message, since it wants to be listening
before it prints "Server started".
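In socket terms, the split is between bind() (done up front, so the
addresses to announce are known) and listen() (deferred until
activation); a bare-bones sketch:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch: create and bind at setup time; the bound addresses can
     * be reported immediately, but nothing is accepted yet. */
    static int setup_listener(const struct sockaddr *addr, socklen_t addrlen)
    {
            int fd = socket(addr->sa_family, SOCK_STREAM, 0);
            if (fd < 0)
                    return -1;
            if (bind(fd, addr, addrlen) != 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }

    /* Sketch: only at activation do we start accepting connections,
     * after which the master can safely print "Server started". */
    static int activate_listener(int fd)
    {
            return listen(fd, 64);
    }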
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is just copying most of gossipd/gossip.c into connectd/connect.c.
It shares the same wire format as gossipd during transition, and changes
are deliberately minimal.
It also has an additional message 'connect_reconnected' which it sends
to the master daemon to tell it to kill a peer; gossipd relied on
closing the gossipfd to do this, but connectd doesn't maintain an fd
with remote peers.
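On connectd's side the new message is tiny; a sketch, with the
generated towire helper's signature assumed:

    /* Sketch: no shared fd to close, so explicitly ask the master to
     * kill the old peer before the reconnect proceeds. */
    static void tell_master_reconnected(struct daemon *daemon,
                                        const struct pubkey *id)
    {
            daemon_conn_send(&daemon->master,
                             take(towire_connect_reconnected(NULL, id)));
    }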
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>