We are slowly migrating towards a wally-transactions-only world, but to make
this reviewable we start building both the old- and new-style transactions in
parallel. In a second pass we'll then remove the old ones and use libwally
only.
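Purely as an illustration of the parallel-construction idea (the struct and
field names below are invented for this sketch, not the actual tree layout):

    /* Sketch only: carry both representations side by side during the
     * transition, so the libwally copy can be cross-checked against the
     * existing serialization before it becomes the only one. */
    #include <wally_transaction.h>

    struct bitcoin_tx;                       /* existing hand-rolled type */

    struct tx_pair {
            struct bitcoin_tx *old_tx;       /* built by the current code path */
            struct wally_tx *wtx;            /* built in parallel via libwally */
    };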
Signed-off-by: Christian Decker <decker.christian@gmail.com>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum, and still call
lockin_complete in channel_tell_depth iff depth > minimum_depth.
6. channeld ignores channel_funding_depth unless it's > minimum_depth
(except to update the billboard and to set
peer->confirmation_needed = minimum_depth - depth); a sketch of the
resulting logic follows this list.
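A sketch of the resulting channeld behaviour, with the surrounding structs
reduced to stand-ins and the exact comparison against minimum_depth
simplified:

    #include <stdint.h>

    /* Minimal stand-ins for the real structs, for illustration only. */
    struct channel { uint32_t minimum_depth; };
    struct peer    { struct channel *channel; uint32_t confirmation_needed; };

    static void lockin_complete(struct peer *peer) { /* ... */ }
    static void billboard_update(struct peer *peer) { /* ... */ }

    /* channeld's side: ignore channel_funding_depth until minimum_depth is
     * reached, except to update the billboard and remember what's missing. */
    static void handle_funding_depth(struct peer *peer, uint32_t depth)
    {
            if (depth < peer->channel->minimum_depth) {
                    peer->confirmation_needed =
                            peer->channel->minimum_depth - depth;
                    billboard_update(peer);
                    return;
            }
            lockin_complete(peer);
    }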
In particular, `make unittest/bitcoin/test/run-secret_eq_consttime`
didn't set VALGRIND if it was set via config.vars, so it runs for a *long*
time.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
TLVs use var_ints for message sizes, both internally and
at the top level (yes, you can really stack a var_int inside a var_int!).
This updates our automagick generator code to understand 'var_ints'.
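For reference, a minimal decoder for the big-endian BigSize form of var_int
that the spec settled on (whether this exact endianness matches the code at
the time of this commit is an assumption):

    #include <stddef.h>
    #include <stdint.h>

    /* Decode one var_int; returns bytes consumed, or 0 if the buffer is too
     * short.  Prefixes 0xfd/0xfe/0xff select 2/4/8-byte big-endian
     * encodings; anything below 0xfd is the value itself. */
    static size_t varint_get(const uint8_t *p, size_t len, uint64_t *val)
    {
            size_t n;

            if (len < 1)
                    return 0;
            if (p[0] < 0xfd) {
                    *val = p[0];
                    return 1;
            }
            n = (p[0] == 0xfd) ? 2 : (p[0] == 0xfe) ? 4 : 8;
            if (len < n + 1)
                    return 0;
            *val = 0;
            for (size_t i = 0; i < n; i++)
                    *val = (*val << 8) | p[1 + i];
            return n + 1;
    }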
Passing back a NULL TLV was crashing here, because we tried to
dereference a NULL pointer. Instead, we put the result into a temporary
pointer that we can check for NULL-ness before assigning it to the
passed-in pointer.
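Roughly the shape of the fix, with invented names:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct tlv_record;
    /* Hypothetical parser that may return NULL on bad input. */
    struct tlv_record *parse_tlv(const uint8_t *p, size_t len);

    /* Parse into a local pointer first and check it; only write through the
     * caller's out-parameter once we know the result is non-NULL. */
    static bool get_tlv(const uint8_t *p, size_t len, struct tlv_record **out)
    {
            struct tlv_record *tmp = parse_tlv(p, len);

            if (tmp == NULL)
                    return false;
            *out = tmp;
            return true;
    }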
Let's let the fromwire__tlv methods allocate the TLV objects and
return them. We also want to initialize all of their underlying
messages to NULL, and fail if we discover a duplicate message type.
If parsing fails, we return NULL instead of a struct.
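The resulting calling convention, again with invented names:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct tlv_example;

    /* The generated routine allocates the struct itself, starts every
     * optional message pointer at NULL, and returns NULL if parsing fails
     * or a message type appears twice. */
    struct tlv_example *fromwire_example_tlv(const uint8_t **cursor, size_t *plen);

    static bool handle_payload(const uint8_t **cursor, size_t *plen)
    {
            struct tlv_example *tlv = fromwire_example_tlv(cursor, plen);

            if (tlv == NULL)
                    return false;   /* malformed TLV stream */
            /* ... inspect whichever members are non-NULL ... */
            return true;
    }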
Suggested-By: @rustyrussell
Since messages in TLVs are optional, the ideal way to deal with
them is to have a 'master struct' object for every defined tlv, where
the presence or lack of a field can be determined via the presence
(or lack thereof) of a struct for each of the optional message
types.
To do this appropriately, we need a struct for every
TLV message. The next commit will make use of these.
Note that right now TLV message structs aren't namespaced to the
TLV they belong to, so there's the potential for collision. This
should be fixed when/where it occurs (should fail to compile).
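A hypothetical example of such a master struct (message names invented):

    #include <stdint.h>

    /* One struct per defined TLV message... */
    struct tlv_msg_timestamps { uint32_t first_timestamp, timestamp_range; };
    struct tlv_msg_checksums  { uint32_t checksum; };

    /* ...and one master struct per TLV: a NULL pointer means the optional
     * message simply wasn't present in the stream. */
    struct tlv_example {
            struct tlv_msg_timestamps *timestamps;
            struct tlv_msg_checksums  *checksums;
    };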
Add TLV messages to the general message set so that their parsing
messages get printed out.
FIXME: figure out how to account for partial message length processing?
Version 1.1 of the lightning-rfc spec introduces TLVs for optional
data fields. This starts the process of updating our auto-gen'd
wireformat parsers to be able to understand TLV fields.
The general way to declare a new TLV field is to add a '+' to the
end of the fieldname. All field type declarations for that TLV set
should be added to a file in the same directory by the name
`gen_<field_name>_csv`.
Note that the FIXME included in this commit is difficult to fix, as
we currently pass in the csv files via stdin (so there's no easy
way to ascertain the originating directory of the file).
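A purely hypothetical illustration of the convention (the field name is
invented and the other CSV columns are elided, since only the '+' suffix and
the companion-file naming are described here):

    # a field declared with a trailing '+' is a TLV set:
    ...,extra_data+,...
    # its type declarations live alongside it in gen_extra_data_csv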
If we asked `bitcoind` for a txout and it failed, we were not storing that
information anywhere, meaning that the next time we saw the channel
announcement we'd reach out to `lightningd` and `bitcoind` again, just to
see it fail again. This adds an in-memory cache for these failures so we can
just ignore them the next time around.
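A simplified sketch of the caching idea (the real daemon has its own
containers; the fixed-size array here is just for illustration):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define FAILURE_CACHE_SIZE 64

    static uint64_t failed_scids[FAILURE_CACHE_SIZE];
    static size_t num_failed;

    /* Remember a short_channel_id whose txout lookup failed... */
    static void remember_failed_txout(uint64_t scid)
    {
            failed_scids[num_failed++ % FAILURE_CACHE_SIZE] = scid;
    }

    /* ...so the next announcement for it can be ignored without asking
     * lightningd/bitcoind again. */
    static bool txout_already_failed(uint64_t scid)
    {
            for (size_t i = 0; i < FAILURE_CACHE_SIZE && i < num_failed; i++)
                    if (failed_scids[i] == scid)
                            return true;
            return false;
    }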
Fixes #2503
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Case 5 in the Tor documentation currently states that if you use `--bind-addr=autotor:127.0.0.1:9051`, you can get your onion address by running `lightning-cli getinfo`. I have not found that to be the case; with that flag no onion address will be generated.
On the other hand, if `--addr=autotor:127.0.0.1:9051` is used instead, an onion address is generated and `lightning-cli getinfo` behaves as the docs say.
Otherwise we can't really return a variable-sized message with more than 65k
results. This was causing an integer overflow in `listchannels` (see #2504 for
details).
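An illustration of the overflow class being fixed (not the actual wire code):
a 16-bit length or count field silently wraps once the value exceeds 65535.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t num_results = 70000;                   /* > 65535 */
            uint16_t wire_count = (uint16_t)num_results;    /* wraps to 4464 */

            printf("%u results became %u on the wire\n",
                   num_results, wire_count);
            return 0;
    }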
Signed-off-by: Christian Decker <decker.christian@gmail.com>
gossipd in l1 might not have registered l2 reconnecting yet, and thus still
considers the channel local-disabled.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- This makes the command loop over channels when the 'id' argument is "all"
  (the loop is sketched below).
- The response now contains an array of objects (peer_id, scid, ...).
- Channels in invalid states are skipped.
- Moves the iffy channel/peer param handling into param_channel_or_all.
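A hypothetical sketch of the "all" handling (types and helpers invented, not
the real JSON-RPC code):

    #include <stdbool.h>
    #include <stddef.h>

    struct channel;
    extern struct channel *first_channel(void);
    extern struct channel *next_channel(struct channel *c);
    extern bool channel_state_ok(const struct channel *c);
    extern void append_result_object(const struct channel *c);

    /* With id == "all", iterate every channel instead of acting on one,
     * skipping states we cannot act on, and add one response object
     * (peer_id, scid, ...) per channel that was handled. */
    static void handle_all(void)
    {
            for (struct channel *c = first_channel(); c; c = next_channel(c)) {
                    if (!channel_state_ok(c))
                            continue;       /* invalid state: skip, don't fail */
                    append_result_object(c);
            }
    }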