I was tempted to create a new db_select_stmt wrapper type, but that means
a lot of boilerplate around binding, which expects to work with db_prepare
*and* db_select_prepare.
This lets us clearly differentiate between db queries (which don't need to
go to a plugin) and db changes (which do).
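A minimal sketch of the intended split, reusing the names above (the exact
signatures here are an assumption, not the final API):

    /* Query: read-only, never needs to go to a plugin. */
    stmt = db_select_prepare(db, "SELECT id FROM channels;");

    /* Change: the kind of statement a plugin does need to see. */
    stmt = db_prepare(db, "UPDATE channels SET state=? WHERE id=?;");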
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- Introduce DB update for `channel` values `feerate_base` and `feerate_ppm`
- Make first use of the now context-related DB migration
- Add `struct channel` members of the same name
- Use struct values instead of config when committing new channels
Allow a function as well as (or instead of!) an sql statement. That
will let us do things like set per-channel values to the global
defaults, for example.
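For illustration, a migration entry might then look roughly like this (the
struct layout, field names and config members are assumptions, not the exact
code):

    struct migration {
            const char *sql;        /* NULL if only a function is needed */
            void (*func)(struct lightningd *ld, struct db *db);
    };

    /* e.g. seed the new per-channel columns from the global config: */
    static void migrate_feerate_defaults(struct lightningd *ld, struct db *db)
    {
            db_exec(__func__, db,
                    "UPDATE channels SET feerate_base=%u, feerate_ppm=%u;",
                    ld->config.fee_base, ld->config.fee_per_satoshi);
    }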
Since we remove the NULL termination, the final entry is ARRAY_SIZE()-1
not ARRAY_SIZE()-2.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's more efficient to use ARRAY_SIZE(), which is a compile-time
constant. We move it into the unit test.
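For illustration, assuming the usual ARRAY_SIZE(arr) macro
(sizeof(arr) / sizeof((arr)[0])); the array name and loop body are
illustrative only:

    /* Before: scan for the NULL sentinel at run time. */
    for (i = 0; dbmigrations[i] != NULL; i++)
            db_exec(__func__, db, "%s", dbmigrations[i]);

    /* After: the bound is known at compile time; the last valid entry
     * is ARRAY_SIZE(dbmigrations) - 1. */
    for (i = 0; i < ARRAY_SIZE(dbmigrations); i++)
            db_exec(__func__, db, "%s", dbmigrations[i]);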
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Below this code appears:
    if (current != orig)
            db_exec(__func__, db,
                    "INSERT INTO db_upgrades VALUES (%i, '%s');",
                    orig, version());
But since the loop pre-increments current, this is always true. I wondered
why there were so many duplicates in my db_upgrades table!
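One way to make the recorded upgrade depend on work actually done rather than
on the loop counter (a sketch, not necessarily the exact fix; `available` and
the loop shape are assumed):

    bool upgraded = false;

    while (current < available) {
            db_exec(__func__, db, "%s", dbmigrations[++current]);
            upgraded = true;
    }

    if (upgraded)
            db_exec(__func__, db,
                    "INSERT INTO db_upgrades VALUES (%i, '%s');",
                    orig, version());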
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This field was used by `pay` to hold the bolt11 description if the bolt11
string used `h` to hash the description (which nobody ever did). If the
`h` field wasn't present, it could contain anything, as it wasn't checked.
It's really useful to have a label for payments (e.g. '1 Cuban'), but adding
yet another option would be painful, so we simply rename 'description'
to 'label' except inside the db.
This means we need to do some tricky parameter parsing to handle array
and keyword JSON arguments, but only until we remove the old name.
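A sketch of the keyword-argument side (the helper names are assumptions about
the param interface of the time; the positional/array form is what needs the
extra care mentioned above):

    const char *label, *desc;

    if (!param(cmd, buffer, params,
               p_opt("label", param_string, &label),
               /* Deprecated alias, kept only until the old name goes. */
               p_opt("description", param_string, &desc),
               NULL))
            return;

    if (!label)
            label = desc;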
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this, there's no proof of payment, since it is the signed invoice
that makes the receipt valid.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In order to avoid having to ask the HSM for the public keys of
their_unilateral/to_us outputs, we just store the `scriptPubkey` with the
UTXO, which can then be converted to the P2WPKH address.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to do it in various places, but we shouldn't do it lightly:
the primitives are there to help us get overflow handling correct.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were not correctly allocating the `db->filename`, failing to copy the
null-terminator. This was causing an error when reopening the database after
the call to `fork()`.
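The fix amounts to copying the terminating NUL along with the bytes, e.g.
(sketch):

    /* Duplicate as a string so the trailing NUL comes along too. */
    db->filename = tal_strdup(db, filename);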
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Sean McNally <@sfmcnally>
Changelog-fixed: Fixed a crash when running in daemon-mode due to db filename overrun
We still need to accept it when parsing the database, but this flag
should allow upgrade testing for devs building on top.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian and I both unwittingly used it in the form:
    *tal_arr_expand(&x) = tal(x, ...)
Since '=' isn't a sequence point, the compiler can (and does!) cache
the value of x, handing it to tal *after* tal_arr_expand() moves it
due to tal_resize().
The new version is somewhat less convenient to use, but doesn't have
this problem, since the assignment is always evaluated after the
resize.
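The safer form does the resize first and only evaluates the new element
afterwards, roughly like this (a sketch; the real definition may differ in
detail):

    #define tal_arr_expand(p, s)                            \
            do {                                            \
                    size_t n_ = tal_count(*(p));            \
                    tal_resize((p), n_ + 1);                \
                    (*(p))[n_] = (s);                       \
            } while (0)

    /* Usage: the element is now an argument, not an assignment target. */
    tal_arr_expand(&x, tal(x, ...));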
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
json_escaped.[ch], param.[ch] and jsonrpc_errors.h move from lightningd/
to common/. Tests moved too.
We add a new 'common/json_tok.[ch]' for the common parameter parsing
routines which a plugin might want, taking them out of
lightningd/json.c (which now only contains the lightningd-specific
ones).
The rest is mainly fixing up includes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's the only user of them, and it's going to get optimized.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this a lot, and had boutique helpers in various places. So add
a more generic one; for convenience it returns a pointer to the new
end element.
I prefer the name tal_arr_expand to tal_arr_append, since it's up to
the caller to populate the new array entry.
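As a rough sketch of that interface (not the literal definition; names in the
usage line are invented):

    /* Grow the array by one and hand back the address of the new slot. */
    #define tal_arr_expand(p)                               \
            (tal_resize((p), tal_count(*(p)) + 1),          \
             &(*(p))[tal_count(*(p)) - 1])

    /* Caller fills in the fresh entry: */
    *tal_arr_expand(&outputs) = new_output;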
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's an array: we were only saving the single element; if there was more than
one changed HTLC we'd get a bad signature!
The report in #1907 is probably caused by the other side re-requesting
something we considered already finalized; to avoid this particular error,
we should set the field to NULL if there's no last_sent_commit.
I'm increasingly of the opinion we want to just save all the update
packets to the db and blast them out, instead of doing this
second-guessing dance.
Fixes: #1907
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we don't try to unilaterally close after a restart, *and*
we can tell onchaind to try to use the point to recover funds when the
peer unilaterally closes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're currently overriding fatal() with something that actually
returns, which contrasts with its declaration as NORETURN.
This breaks in the next patch which wants a real fatal() in wallet.h.
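Concretely, the clash looks something like this (sketch):

    /* The real declaration: the compiler may assume calls never return. */
    void fatal(const char *fmt, ...) NORETURN;

    /* A test override that does return -- fine only as long as the
     * NORETURN declaration never ends up in the same translation unit. */
    void fatal(const char *fmt, ...)
    {
            /* record the message for the test, then return */
    }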
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In several places we use low-level tal functions because we want the
label to be something other than the default. ccan/tal is adding
tal_*_label so replace them and shim it for now.
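For example (a sketch; variable names and label strings are invented, and the
exact tal_*_label forms come from the ccan update):

    /* Instead of the low-level allocator just to control the label: */
    u8 *msg = tal_arr_label(ctx, u8, len, "wire: raw msg");
    struct peer *peer = tal_label(ctx, struct peer, "peer");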
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tal_count() is used where there's a type, even if it's char or u8, and
tal_bytelen() is going to replace tal_len() for clarity: it's only needed
where a pointer is void.
We shim tal_bytelen() for now.
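The shim itself can be tiny (sketch):

    /* Until the updated ccan/tal lands: byte length is exactly what
     * tal_len() already reports. */
    #define tal_bytelen(p) tal_len(p)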
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I would have liked to make it a tal object, so we'd catch most
things with our memleak detection. However, sqlite3 doesn't seem to
allow allocator overrides.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The no-rescan change requires us to rescan one last time from the first_blocknum
of our channels (if we have any). The migrations just drop blocks that are
higher, then insert a dummy with the first_blocknum, and then clean up after
us. If we don't have any channels we don't go back at all.
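Roughly, the migration steps look like this (a sketch only; the literal
statements and the blocks/channels column names are assumptions):

    /* Drop anything beyond the earliest channel's first_blocknum... */
    "DELETE FROM blocks WHERE height > (SELECT MIN(first_blocknum) FROM channels);",
    /* ...make sure that height itself exists as a dummy row to restart
     * the rescan from... */
    "INSERT INTO blocks (height) SELECT MIN(first_blocknum) FROM channels;",
    /* ...and if there were no channels, MIN() was NULL: clean up. */
    "DELETE FROM blocks WHERE height IS NULL;",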
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These transactions being seen on the blockchain triggered some action in
onchaind, so we need to replay them when we restore onchaind.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Currently these are either transactions we sent ourselves or transactions that
we are watching because they are part of a channel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
So we know how much the counterparty could theoretically steal from us
(msatoshi_to_us - msatoshi_to_us_min) and how much we could
theoretically steal from the counterparty (msatoshi_to_us_max -
msatoshi_to_us).
For more piloting goodness.
This may be causing #1280, since with `--daemon` the DB is being reopened
without enabling the foreign key relations and hence the delete cascades.
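SQLite leaves foreign-key enforcement off for each new connection unless it is
explicitly enabled, so the pragma has to be issued on every open, including the
reopen when daemonizing (sketch):

    db_exec(__func__, db, "PRAGMA foreign_keys = ON;");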
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It would be better to give them unique values, but we don't fully support
db migration anyway, and this is simple (though they will end up using the
same key for multiple channel closes if created before this commit).
Note that even if bip32_max_index is currently unset, it defaults to 0
so it will be found.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>