Nesting is provided by only actually performing the outermost
transaction and simulating the nested ones. This still allows us to
ensure on lower levels that we are in the context of a transaction
without having to resort to explicitly keeping track of it in the
calling code.
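Roughly, the idea is a nesting counter; the following is only a sketch
(the `in_transaction` field and the `db_exec` helper are illustrative,
not the actual API):

    /* Sketch: only the outermost call opens/commits a real transaction.
     * Assumes <assert.h>, a struct db with an in_transaction counter,
     * and a db_exec() helper. */
    void db_begin_transaction(struct db *db)
    {
            if (db->in_transaction++ == 0)
                    db_exec(db, "BEGIN TRANSACTION;");
    }

    void db_commit_transaction(struct db *db)
    {
            assert(db->in_transaction > 0);
            if (--db->in_transaction == 0)
                    db_exec(db, "COMMIT;");
    }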
Signed-off-by: Christian Decker <decker.christian@gmail.com>
In addition we also set some of the test values to a pattern instead
of just `memset`ting them to 0, since all-zero values may hide some
crossed wires.
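For illustration, the difference is just the fill byte (sketch,
using `struct htlc_in` as an example):

    struct htlc_in in;
    /* A non-zero pattern makes accidentally swapped fields visible in
     * comparisons; an all-zero struct can mask them. */
    memset(&in, 'A', sizeof(in));   /* rather than memset(&in, 0, sizeof(in)) */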
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We use these quite often and it is cumbersome having to do these
simple conversions inline, so just expose pseudo-sqlite3 methods to
bind values to and extract them from a stmt.
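Something along these lines (signatures are illustrative, using
`struct sha256` as an example type; assumes <sqlite3.h> and <string.h>):

    static void sqlite3_bind_sha256(sqlite3_stmt *stmt, int col,
                                    const struct sha256 *p)
    {
            sqlite3_bind_blob(stmt, col, p, sizeof(*p), SQLITE_TRANSIENT);
    }

    static void sqlite3_column_sha256(sqlite3_stmt *stmt, int col,
                                      struct sha256 *dest)
    {
            memcpy(dest, sqlite3_column_blob(stmt, col), sizeof(*dest));
    }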
Signed-off-by: Christian Decker <decker.christian@gmail.com>
"near \"AND\": syntax error"
This was caught by the "always keep errors for db_commit_transaction".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd like to not keep them in memory, and instead retrieve them
on demand when `onchaind` is launched. This uses the `channel_htlcs`
table as backing storage but only fetches the minimal necessary
information.
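The shape of the lookup is roughly the following (sketch: the column
names, `db_prepare` and `add_htlc_stub` are assumptions, not the actual
schema or helpers):

    sqlite3_stmt *stmt = db_prepare(w->db,
            "SELECT channel_htlc_id, msatoshi, cltv_expiry, payment_hash"
            " FROM channel_htlcs WHERE channel_id = ?;");
    sqlite3_bind_int64(stmt, 1, chan->dbid);
    while (sqlite3_step(stmt) == SQLITE_ROW)
            add_htlc_stub(stubs, stmt); /* keep only what onchaind needs */
    sqlite3_finalize(stmt);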
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is a necessary evil since, at the time we load a `struct htlc_out`
associated with a channel, we might not have loaded the `struct
htlc_in` that it depends on, so we defer the rewiring until we have
loaded all HTLCs for all channels. At that point rewiring MUST work,
otherwise we report a failure.
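Conceptually the deferred pass looks like this (sketch only; the field
and helper names `pending_out`, `origin_htlc_id` and `find_htlc_in` are
made up for illustration):

    /* Runs once all channels' HTLCs are in memory. */
    for (size_t i = 0; i < tal_count(pending_out); i++) {
            struct htlc_in *in = find_htlc_in(ld, pending_out[i]->origin_htlc_id);
            if (!in)
                    fatal("Could not find incoming HTLC %"PRIu64,
                          pending_out[i]->origin_htlc_id);
            pending_out[i]->in = in;
    }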
Signed-off-by: Christian Decker <decker.christian@gmail.com>
While loading HTLCs from the database we might not yet have all the
incoming HTLCs loaded when loading a dependent htlc_out. So we defer
the wiring of the HTLCs until we are sure we have them loaded.
This is also the first step towards keeping that association only in
the database, since otherwise we cannot selectively load channels from
the DB.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was causing some compilation trouble on 32bit systems, see #256.
Reported-by: @shsmith
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Also, we split the more sophisticated json_add helpers to avoid pulling
everything into lightning-cli, and unify the routines that print struct
short_channel_id (the separator is ':', not '/').
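For reference, the unified formatting is simply (sketch; the field
names are an approximation of struct short_channel_id):

    snprintf(buf, sizeof(buf), "%u:%u:%u",
             scid->blocknum, scid->txnum, scid->outnum);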
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The peer->seed needs to be unique for each channel, since bitcoin
pubkeys and the shachain are generated from it. However we also need
to guarantee that the same seed is generated for a given channel every
time, e.g., upon a restart. The DB channel ID is guaranteed to be
unique, and will not change throughout the lifetime of a channel, so
we simply mix it in, instead of using a separate increasing counter.
We also needed to make sure to store the channel in the DB before
deriving the seed, in order to get an ID assigned by the DB.
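In rough terms the derivation becomes something like the following
(sketch only; the real code goes through the usual seed-derivation
helpers, and the names `ld->peer_seed` and `channel_dbid` are
assumptions):

    /* Sketch: mix the immutable DB channel id into the per-channel
     * seed so it is unique per channel yet reproducible on restart. */
    struct sha256 seed;
    struct sha256_ctx ctx = SHA256_INIT;
    sha256_update(&ctx, &ld->peer_seed, sizeof(ld->peer_seed));
    sha256_u64(&ctx, channel_dbid);  /* unique, never changes for this channel */
    sha256_done(&ctx, &seed);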
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the big one, and it's completely anticlimactic: it loads all
channels that have reached opening and are not marked as
closingd_complete into memory; that's it.
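The selection is essentially the following (sketch; the query text,
state names and `db_prepare` are illustrative):

    stmt = db_prepare(w->db,
            "SELECT id FROM channels WHERE state >= ? AND state != ?;");
    sqlite3_bind_int(stmt, 1, OPENINGD);            /* reached opening */
    sqlite3_bind_int(stmt, 2, CLOSINGD_COMPLETE);   /* but not fully closed */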
Signed-off-by: Christian Decker <decker.christian@gmail.com>
They happen to advance at the same pace, but mixing them may have
unforeseen consequences, and I have done so a few times already, so
this explicitly separates them.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was supposed to be a temporary solution anyway, and I had a
rather annoying mixup between peer_id and unique_id, the latter of
which is actually a connection identifier.
If we killed the daemon without performing any commits, we ended up
with 0 instead of the expected UINT48_MAX.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
And store it in peer->last_tx/peer->last_sig like all the other places,
so that we broadcast it if we need to.
Note: the removal of tmpctx in funder_channel() is needed because we
use txs[0], which was allocated off tmpctx.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Definitely not as nice as it could be, but it works for now. This is
primarily intended as a simple dump method that just saves everything
to the database. We will later use smaller incremental updates to
update specific things. wallet_channel_save serves both to insert and
to update.
This needed a rather annoying hack since sqlite3 can only store signed
integers below 2^63, so I just squash it down/invert it, and hope that
we never ever have more than 2^63 updates.
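One way to express that workaround (sketch; the helper name is made up
and the real code may map the value differently):

    /* sqlite3 INTEGERs are signed 64-bit, so a u64 counter >= 2^63
     * cannot be stored directly; assume we never get that far. */
    static void bind_u64_squashed(sqlite3_stmt *stmt, int col, u64 v)
    {
            assert(v < 0x8000000000000000ULL);
            sqlite3_bind_int64(stmt, col, (s64)v);
    }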
Split the detection of outputs that we own into a separate
`wallet_extract_owned_outputs` function and use it when the broadcast
succeeds to add the change output back to the database.
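Usage ends up roughly like this (sketch; the exact signature is an
assumption):

    /* After a successful broadcast, scan the tx for outputs paying to
     * one of our keys and record them as spendable again. */
    if (broadcast_succeeded)
            wallet_extract_owned_outputs(ld->wallet, tx);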
Wallet should really be the container for anything bip32-related, so
I'd like to slowly wean off `ld->bip32_base` in favor of
`ld->wallet->bip32_base`.