We're rarely in a hurry here, and bitcoind is aggressive with fees.
You can always spend this output if you really have to, using CPFP.
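If you do need the funds sooner, the CPFP arithmetic is straightforward. A minimal sketch (the function and numbers are illustrative, not code from this change): the child transaction must pay enough fee that the combined parent+child package reaches the feerate you actually want.

```python
# Sketch: fee a CPFP child must add so the (parent + child) package reaches
# `target_perkw` satoshis per 1000 weight units.
def cpfp_child_fee(parent_fee_sat, parent_weight, child_weight, target_perkw):
    package_weight = parent_weight + child_weight
    needed_total = target_perkw * package_weight // 1000
    return max(0, needed_total - parent_fee_sat)

# e.g. a 700-weight close tx that paid 500 sat, bumped to 2000 perkw
# with a ~560-weight child spending its output:
print(cpfp_child_fee(500, 700, 560, 2000))  # -> 2020 sat
```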
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: mutual closing feerate reduced to "slow" to avoid overpaying.
lightning-5 can sometimes see itself sweeping the unilateral output, resulting
in this weird line:
```log
HTLC already resolved by SELF when we found preimage
```
Marking it spent means that if the transaction doesn't confirm for some
reason, the user will need to force a rescan to find the funds. Now that
we have timed reservations, reserving for (an additional) 12 hours
should be sufficient.
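For reference (my arithmetic, not something from this change): at the usual ~10-minute block interval, 12 hours works out to roughly 72 blocks.

```python
# Assuming ~10-minute blocks, a 12-hour reservation is roughly 72 blocks.
blocks_per_hour = 60 // 10
reservation_blocks = 12 * blocks_per_hour  # 72
```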
We also take this opportunity (now that we have our own callback path)
to record the tx in the wallet only on success.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Anchor outputs break many assumptions in our tests:
1. Remove some hardcoded numbers in favor of a fee calc, so we only have to
change them in one place.
FIXME: This should also be done for elements!
2. Do binary search to get the feerate for a given closing fee (see the
sketch after this list).
3. Don't assume output #0: anchor outputs perturb the output ordering.
4. Don't assume we can make 1ksat channels (anchors cost 660 sats!).
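Here is the sketch for point 2 (the helper name and bounds are illustrative, not the actual test code): search for the feerate whose implied fee matches the closing fee we observed on-chain.

```python
# Illustrative helper: find the lowest feerate (sat per 1000 weight units)
# whose implied fee for `closing_weight` reaches the observed `fee_sat`.
def feerate_for_closing_fee(fee_sat, closing_weight, lo=253, hi=200000):
    while lo < hi:
        mid = (lo + hi) // 2
        if mid * closing_weight // 1000 < fee_sat:
            lo = mid + 1
        else:
            hi = mid
    return lo
```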
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It might have already happened, and anyway, we wait for the actual state
below.
```
# make database snapshot of l2
l2.stop()
l2_db_path = os.path.join(l2.daemon.lightning_dir, chainparams['name'], 'lightningd.sqlite3')
l2_db_path_bak = os.path.join(l2.daemon.lightning_dir, chainparams['name'], 'lightningd.sqlite3.bak')
copyfile(l2_db_path, l2_db_path_bak)
l2.start()
sync_blockheight(bitcoind, [l2])
# push some money from l3->l2, so that the commit counter advances
l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
> l2.daemon.wait_for_log('now ACTIVE')
tests/test_closing.py:908:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-testing/pyln/testing/utils.py:288: in wait_for_log
return self.wait_for_logs([regex], timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pyln.testing.utils.LightningD object at 0x7f0c145a32d0>
regexs = ['now ACTIVE'], timeout = 60
def wait_for_logs(self, regexs, timeout=TIMEOUT):
    """Look for `regexs` in the logs.
    We tail the stdout of the process and look for each regex in `regexs`,
    starting from last of the previous waited-for log entries (if any). We
    fail if the timeout is exceeded or if the underlying process
    exits before all the `regexs` were found.
    If timeout is None, no time-out is applied.
    """
    logging.debug("Waiting for {} in the logs".format(regexs))
    exs = [re.compile(r) for r in regexs]
    start_time = time.time()
    pos = self.logsearch_start
    while True:
        if timeout is not None and time.time() > start_time + timeout:
            print("Time-out: can't find {} in logs".format(exs))
            for r in exs:
                if self.is_in_log(r):
                    print("({} was previously in logs!)".format(r))
>           raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E           TimeoutError: Unable to find "[re.compile('now ACTIVE')]" in logs.
contrib/pyln-testing/pyln/testing/utils.py:264: TimeoutError
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And when it's set, and we're SLOW_MACHINE, simply disable valgrind.
Since Travis (SLOW_MACHINE=1) only does VALGRIND=1 DEVELOPER=1 tests,
and VALGRIND=0 DEVELOPER=0 tests, it was missing tests which needed
DEVELOPER and !VALGRIND.
Instead, this demotes them to non-valgrind tests for SLOW_MACHINEs.
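Roughly, the gating looks like this (an illustrative sketch of the env-var logic, not the exact pyln-testing code):

```python
import os

SLOW_MACHINE = os.getenv("SLOW_MACHINE", "0") == "1"
DEVELOPER = os.getenv("DEVELOPER", "0") == "1"
VALGRIND = os.getenv("VALGRIND", "0") == "1"

# On slow machines, demote developer-only tests to non-valgrind runs
# instead of skipping them entirely.
if SLOW_MACHINE and DEVELOPER:
    VALGRIND = False
```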
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I started replacing all get_node() calls, but got bored, so then just did the
tests which call get_node() 3 times or more.
Ends up not making a measurable speed difference, but it does make some
things neater and more standard.
Times with SLOW_MACHINE=1 (given that's how Travis tests):
Time before (non-valgrind): 393 sec (had 3 failures?)
Time after (non-valgrind): 410 sec
Time before (valgrind): 890 sec (had 2 failures)
Time after (valgrind): 892 sec
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It looked like we weren't printing the address on closing outputs.
But we are, because the 'scriptPubkey' field has been in the 'outputs' db
table since 0.7.3 (66a47d2761).
So make the logic clearer, and remove a completely bogus comment (UTXOs
with closing_info are definitely spendable!).
We export the json_add_utxos() for future use, too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If the daemon already knows about the channel before it was stopped,
it won't get this message from gossipd. That's OK, since we explicitly
test for the channel being active two lines down.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's possible for our peer to publish a commitment tx that has already
updated our balance for an htlc before we've completed removing it from
our commitment tx (aka before we've updated our balance). This used to
crash; now we just update our balance (and the channel balance logs!)
and keep going.
If they've removed anything from our balance, we'll end up counting it
as chain_fees below. Not ideal but fine... probably.
On node start we replay onchaind's transactions from the database/from
our loaded htlc table. To keep things tidy, we shouldn't notify the
ledger about these, so we wrap pretty much everything in a flag that
tells us whether or not this is a replay.
There's a very small corner case where dust transactions will get missed
if the node crashes after the htlc has been added to the database but
before we've successfully notified onchaind about it.
Notably, most of the obtrusive updates to the onchaind wrappings are due to
the fact that we record dust (ignored outputs) before we've received
on-chain confirmation of them.
Mostly we update existing tests to account for channel balances. In a
few places, new tests were needed as there wasn't an existing pathway
that tested the chain-fees for a few penalty cases.
For cheats, we do a little bit of weird accounting. First we 'update'
our on-ledger balance to be the entirety of the channel's balance. Then,
as outputs get resolved, we record the fees and outputs as withdrawals
from this amount.
It's possible that they might successfully 'cheat', in which case we
record those as 'penalty', but as debits (not credits).
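A toy walk-through of that bookkeeping (names and numbers are made up for illustration):

```python
# On detecting a cheat, credit ourselves the whole channel balance, then
# record each resolved output and its chain fee as a withdrawal against it.
channel_total = 1_000_000                  # credited up front
swept_outputs = [400_000, 589_000]         # outputs claimed via penalty txs
chain_fees = [6_000, 5_000]                # fees paid to claim them
remaining = channel_total - sum(swept_outputs) - sum(chain_fees)
print(remaining)  # 0 once everything is resolved
```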
We didn't wait until l2 processed the final state of HTLC #2, so
it might not include it when it drops onchain, leading to us only
getting 3 (not 4) sendrawtx calls.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Previously we've used the term 'funder' to refer to the peer
paying the fees for a transaction; v2 of openchannel will make
this no longer true. Instead we rename this to 'opener', or the
peer sending the 'open_channel' message, since this will be universally
true in a dual-funding world.
They were looking for specific amounts, which proved to be rather flaky. Now
they look for specific outputs being available in the `listfunds` result after
everything has settled.
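The pattern now looks roughly like this (a sketch; `wait_for` is the pyln-testing helper, and the txids are placeholders):

```python
# Wait until exactly the expected outputs appear in listfunds, rather than
# matching exact amounts (which drift with feerates).
expected_txids = {close_txid, sweep_txid}  # placeholders for the real txids
wait_for(lambda: {o['txid'] for o in l1.rpc.listfunds()['outputs']} == expected_txids)
```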
We use a CONSERVATIVE/3 target for them (i.e. bitcoind's `estimatesmartfee`
with a 3-block target in CONSERVATIVE mode).
Some noisy changes to the tests as we had to update the estimatesmartfee
mock.
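That corresponds to roughly this bitcoind estimate (an illustrative call; in the tests the result comes from the mock):

```python
# A 3-block target in CONSERVATIVE mode; bitcoind returns BTC per kvB.
est = bitcoind.rpc.estimatesmartfee(3, 'CONSERVATIVE')
feerate_btc_per_kvb = est['feerate']
```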
Changelog-Changed: We now use a higher feerate for resolving onchain HTLCs and for penalty transactions
We kept track of an URGENT, a NORMAL, and a SLOW feerate. They were used
for opening (NORMAL), mutual close (NORMAL), and unilateral close (URGENT)
transactions, as well as for minimum and maximum estimations and onchain
resolution.
We now keep track of more fine-grained feerates:
- `opening` used for funding and also misc transactions
- `mutual_close` used for the mutual close transaction
- `unilateral_close` used for unilateral close (commitment transactions)
- `delayed_to_us` used for resolving our output from our unilateral close
- `htlc_resolution` used for resolving onchain HTLCs
- `penalty` used for resolving revoked transactions
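For illustration, these end up as per-kw fields in the `feerates` output (a sketch; the exact structure and values are illustrative):

```python
# The six fine-grained feerates, as exposed by the `feerates` command.
perkw = l1.rpc.feerates('perkw')['perkw']
for name in ('opening', 'mutual_close', 'unilateral_close',
             'delayed_to_us', 'htlc_resolution', 'penalty'):
    print(name, perkw[name])
```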
We don't modify our requests to our Bitcoin backend, as the next commit
will batch them!
Changelog-Deprecated: The "urgent", "slow", and "normal" fields of the `feerates` command are now deprecated.
Changelog-Added: The fields "opening", "mutual_close", "unilateral_close", "delayed_to_us", "htlc_resolution" and "penalty" have been added to the `feerates` command.
Before this patch we would only update `channel->last_tx` with the newly
proposed closure tx from the peer if the fee of the new one was lower.
In negotiations where we are at the higher end and the peer starts
lower, all of the peer's subsequent proposals will be higher than their
initial proposal; in that case we would never update `channel->last_tx`
and would wrongly broadcast their initial proposal at the end of the
negotiation.
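A toy illustration of why the old rule goes wrong (made-up numbers; the real logic lives in closingd):

```python
# Buggy rule: only remember the peer's proposal when its fee is *lower* than
# anything seen so far.  If the peer starts low and ratchets up toward our
# higher fee, only their initial proposal is ever remembered.
ours, theirs = 10000, 4000
last_tx_fee = None
while theirs < ours:
    if last_tx_fee is None or theirs < last_tx_fee:  # the buggy condition
        last_tx_fee = theirs
    theirs += max(1, (ours - theirs) // 2)           # peer moves toward us
print(last_tx_fee)  # 4000: we'd broadcast the peer's first, lowest-fee tx
```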
Fixes https://github.com/ElementsProject/lightning/issues/3549
Changelog-Fixed: Always broadcast the latest close transaction at the end of the close fee negotiation, instead of sometimes broadcasting the peer's initial closing proposal.
We still close the channel if we *send* an error, but we seem to have hit
another case where LND sends an error which seems transient, so this will
make a best-effort attempt to preserve our channel in that case.
Some tests have to be modified, since they don't terminate as they did
previously :(
Changelog-Changed: quirks: We'll now reconnect and retry if we get an error on an established channel. This works around lnd sending error messages that may be non-fatal.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Spaces just make life a little harder for everyone.
(Plus, fix documentation: it's 'jsonrpc' not 'json' subsystem).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A log can have a default node_id, which can be overridden on a per-entry
basis. This changes the format of logging, so some tests need rework.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were checking against hardcoded HRPs and address prefixes. Now we
parametrize them via the chainparams.
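For example, an address check can derive the expected prefix from the fixture instead of hardcoding 'bcrt' (a sketch; the `bip173_prefix` field name and the newaddr result key are assumptions here):

```python
# Derive the expected bech32 HRP from chainparams instead of hardcoding it.
addr = l1.rpc.newaddr('bech32')['bech32']
assert addr.startswith(chainparams['bip173_prefix'])
```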
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was weird right from the start, so we just split the table into integers
and blobs, so each column has a well-defined format. It is also required for
postgres not to cry about explicit casts in the `paramTypes` array.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't, really.
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing, and cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>