I misunderstood the API; this ended up nesting a result inside the JSON-RPC
result.
No concerns about backwards compatibility since this is so new.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This seems like overkill, at least for now. Handling the JSON
inline is clearer, for the existing examples at least.
We also remove the dummy hook, rather than fix it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now we can benchmark, and remove 500 bytes per node.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35093-37907(36146+/-1.1e+03)
vsz_kb:555168
store_rewrite_sec:12.120000-13.750000(12.7+/-0.6)
listnodes_sec:1.270000-1.370000(1.322+/-0.039)
listchannels_sec:29.770000-31.600000(30.82+/-0.64)
routing_sec:0.00
peer_write_all_sec:63.630000-67.850000(65.432+/-1.7)
MCP notable changes from pre-Dijkstra (>1 stddev):
-vsz_kb:577456
+vsz_kb:555168
-routing_sec:60.70
+routing_sec:12.04
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Do it inside the can_reach() function, which is less optimal for BFG
(which does 20 ops on the same channel) but fine for Dijkstra.
This does have a measurable cost, so we might want to use
non-cryptographic fuzz in future (a sketch of the idea follows the numbers):
$ gossipd/test/run-bench-find_route 100000 100:
Before:
100 (100 succeeded) routes in 100000 nodes in 97346 msec (973461784 nanoseconds per route)
After:
100 (100 succeeded) routes in 100000 nodes in 113381 msec (1133813412 nanoseconds per route)
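The fuzz is roughly of this shape (a sketch only: hash64() is a hypothetical
stand-in for the keyed hash we currently pay for, and it assumes the fuzz
scales each channel's fee while costing routes):

    #include <stdint.h>

    /* Sketch: derive a deterministic per-channel fuzz factor in
     * [1 - fuzz, 1 + fuzz] from a keyed hash of the short_channel_id;
     * can_reach() would scale the channel's fee by this factor. */
    static double fee_fuzz_factor(uint64_t scid, uint64_t seed, double fuzz)
    {
            uint64_t h = hash64(scid, seed);  /* hypothetical keyed hash */

            /* Map the 64-bit hash onto [-1, 1], then scale by fuzz. */
            return 1.0 + fuzz * (2.0 * (double)h / (double)UINT64_MAX - 1.0);
    }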
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If a route is too long, we try to bias Dijkstra towards choosing a
shorter route by adding a per-hop cost. We do a naive "shortest path"
pass, then using that cost as a ceiling on per-hop cost, we do a
binary search.
There are some subtleties: we use risk rather than total as our
counter field (we normally bias this by 1 anyway, so it's easy to make
that a variable), and we set riskfactor to a minimal value once we're
iterating. It's good enough to get a solution; we don't need to do a
2-dimensional search on riskfactor and riskbias.
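Roughly, the search looks like this (a sketch only: dijkstra_with_bias() and
shortest_path_cost() are hypothetical stand-ins for wiring the bias into the
existing pass):

    /* Sketch: binary-search the smallest per-hop bias that still yields
     * a route of at most max_hops.  The real code threads the bias
     * through the risk field of the existing Dijkstra pass. */
    static struct route_hop *find_shorter_route(const tal_t *ctx,
                                                struct routing_state *rstate,
                                                size_t max_hops)
    {
            struct route_hop *best = NULL;
            /* The naive shortest path's total cost is a ceiling on the
             * per-hop bias we could ever need. */
            u64 lo = 0, hi = shortest_path_cost(rstate);

            while (lo < hi) {
                    u64 bias = (lo + hi) / 2;
                    struct route_hop *r = dijkstra_with_bias(ctx, rstate, bias);

                    if (!r || tal_count(r) > max_hops) {
                            lo = bias + 1;  /* still too long: bias harder */
                            tal_free(r);
                    } else {
                            hi = bias;      /* short enough: try a cheaper bias */
                            tal_free(best);
                            best = r;
                    }
            }
            return best;
    }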
Of course, this is extremely slow if we hit it on our benchmark,
though it doesn't happen in a more realistic network:
$ gossipd/test/run-bench-find_route 100000 100:
Before:
100 (79 succeeded) routes in 100000 nodes in 25341 msec (253412314 nanoseconds per route)
After:
100 (100 succeeded) routes in 100000 nodes in 97346 msec (973461784 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our uintmap can be a little slow with all the reallocation, so leave
NULL entries and walk to find the first one. Since we don't clean
them up, keep a cache of where the minimum non-NULL entry is in the
heap.
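The idea, as a standalone sketch (a plain pointer array stands in for the
uintmap; names are illustrative):

    #include <stddef.h>

    struct node;

    /* Sketch: popped slots are left NULL rather than deleted, and
     * min_index caches where the first live entry can be.  Dijkstra only
     * ever inserts at or above the current minimum cost, so the cache
     * never needs lowering. */
    struct lazy_heap {
            struct node **slot;     /* indexed by cost; NULL once popped */
            size_t n_slots;
            size_t min_index;       /* no live entry exists below this */
    };

    static struct node *pop_cheapest(struct lazy_heap *h)
    {
            for (size_t i = h->min_index; i < h->n_slots; i++) {
                    struct node *n = h->slot[i];

                    if (!n)
                            continue;       /* hole left by an earlier pop */
                    h->slot[i] = NULL;      /* leave a hole, don't compact */
                    h->min_index = i;       /* later pops resume here */
                    return n;
            }
            h->min_index = h->n_slots;
            return NULL;
    }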
It's clearer benefit on really large tests, so here's 1M nodes:
Comparison using gossipd/test/run-bench-find_route 1000000 10:
Before:
10 (10 succeeded) routes in 1000000 nodes in 91995 msec (9199532898 nanoseconds per route)
After:
10 (10 succeeded) routes in 1000000 nodes in 20605 msec (2060539287 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Use a uintmap as our minheap.
Note that Dijkstra can give overlength routes, so some checks are disabled.
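In terms of the ccan/intmap typed wrappers, the heap operations are roughly
(a sketch; cost collisions and the real node type are glossed over):

    #include <ccan/intmap/intmap.h>
    #include <stdint.h>

    struct node;                            /* gossipd's node type */

    /* Sketch: an ordered map keyed by cost-so-far (msat) as the minheap,
     * so extract-min is just "take the first key".  Costs are assumed
     * unique here; the real code copes with several nodes per cost. */
    typedef UINTMAP(struct node *) costmap;

    static void heap_push(costmap *heap, uint64_t cost, struct node *n)
    {
            uintmap_add(heap, cost, n);
    }

    static struct node *heap_pop_min(costmap *heap, uint64_t *cost)
    {
            struct node *n = uintmap_first(heap, cost);

            if (n)
                    uintmap_del(heap, *cost);
            return n;
    }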
Comparison using gossipd/test/run-bench-find_route 100000 10:
Before:
10 (10 succeeded) routes in 100000 nodes in 120087 msec (12008708402 nanoseconds per route)
After:
10 (10 succeeded) routes in 100000 nodes in 2269 msec (226925462 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a weakness with Dijkstra, so write an explicit unit test checking that
we can find a short-enough (but more expensive) route.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Channels have an htlc_minimum_msat of 10000, which is why we didn't
find routes.
This causes a significant slowdown:
-routing_sec:26.940000-27.990000(27.616+/-0.39)
+routing_sec:60.70
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
68fe5eacde introduced a skip in the iteration
of the available funds, which means utxos[i] may be off the end of utxos.
Reported-and-debugged-by: @nitramiz
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For me this happened only under valgrind with
test_option_upfront_shutdown_script (in a pending branch):
==5063== by 0x51FC076: raise (raise.c:48)
==5063== by 0x51DD534: abort (abort.c:79)
==5063== by 0x1292D2: fatal (log.c:647)
==5063== by 0x116570: channel_set_state (channel.c:340)
==5063== by 0x116E04: lockin_complete (channel_control.c:73)
==5063== by 0x116F15: peer_got_funding_locked (channel_control.c:108)
==5063== by 0x117354: channel_msg (channel_control.c:208)
No CHANGELOG: this was introduced in a recent refactor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We generated blocks to announce the channel, but that can also expire
the HTLC if the timing is wrong. We don't need to generate them anyway,
since we fixed the FIXME; we store local unannounced channels for restoration.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Millisatoshi's inner representation is expected to be an int; otherwise unwanted exceptions can occur.
The following example raises "TypeError: __int__ returned non-int (type decimal.Decimal)":
from lightning import Millisatoshi
one_sat = Millisatoshi("1sat")
two_sats = one_sat * 2
The old value of 1000 sat was too small to cover the dust reserves.
This led to the situation where channels opened with the minimal
amount got refused because they were not able to cover the
commitment fees.
For this reason the minimal capacity should be increased to e.g. 10k
satoshi, as the technical minimum that also accounts for fees and
reserves is somewhere around 6k sat.
The spec says not to send a commitment_signed without any changes, but LND
does this. To understand why, you have to understand how LND works. I
haven't read the code, but I'm pretty sure it works like this:
1. lnd slows down to do garbage collection, because it's in Go.
2. When an alert timer goes off, noticing it's not making progress, it
sends a twitter message to @roasbeef.
3. @roasbeef sshes into the user's machine and binary patches lnd to send
a commitment_signed message.
4. Unfortunately he works so fast that various laws of causality are broken,
meaning sometimes the commitment_signed is sent before any of these
other things happen.
I'm fairly sure that this will stop as @roasbeef ages, or lnd introduces
some kind of causality enforcement fix.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Refactored the weighted-reservoir-sampling algorithm to make it more straightforward.
It now uses the excess as a fraction of capacity as the weight. This favors using
channels that are more _relatively_ unbalanced for incoming payments.
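A sketch of the weighting (field names are illustrative, and rand() stands in
for the real RNG):

    #include <stdlib.h>

    /* Sketch of single-pick weighted reservoir sampling: candidate i
     * replaces the current pick with probability w_i / total, which makes
     * the final pick proportional to w_i.  The weight is the excess
     * (receivable beyond what the invoice needs) as a fraction of the
     * channel's capacity. */
    struct candidate {
            double excess_msat;
            double capacity_msat;
    };

    static const struct candidate *pick_route_hint(const struct candidate *c,
                                                   size_t n)
    {
            const struct candidate *pick = NULL;
            double total = 0.0;

            for (size_t i = 0; i < n; i++) {
                    double w = c[i].excess_msat / c[i].capacity_msat;

                    if (w <= 0.0)
                            continue;
                    total += w;
                    if ((double)rand() / RAND_MAX * total <= w)
                            pick = &c[i];
            }
            return pick;
    }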
Now passes test_invoice_routeboost_private() when using max fundamount=16777215.
This fixes block parsing on testnet; specifically, non-standard tx versions.
We hit a type bug in libwally (wally_get_secp_context()) which I had to
work around for the moment, and the updated libsecp adds an optional hash
function arg to the ECDH function.
Fixes: #2563
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make it easier to fire up a local test environment to try out
c-lightning.
Requires bitcoind to be installed. To use, you have to run it
via `source contrib/startup_regtest.sh`, so that the aliases
are set correctly.
Make it a bit easier to track mutual channel closures by
adding the broadcast txid to the listpeers billboard.
Since lightningd manages the 'identity' of the closing tx, we need
to send it back to closingd so it can update the billboard
appropriately.
This also allows plugins to do "hold invoices" a-la LND, useful for
just-in-time inventory handling.
We're careful to handle the invoice getting paid behind our backs, and
the incoming HTLC going away.
Once @cdecker's sphinx rework is in, we can also hand the raw payload
to the invoice_payment_hook, for special effects.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
a wrong state can't happen now, but may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we compact the store, we need to adjust the broadcast index for
peers so they know where they're up to.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This requires some trickiness when we want to re-add unannounced channels
to the store after compaction, so we extract a common "copy_message" to
transfer from old store to new.
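The helper is roughly of this shape (a sketch: a bare big-endian u32 length
prefix is assumed purely for illustration; the real store framing differs):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch: copy one record from the old store fd to the new one,
     * advancing *off past it. */
    static bool copy_message(int old_fd, int new_fd, off_t *off)
    {
            uint8_t lenbuf[4];
            uint32_t msglen;
            uint8_t *msg;
            bool ok;

            if (pread(old_fd, lenbuf, sizeof(lenbuf), *off) != sizeof(lenbuf))
                    return false;
            msglen = ((uint32_t)lenbuf[0] << 24) | ((uint32_t)lenbuf[1] << 16)
                     | ((uint32_t)lenbuf[2] << 8) | lenbuf[3];

            msg = malloc(msglen);
            if (!msg)
                    return false;
            ok = pread(old_fd, msg, msglen, *off + sizeof(lenbuf)) == (ssize_t)msglen
                 && write(new_fd, lenbuf, sizeof(lenbuf)) == sizeof(lenbuf)
                 && write(new_fd, msg, msglen) == (ssize_t)msglen;
            free(msg);
            if (ok)
                    *off += sizeof(lenbuf) + msglen;
            return ok;
    }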
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:36034-37853(37109.8+/-5.9e+02)
vsz_kb:577456
store_rewrite_sec:12.490000-13.250000(12.862+/-0.27)
listnodes_sec:1.250000-1.480000(1.364+/-0.09)
listchannels_sec:30.820000-31.480000(31.068+/-0.24)
routing_sec:26.940000-27.990000(27.616+/-0.39)
peer_write_all_sec:65.690000-68.600000(66.698+/-0.99)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1202316
+vsz_kb:577456
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The next patch causes us to access the store while loading (we read
channel_updates for local peers), which messes up loading due to the
lseek involved.
Using pread() makes the seek & read atomic (and is also a bit more
efficient). Make the header contiguous too, while we're here.
We don't need pwrite: we always open with O_APPEND which means the
seek-to-end is implicit.
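For reference, the shape of the change (a sketch; the header layout is
illustrative):

    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Illustrative, now-contiguous record header: one pread() fetches it. */
    struct record_hdr {
            uint32_t len;
            uint32_t crc;
    };

    /* pread() does the seek + read atomically and never moves the fd's
     * offset, so a sequential load in progress isn't disturbed. */
    static bool read_hdr_at(int store_fd, off_t off, struct record_hdr *hdr)
    {
            return pread(store_fd, hdr, sizeof(*hdr), off) == sizeof(*hdr);
    }

    /* No pwrite() needed: O_APPEND makes every write() seek to the end
     * implicitly and atomically. */
    static int open_store(const char *path)
    {
            return open(path, O_RDWR | O_APPEND | O_CREAT, 0600);
    }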
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:36771-38289(37529.6+/-5.3e+02)
vsz_kb:1202316
store_rewrite_sec:12.460000-13.280000(12.784+/-0.29)
listnodes_sec:1.240000-1.410000(1.34+/-0.058)
listchannels_sec:29.850000-31.840000(30.908+/-0.69)
routing_sec:27.800000-31.790000(28.822+/-1.5)
peer_write_all_sec:66.200000-68.720000(67.44+/-0.84)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:39207-45089(41374.6+/-2.2e+03)
+store_load_msec:36771-38289(37529.6+/-5.3e+02)
-store_rewrite_sec:15.090000-16.790000(15.654+/-0.63)
+store_rewrite_sec:12.460000-13.280000(12.784+/-0.29)
-peer_write_all_sec:66.830000-76.850000(71.976+/-3.6)
+peer_write_all_sec:66.200000-68.720000(67.44+/-0.84)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we no longer keep channel_updates in memory, there's a path where
we access them on load: when we promote a local channel to an
announced channel.
This breaks at the moment, since gs->fd == -1; change it to a writable
flag instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The txout_script field is unused; the local_disable only applies to
the handful of local channels, so move that into a hash table.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:39207-45089(41374.6+/-2.2e+03)
vsz_kb:1202316
store_rewrite_sec:15.090000-16.790000(15.654+/-0.63)
listnodes_sec:1.290000-3.790000(1.938+/-0.93)
listchannels_sec:30.190000-32.120000(31.31+/-0.69)
routing_sec:28.220000-31.340000(29.314+/-1.2)
peer_write_all_sec:66.830000-76.850000(71.976+/-3.6)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:35107-37944(36686+/-1e+03)
+store_load_msec:39207-45089(41374.6+/-2.2e+03)
-vsz_kb:1218036
+vsz_kb:1202316
-listchannels_sec:28.510000-30.270000(29.6+/-0.6)
+listchannels_sec:30.190000-32.120000(31.31+/-0.69)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to have a `struct chan` while we were waiting for an update; now we
keep that internally. So a `struct chan` without a channel_announcement
in the store is private, and one with a channel_announcement is public.
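In other words, roughly (a sketch; the field name is illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    struct chan {
            size_t channel_announcement_offset;  /* 0 if not in the store */
            /* ... */
    };

    /* Sketch: "public" is purely whether the announcement is in the store. */
    static bool is_chan_public(const struct chan *chan)
    {
            return chan->channel_announcement_offset != 0;
    }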
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>