It can take a while if bitcoind has the regtest chain, and grossly
distorts our benchmarks!
Reported-by: Joe Netti <jnetti@blockstream.io>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Added a sanity check to make sure the scid of the CSV is the same as the scid in the gossip.
Revised style, memory allocation, and error checks.
[ Minor fixups, and updated benchmark script -- RR ]
With data.tar.gz (456609740 bytes, Apr 2 12:33):
store_load_msec:35300-42354(37118.2+/-2.7e+03)
vsz_kb:582832
store_rewrite_sec:12.700000-13.430000(12.988+/-0.27)
listnodes_sec:3.000000-3.160000(3.076+/-0.057)
listchannels_sec:30.790000-31.690000(31.03+/-0.34)
routing_sec:0.00
peer_write_all_sec:63.640000-67.860000(66.294+/-1.4)
Channels have an htlc_minimum_msat of 10000, which is why we didn't
find routes.
Fixing this causes a significant speed drop:
-routing_sec:26.940000-27.990000(27.616+/-0.39)
+routing_sec:60.70
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now we print the subtypes out when you call printwire.
Note that we have to reverse the order in which the subtypes appear,
because:
1) they're static, and
2) a few of them are nested.
The original version of the subtype generator emitted '$'
to designate that a field was a subtype; now it's got a different
format:
funding_type,8,num_inputs,2
funding_type,10,input_info,num_inputs*input_info
This patch updates our generator to understand this new format.
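For illustration, the reader our generator emits for those lines would
be shaped roughly like this (the names follow the usual fromwire_*
conventions, but the exact generated code is an assumption):

    /* num_inputs is a u16 at offset 8; input_info is an array of
     * num_inputs subtype entries starting at offset 10. */
    subtype->num_inputs = fromwire_u16(&cursor, &plen);
    subtype->input_info = tal_arr(subtype, struct input_info,
                                  subtype->num_inputs);
    for (size_t i = 0; i < subtype->num_inputs; i++)
        fromwire_input_info(&cursor, &plen, &subtype->input_info[i]);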
This is needed so that some CSVs can expose their subtype parsing
functions in their header. This gets used in a later PR where
we start replacing manually created 'subtype' definitions with
generated ones.
This doesn't result in a speedup for our benchmark, since we use the
CLI tool, which does the formatting.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33422-36830(35196.2+/-1.2e+03)
vsz_kb:2637488
store_rewrite_sec:36.030000-37.630000(36.794+/-0.52)
listnodes_sec:0.720000-0.950000(0.86+/-0.077)
listchannels_sec:40.300000-41.080000(40.668+/-0.29)
routing_sec:30.440000-31.030000(30.69+/-0.2)
peer_write_all_sec:50.060000-52.800000(51.416+/-0.91)
MCP notable changes from 2 patches ago (>1 stddev):
-listchannels_sec:48.560000-55.680000(52.642+/-2.7)
+listchannels_sec:40.300000-41.080000(40.668+/-0.29)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Plugins don't do it right anyway, and we're about to remove it from
lightningd. Produces the same format as json_pp.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can save significant space by combining both sides: so much that we
can reduce the WIRE_LEN_LIMIT to something sane again.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34467-36764(35517.8+/-7.7e+02)
vsz_kb:2637488
store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
listnodes_sec:1.140000-2.780000(1.596+/-0.6)
listchannels_sec:55.390000-58.110000(56.998+/-0.99)
routing_sec:30.330000-30.920000(30.642+/-0.19)
peer_write_all_sec:50.640000-53.360000(51.822+/-0.91)
MCP notable changes from previous patch (>1 stddev):
-store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
+store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't turn them to/from pubkeys implicitly. This means nodeids in the store
don't get converted, but bitcoin keys still do.
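The win comes from keeping node ids as the raw 33 compressed-key bytes
rather than parsed pubkeys; a minimal sketch of the idea (illustrative,
the in-tree struct may differ):

    #include <string.h>

    typedef unsigned char u8;   /* as in ccan/short_types */

    /* A node id is just the compressed-point bytes: no EC
     * decompression needed to load, store or compare it. */
    struct node_id {
        u8 k[33];
    };

    static inline int node_id_cmp(const struct node_id *a,
                                  const struct node_id *b)
    {
        /* A memcmp, vs. a full EC point parse for a struct pubkey. */
        return memcmp(a->k, b->k, sizeof(a->k));
    }

Bitcoin keys, by contrast, still get converted.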
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33934-35251(34531.4+/-5e+02)
vsz_kb:2637488
store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
listnodes_sec:1.020000-1.290000(1.146+/-0.086)
listchannels_sec:51.110000-58.240000(54.826+/-2.5)
routing_sec:30.000000-33.320000(30.726+/-1.3)
peer_write_all_sec:50.370000-52.970000(51.646+/-1.1)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:46184-47474(46673.4+/-4.5e+02)
+store_load_msec:33934-35251(34531.4+/-5e+02)
-vsz_kb:2638880
+vsz_kb:2637488
-store_rewrite_sec:46.750000-48.280000(47.512+/-0.51)
+store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Outputs CSV. We add some stats for load times in developer mode, so we can
easily read them out.
peer_read_all_sec doesn't work, since we seem to reject about half the
updates for having bad signatures. It's also very slow...
Routing fails for unknown reasons, so that failure is ignored in routing_sec.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec:39275-44779(40466.8+/-2.2e+03)
vsz_kb:2899248
store_rewrite_sec:41.010000-44.970000(41.972+/-1.5)
listnodes_sec:2.280000-2.350000(2.304+/-0.025)
listchannels_sec:49.770000-63.390000(59.178+/-5)
routing_sec:33.310000-34.260000(33.62+/-0.35)
peer_write_all_sec:42.100000-44.080000(43.082+/-0.67)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Suggested-by: @niftynei
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
MCP filename change.
tools/bench-gossipd.sh: don't print CSV by default.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Make shellcheck happy.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fix up the printing methods in devtools/decodemsg so that TLVs can
now be printed as well. Here's how you'd use it:
$ ./devtools/decodemsg --tlv opening_tlv 0120001E020202020202020202020202020202020202020202020202020202020202
> WIRE_OPTION_UPFRONT_SHUTDOWN_SCRIPT (size 32):
> shutdown_scriptpubkey=[020202020202020202020202020202020202020202020202020202020202]
TLVs use var_ints for message sizes, both internally and
at the top level (yes, you can really stack a var_int inside a var_int!!).
This updates our automagick generator code to understand var_ints.
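For reference, a minimal CompactSize-style var_int reader (u8/u64 as in
ccan/short_types; the 0xfd/0xfe/0xff prefixes are the standard scheme,
but treat the little-endian byte order here as an assumption to check
against the spec):

    /* Returns bytes consumed, or 0 if the buffer is too short. */
    static size_t varint_get(const u8 *p, size_t max, u64 *val)
    {
        if (max < 1)
            return 0;
        if (p[0] < 0xfd) {
            *val = p[0];
            return 1;
        }
        if (p[0] == 0xfd) {         /* 0xfd: u16 follows */
            if (max < 3)
                return 0;
            *val = (u64)p[1] | ((u64)p[2] << 8);
            return 3;
        }
        if (p[0] == 0xfe) {         /* 0xfe: u32 follows */
            if (max < 5)
                return 0;
            *val = (u64)p[1] | ((u64)p[2] << 8)
                 | ((u64)p[3] << 16) | ((u64)p[4] << 24);
            return 5;
        }
        /* 0xff: u64 follows */
        if (max < 9)
            return 0;
        *val = 0;
        for (int i = 0; i < 8; i++)
            *val |= (u64)p[1 + i] << (8 * i);
        return 9;
    }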
Passing back a null TLV was crashing here, because we tried to
dereference a null pointer. Instead, we put it into a temporary
struct that we can check for NULL-ness before assigning to the
passed-in pointer.
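The pattern is simply (names are illustrative stand-ins for the
generated ones):

    /* Parse into a temporary first: on failure the caller's pointer
     * is never touched, so nobody dereferences NULL later. */
    struct tlv_opening_tlv *tmp = fromwire_opening_tlvs(ctx, &cursor, &plen);
    if (!tmp)
        return false;
    *opening_tlvs = tmp;
    return true;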
Let's let the fromwire_<name>_tlv methods allocate the TLV objects and
return them. We also want to initialize all of their underlying
messages to NULL, and fail if we discover a duplicate message type.
If parsing fails, we return NULL instead of a struct.
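A hedged sketch of that shape (the struct, helpers and record constant
are illustrative, not the generator's actual output):

    struct tlv_opening_tlv *fromwire_opening_tlvs(const tal_t *ctx,
                                                  const u8 **cursor,
                                                  size_t *plen)
    {
        struct tlv_opening_tlv *tlv = tal(ctx, struct tlv_opening_tlv);

        /* Every underlying message starts NULL, i.e. "not present". */
        tlv->option_upfront_shutdown_script = NULL;

        while (*plen > 0) {
            u64 type = fromwire_varint(cursor, plen);
            u64 len = fromwire_varint(cursor, plen);

            if (type == WIRE_OPTION_UPFRONT_SHUTDOWN_SCRIPT) {
                /* A duplicate message type fails the whole parse. */
                if (tlv->option_upfront_shutdown_script)
                    return tal_free(tlv);
                tlv->option_upfront_shutdown_script =
                    fromwire_upfront_shutdown_script(tlv, cursor,
                                                     plen, len);
                if (!tlv->option_upfront_shutdown_script)
                    return tal_free(tlv);
            } else if (!fromwire_skip(cursor, plen, len)) {
                return tal_free(tlv);
            }
        }
        /* tal_free() returns NULL, so every failure above returned NULL. */
        return tlv;
    }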
Suggested-By: @rustyrussell
Since messages in TLVs are optional, the ideal way to deal with
them is to have a 'master struct' object for every defined TLV, where
the presence or absence of a field can be determined via the presence
(or absence) of a struct for each of the optional message
types.
In order to do this appropriately, we need a struct for every
TLV message. The next commit will make use of these.
Note that right now TLV message structs aren't namespaced to the
TLV they belong to, so there's the potential for collision. This
should be fixed when/where it occurs (should fail to compile).
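Concretely, something like this (names are illustrative):

    /* One pointer per optional message type; NULL means the record
     * wasn't present in the TLV stream. */
    struct tlv_msg_option_upfront_shutdown_script {
        u8 *shutdown_scriptpubkey;
    };

    struct tlv_opening_tlv {
        struct tlv_msg_option_upfront_shutdown_script
            *option_upfront_shutdown_script;
        /* ...one field per message type defined for this TLV. */
    };

A caller then just checks tlv->option_upfront_shutdown_script for NULL
rather than carrying a separate presence flag.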
Add tlv-messages to the general messages set so that their parsing
messages get printed out.
FIXME: figure out how to account for partial message length processing?
Version 1.1 of the lightning-rfc spec introduces TLVs for optional
data fields. This starts the process of updating our auto-gen'd
wireformat parsers to be able to understand TLV fields.
The general way to declare a new TLV field is to add a '+' to the
end of the fieldname. All field type declarations for that TLV set
should be added to a file in the same directory by the name
`gen_<field_name>_csv`.
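For example, a hypothetical declaration following the CSV layout shown
earlier (the offset and size columns here are made up):

    open_channel,319,opening_tlv+,opening_tlv

with the records for opening_tlv then defined in gen_opening_tlv_csv in
the same directory.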
Note that the FIXME included in this commit is difficult to fix, as
we currently pass in the CSV files via stdin (so there's no easy
way to ascertain the originating directory of the file).
Otherwise we can't really return a variable-sized message with more than 65k
results. This was causing an integer overflow in `listchannels` (see #2504 for
details).
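The overflow is just 16-bit length arithmetic; a standalone toy
illustration (not the actual wire code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t body_len = 70000;    /* a reply bigger than 65535 bytes */
        uint16_t wire_len = body_len; /* 2-byte length field: wraps to 4464 */
        printf("%u -> %u\n", body_len, (unsigned)wire_len);
        return 0;
    }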
Signed-off-by: Christian Decker <decker.christian@gmail.com>