This test will never be reliable under high load without making it
unable to detect real errors. But it's still worth having: without it,
we would never notice if we broke the constant-timedness of our
implementation.
So, move the calloc out of the test loop (which seems to make it more
reliable), and after the run check the 1-minute load average: if it's
too high, we don't complain about the results. That's not perfect, but
it's better.
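For illustration, here's a sketch of the load-average guard, assuming
glibc's getloadavg(); the threshold and helper name are invented for
this sketch, not taken from the test:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Illustrative threshold: a 1-minute load average above this means
     * the box was busy, so the timing results are suspect. */
    #define MAX_LOAD_AVG 1.0

    static bool results_trustworthy(void)
    {
            double load;

            /* getloadavg() fills in up to three samples (1, 5 and 15
             * minutes); we only need the 1-minute figure. */
            if (getloadavg(&load, 1) != 1)
                    return true; /* can't tell, assume OK */
            return load <= MAX_LOAD_AVG;
    }

If this returns false after the timing run, the test can warn and skip
its assertions rather than report a spurious failure.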
Running the test 100 times serially (at -O3) gave 100 successes, with
the following results:
Constant: within 5% in 562-926 (832.89 +/- 73) of 1000 runs
Non-constant: more than 5% slower in 860-990 (956.35 +/- 26) of 1000 runs
More importantly, if we swap the const and non-const tests, we get
the expected 100 failures:
Non-constant: within 5% in 14-79 (41.17 +/- 14) of 1000 runs
Constant: more than 5% slower in 44-231 (111.89 +/- 33) of 1000 runs
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is more reliable under load now: shorten the times so each run is
likely to fit in a single timeslice, and add a nanosleep first so each
run is likely to start at the beginning of one.
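As a sketch, the trick looks something like this (the sleep duration
is an assumption, not necessarily what the test uses):

    #include <time.h>

    /* A short sleep makes the scheduler likely to wake us at the start
     * of a fresh timeslice, so the timing run that follows can finish
     * without being preempted. */
    static void align_to_timeslice(void)
    {
            struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000000 };

            nanosleep(&ts, NULL);
    }

Calling this immediately before each timed run means preemption noise
mostly lands between measurements rather than inside them.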
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We check that memcmp *isn't* constant time, but that's only true at
-O2 or above, and __OPTIMIZE__ can't tell us which: GCC defines it to
1 whenever any optimization is enabled, regardless of level.
So we need a finer-grained test. Also reduce verbosity by default.
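One way to get that finer grain is a runtime probe: only assert that
memcmp is non-constant-time when memcmp demonstrably exits early on
this particular build. This is a self-contained sketch of that idea;
the buffer size, run count and 5% margin are illustrative, and the
test's actual probe may differ:

    #include <stdbool.h>
    #include <string.h>
    #include <time.h>

    #define PROBE_LEN 4096
    #define PROBE_RUNS 1000

    /* A volatile sink stops the compiler eliding the memcmp calls. */
    static volatile int sink;

    static double time_memcmp_ns(const char *a, const char *b)
    {
            struct timespec start, end;
            int i;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (i = 0; i < PROBE_RUNS; i++)
                    sink = memcmp(a, b, PROBE_LEN);
            clock_gettime(CLOCK_MONOTONIC, &end);

            return (end.tv_sec - start.tv_sec) * 1e9
                    + (end.tv_nsec - start.tv_nsec);
    }

    /* Does memcmp return noticeably faster when the first byte differs
     * than when only the last byte does? */
    static bool memcmp_exits_early(void)
    {
            static char base[PROBE_LEN], early[PROBE_LEN], late[PROBE_LEN];

            early[0] = 1;
            late[PROBE_LEN - 1] = 1;

            return time_memcmp_ns(base, early) * 1.05
                    < time_memcmp_ns(base, late);
    }

If the probe shows no early exit (which, per the above, can happen
below -O2), the non-constant-time assertion is skipped rather than
failed.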
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
To be safe, we should never memcmp secrets: memcmp may return as soon
as it finds a differing byte, so its timing can leak where the buffers
first differ. We don't do this currently outside tests, but we're
about to.
The tricky bit is the tests proving the replacement is constant time.
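The usual technique XORs each pair of bytes and ORs the results
together, so the time taken doesn't depend on where (or whether) the
buffers differ. A minimal sketch of that approach, not necessarily the
exact implementation added here:

    #include <stdbool.h>
    #include <stddef.h>

    /* Constant-time comparison: always touches every byte,
     * accumulating differences instead of returning early. */
    static bool memeq_consttime(const void *a, const void *b, size_t len)
    {
            const unsigned char *pa = a, *pb = b;
            unsigned char diff = 0;
            size_t i;

            for (i = 0; i < len; i++)
                    diff |= pa[i] ^ pb[i];

            return diff == 0;
    }

An aggressive compiler can still transform a loop like this, which is
exactly why the timing tests described above exist: measuring is the
only way to be sure.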
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>