Use the source-map-fixtures package to provide a fixture which throws an error.
Stack traces should be corrected for this fixture.
Add a script to generate a test fixture that comes with an input source map.
Errors from this fixture should be mapped to the input source. Having the script
makes it easier to update this fixture in the future.
The generated fixture now uses a source map file. Commenting out the
inputSourceMap code in lib/babel.js didn't cause the test to fail if an inline
source map was used. I'm not sure why that is. The test fails as expected when
using a map file.
Note that the original 'stack traces for exceptions are corrected using a source
map' test only tested the source map cached by lib/babel.js. This is already
covered by the new 'stack traces for exceptions are corrected using a source
map, taking an initial source map for the test file into account' test.
We throw the error from another fixture which uses a source map file. This
implicitly tests source-map-support. We're not testing a fixture which uses an
inline source map, firstly because that's somewhat superfluous, and secondly
because the generated stack trace line is considered unimportant due to the
logger's behavior and a source-map-support bug
(<https://github.com/evanw/node-source-map-support/pull/119>).
There were two problems with using destructuring on the `t` parameter
passed to tests:
1. `t.end` is enumerable, but throws when accessed. This broke
destructuring with a rest element (`{ok, ...t}`), since copying the rest
properties invokes the getter (see the sketch below).
The solution: `t.end` is now enumerable only in "callback mode".
2. `t._capt` and `t._expr` are not enumerable, and were therefore not
included in the `...t` rest object.
The only solution was to make them enumerable.
Not ideal, but better than the confusing error that previously occurred.
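A minimal sketch of both behaviors (the property shapes are illustrative, not AVA's actual internals):

```js
// Problem 1: object rest copies every enumerable own property, so an
// enumerable getter that throws breaks `{ok, ...t}` destructuring.
const t = {
	ok(value) { /* assertion */ },
	get end() {
		throw new Error('t.end is only available in callback mode');
	}
};

try {
	const {ok, ...rest} = t; // the rest pattern reads (and invokes) t.end
} catch (error) {
	console.log(error.message);
}

// Problem 2 is the mirror image: rest skips non-enumerable properties, so
// _capt and _expr had to be redefined as enumerable to survive `...t`.
Object.defineProperty(t, '_capt', {
	value() { /* power-assert capture */ },
	enumerable: true
});
```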
There are two possible errors that can result in no test data being
received:
1. The user never imports `ava` in their test file.
2. The user has not written any tests in the file yet.
We treat both as errors, but were handling them differently. Most
problematically, one caused the promise to resolve while the other caused
it to reject.
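A minimal sketch of the unified handling (the function and property names here are illustrative assumptions, not AVA's actual code):

```js
// Both "no test data" causes reject, so callers handle a single failure
// path instead of a resolve/reject split.
function resolveTestData(results) {
	if (!results.avaWasImported) {
		return Promise.reject(new Error('No tests found: ava was never imported'));
	}
	if (results.testCount === 0) {
		return Promise.reject(new Error('No tests found: the file declares no tests'));
	}
	return Promise.resolve(results);
}
```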
Nothing wrong with the implementation, but the test was only
checking test duration, not that the test passed. The test
was incorrectly using `t.end`, and so finished more quickly than expected.
I want to start focusing on getting our require times down,
both in the main thread and forked tests.
This adds `time-require` support directly to the CLI to assist in that.
This is especially important for forked test processes, where I currently
have to manually add a line of code each time I want to test an optimization.
I don't think this should be a publicly documented feature. I think we want
the freedom to remove it once we are happy with performance.
I have placed comments above all `time-require` references, indicating
it is for internal use only. Users have been given fair warning against
relying on its inclusion long term.
Usage:
```sh
$ DEBUG=ava ./cli.js test/fixture/es2015.js --sorted
```
`--sorted` is optional. It sorts the requires from most expensive to least; by default they are listed in the order they were required.
The output will display multiple `time-require` reports:
one for each forked process, with the last one covering the main thread.
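For reference, a sketch of how the hook can be wired up; the exact `DEBUG` check is an assumption on my part, not necessarily how cli.js does it:

```js
// Internal use only: time-require must be loaded before anything else so
// it can intercept every subsequent require call. May be removed once we
// are happy with require performance.
if (/\bava\b/.test(process.env.DEBUG || '')) {
	require('time-require');
}
```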
AVA will no longer automatically end tests when the planned number of
assertions is reached. Users must explicitly call `t.end()`.
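For example (a sketch, using the `test.async` mode described below):

```js
const test = require('ava');

// Reaching the planned count no longer ends the test implicitly; it only
// finishes once t.end() is called.
test.async('plan no longer implies end', t => {
	t.plan(2);
	t.ok(true);
	t.ok(true); // previously the test would have ended here automatically
	t.end();
});
```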
It is now an error to return an async object (promise/observable) from
a legacy `test.async` test.
Reference:
https://github.com/sindresorhus/ava/issues/244#issuecomment-159348943
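A sketch of the now-forbidden pattern:

```js
const test = require('ava');

// Now an error: a test.async test must signal completion with t.end(),
// not by returning a promise or observable.
test.async('do not mix styles', t => {
	return Promise.resolve(); // AVA now treats this as an error
});
```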
This passes terminal width information to the forked process and enables
ANSI color support in it. It makes the output of `time-require`
much prettier when used in a fork.
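One possible way to forward that information (the worker path and the `FORCE_COLOR`/`COLUMNS` conventions here are assumptions; AVA's actual mechanism may differ):

```js
const childProcess = require('child_process');

// Forward terminal width and color capability to the worker, whose stdout
// is a pipe and would otherwise report no TTY support.
const worker = childProcess.fork('./lib/test-worker.js', [], {
	env: Object.assign({}, process.env, {
		COLUMNS: String(process.stdout.columns || 80),
		FORCE_COLOR: process.stdout.isTTY ? '1' : '0'
	})
});
```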
Accessing `t.end` now throws an error unless the test is first
declared async via `test.async([title], fn)`.
Reference:
https://github.com/sindresorhus/ava/issues/244
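A sketch of the resulting behavior:

```js
const test = require('ava');

test('sync test', t => {
	t.end(); // now throws: t.end requires a test declared with test.async
});

test.async('async test', t => {
	setTimeout(() => {
		t.ok(true);
		t.end(); // allowed: the test was declared async
	}, 100);
});
```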
(cherry picked from commit 28b1641)
Initial implementation of #244 - require explicit test.async