* package-hash@^2
* Allow precompiler setup to be asynchronous
* Consistently refer to babel-config module as babelConfigHelper
* Manage Babel config using hullabaloo
Fixes #707
* Disable Babel cache when precompiling
* Rename pkgDir to projectDir: this better communicates it's the
directory of the user's project.
* Remove unused cwd option from API.
* Remove default for resolveTestsFrom. It's confusing to see
process.cwd() in the code even though it's never actually used.
* Ensure API test properly creates the API instances.
* ava-files should honor a `cwd` option.
* Properly resolve test files in non-watch mode.
* update test/watcher.js to reflect new AvaFiles Api
* extract ava-files into its own module
Sometimes the API sends exceptions to RunStatus. Ensure the file path for those
exceptions is relative, just like the exceptions coming from the fork emitter.
If the exception doesn't apply to any specific file, ensure the `file` is
undefined. I haven't bothered adding tests for this though.
* extract file processing from api
* extract test-data class from API
* fix breakage from rebase
* pass testData to reporters in every method
* rename testData to runStatus
* runStatus.listenToTestRun => runStatus.observeFork
* rename test-data.js to run-status.js
* fix failing watcher test
clean-yaml-object is the error serialization tool used by node-tap.
It has some nice benefits over serialize-error, including better stringification of functions and buffers.
More importantly, the shared code will help keep our tap output consistent with that of node-tap.
A second argument can be passed to Api#run(). If true the tests will be run
in exclusive mode, regardless of whether exclusive tests are detected.
The Api now re-emits the 'stats' event from the forks.
The watcher keeps track of exclusive tests. If all test files that contained
exclusive tests need to be rerun it runs them without forcing exclusive mode.
This means the exclusivity is determined by the tests themselves.
If a test file containing exclusive tests is not one of the files being rerun,
it forces exclusive mode. This ensures only exclusive tests are run in the
changed files, making .only sticky.
If all test files that contained exclusive tests are removed, sticky mode is
disabled. The same happens if there are no more exclusive tests after a run.
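The sticky-.only decision could be summarized as follows (a minimal sketch with hypothetical names; `filesWithExclusiveTests` is the set tracked from the previous run, `dirtyTestFiles` the files about to be rerun):

```javascript
// Returns whether the watcher should force exclusive mode for a rerun.
function shouldRunExclusive(filesWithExclusiveTests, dirtyTestFiles) {
  if (filesWithExclusiveTests.size === 0) {
    return false; // no .only anywhere, run normally
  }
  // If every file that contained .only is part of this rerun, the tests
  // themselves determine exclusivity; otherwise force exclusive mode so
  // .only stays sticky while unrelated files change.
  for (const file of filesWithExclusiveTests) {
    if (!dirtyTestFiles.includes(file)) {
      return true;
    }
  }
  return false;
}
```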
Fixes #593.
Catch exceptions when initially running files. Don't run any tests, just report
the exception.
Move test teardown into a callback for the run promise so it can be used when
--match is used but there are no matches, and when exceptions occur while
initially running files.
Teardown causes the test promise to reject, which leads to an attempt to run
the tests. Add a guard to prevent this.
Fixes #622.
If --match is used but no tests match, report an error rather than running all
tests.
Now exclusive tests are marked as no longer exclusive if they don't match. Set
defaults for the 'match' option in the Api and Runner.
Fixes #606.
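A simplified sketch of the --match behaviour described above (assumed helper name and error message, not AVA's exact implementation):

```javascript
// Applies a title matcher: non-matching tests lose their exclusive
// flag, and if nothing matches at all an error is thrown instead of
// running every test.
function applyMatch(tests, matcher) {
  let anyMatched = false;
  for (const test of tests) {
    const matches = matcher(test.title);
    anyMatched = anyMatched || matches;
    if (!matches) {
      test.exclusive = false; // .only no longer wins if the title doesn't match
    }
  }
  if (!anyMatched) {
    throw new Error('No tests matched'); // hypothetical message
  }
  return tests;
}
```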
* Move the bulk of tryRun() out of the forEach into a run() method
* Bail out early to reduce nesting
* Count down to 0 rather than up to the filesCount
* Guard more explicitly against repeated tries when stats are emitted *and*
the process crashes to prevent premature test runs
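The countdown-with-guard pattern from the bullets above could be sketched like this (hypothetical factory, not the actual tryRun()/run() code):

```javascript
// Each file is tried at most once, whether its fork emits 'stats' or
// crashes before doing so; run() fires only when the count reaches 0.
function makeRunner(filesCount, run) {
  let remaining = filesCount;
  const tried = new Set();
  return function tryRun(file) {
    if (tried.has(file)) {
      return; // already counted: stats arrived *and* the process crashed
    }
    tried.add(file);
    if (--remaining === 0) {
      run();
    }
  };
}
```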
Explicitly normalize test file paths as they're discovered in `api.js`. They end
up being normalized implicitly in `fork.js` but it's saner if test file paths
always use backslashes on Windows.
Assume test file and source patterns use slashes. Ensure such when recursing
directories in `api.js` and when matching files in `watcher.js`.
Fix watcher tests to emit dependencies and chokidar changes using
platform-specific paths.
Test workers keep track of test dependencies for all registered file extensions.
These are sent back to the Api as part of the 'teardown' event, ensuring they're
captured irrespective of whether tests passed.
The watcher rejects any dependencies that are not source files, matching how
Chokidar watches for source file modifications. It maintains a list of source
dependencies for each test file.
The watcher will find the test files that depend on modified source files and
rerun those (along with any other modified test files). If any modified source
files cannot be mapped to test files all tests are rerun. This is necessary
because only `require()` dependencies are tracked, not file access.
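The mapping from modified sources back to test files could be sketched as follows (assumed data shape; `sourceDeps` maps each test file to the source files it required, as reported in the 'teardown' event):

```javascript
// Returns the test files to rerun, or null if a modified source file
// cannot be mapped to any test file (in which case everything reruns).
function testsToRerun(sourceDeps, modifiedSources) {
  const rerun = new Set();
  for (const source of modifiedSources) {
    let mapped = false;
    for (const [testFile, deps] of sourceDeps) {
      if (deps.includes(source)) {
        rerun.add(testFile);
        mapped = true;
      }
    }
    if (!mapped) {
      return null; // unknown dependency: rerun all tests
    }
  }
  return [...rerun];
}
```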
The child processes determine whether the test had an error based on its `error`
property not being `undefined`. However they then change this property to an
empty object if there was no error.
The API determines whether a test had an error based on its (empty) error object
having a `message` property. If a test did not have an error, its `error`
property is set to `null`. This prevents errors without messages from being
reported correctly.
Instead set `error` to `null` in the child processes and rely on that in the
API.
I've added a test to validate errors without messages get reported.
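The convention after the fix could be sketched like this (simplified, with assumed names; the real serialization carries more fields):

```javascript
// The child process sets `error` to null when the test passed...
function serializeResult(test) {
  return {
    title: test.title,
    error: test.error === undefined ? null : test.error
  };
}

// ...and the API checks for null rather than for a `message` property,
// so errors without a message are still reported.
function hadError(result) {
  return result.error !== null;
}
```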