* Make the Runner manage the snapshot state. Thread an accessor to the
`t.snapshot()` assertion.
* Save snapshots when the runner has finished. Fixes #1218.
* Use jest-snapshot directly, without serializing values. Use jest-diff
to generate the diff if the snapshot doesn't match. This means the
output is not colored and differs subtly from how other assertions
format values, but that's OK for now. Fixes #1220, #1254.
* Pin jest-snapshot and jest-diff versions. This isn't ideal but we're
using private APIs and making other assumptions, so I'm not comfortable
with using a loose SemVer range.
Show instructions on how to use `t.throws()` with the test results,
without writing them to stderr. Detect the improper usage even if user
code swallows the error, meaning that tests will definitely fail.
Assume that any error thrown synchronously, emitted from an observable, or
used to reject the returned promise is due to improper usage of
`t.throws()`.
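The misuse being guarded against is writing `t.throws(foo())` instead of `t.throws(() => foo())`: `foo()` runs, and possibly throws, before the assertion ever sees it. A minimal sketch of the detection, with an assumed helper name (AVA's real check is more involved and also covers promises and observables):

```javascript
// Hypothetical sketch of the improper-usage detection (name assumed).
function checkThrowsArgument(value) {
  if (typeof value !== 'function') {
    // The caller likely wrote t.throws(foo()): foo() already ran, so any
    // error it threw never reached the assertion. Flag improper usage so
    // the test fails even if user code swallowed the error.
    return {improper: true, threw: false};
  }
  try {
    value();
    return {improper: false, threw: false};
  } catch (err) {
    return {improper: false, threw: true, error: err};
  }
}
```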
Assume that if a test has a pending throws assertion, and an error leaks
as an uncaught exception or an unhandled rejection, the error was
thrown due to the pending throws assertion. Attribute it to the test.
Rather than keeping an infinite timer open, waiting for `t.end()` to be
called, fail the callback test if it is not ended when the event loop
empties.
Similarly, fail promise/observable returning tests if the promise hasn't
fulfilled or the observable hasn't completed when the event loop
empties.
Note that user code can keep the event loop busy, e.g. if it's listening
on a socket or starts long timers. Even after this change, async tests
may hang if the event loop is kept busy.
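The replacement for the infinite timer can be sketched as a registry the runner consults when the event loop empties (e.g. from a `process.on('beforeExit')` handler). This is a hypothetical sketch with assumed names, simulated synchronously so the decision logic is visible:

```javascript
// Hypothetical sketch (names assumed): instead of an open-ended timer
// waiting for t.end(), the runner checks for unfinished tests once the
// event loop has emptied.
class PendingTests {
  constructor() {
    this.pending = new Map(); // title -> test record
  }

  start(title) {
    const record = {title, finished: false, error: null};
    this.pending.set(title, record);
    return () => { record.finished = true; }; // what t.end() would invoke
  }

  // Invoked when the event loop has emptied. Any test still pending fails
  // rather than hanging forever. User code that keeps the event loop busy
  // (open sockets, long timers) still delays this point.
  onEventLoopEmpty() {
    const failures = [];
    for (const record of this.pending.values()) {
      if (!record.finished) {
        record.error = new Error(`Test never ended: ${record.title}`);
        failures.push(record);
      }
    }
    return failures;
  }
}
```

The same check covers promise- and observable-returning tests: an unfulfilled promise or uncompleted observable is just another unfinished record when the loop drains.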
* Clarify responsibilities
* Consistently import dependencies
* Clarify significance of `exports.avaRequired`
* Stop mimicking process with process-adapter. Reference it using
`adapter` instead. Use the `process` global where applicable. Masking
the process global with a mimicked object is unnecessarily confusing.
* Remove superstitious delays in exiting workers
The worker now only exits when told by the main process. This means the
IPC channel must have drained before the main process can send the
instruction. There's no need to wait before sending the message that
teardown has completed.
The AppVeyor workaround was introduced to solve Node.js 0.10 issues.
We're no longer supporting that version. In theory, issues around
flushing I/O exist regardless of whether AVA is running in AppVeyor.
There is no clear way around this though, so let's assume it's not
actually an issue.
* Ensure Test#run() returns exit promise
This allows errors from the exit logic to propagate to the runner.
* Treat runner errors as uncaught exceptions
Errors that occur inside the runner are treated as uncaught exceptions.
This prevents them from being swallowed in the promise chain, causing
the test to hang.
The main process sets the AVA_PATH environment variable to the absolute
path of the index.js file. Workers are loaded with this variable
present. When test files require the AVA module (assuming it's a version
containing this commit, of course), its index.js compares AVA_PATH and, if
necessary, redirects to the copy used by the worker.
The redirect required most of index.js to be moved into a separate
module (lib/main.js).
Fixes #643.
* ava-files should honor a `cwd` option.
* Properly resolve test files in non-watch mode.
* update test/watcher.js to reflect the new AvaFiles API
* extract ava-files into its own module
* fix(cli): Remove default files from CLI
The default files should be in one place (`ava-files.js`).
Right now the defaults provided in `ava-files.js` aren't being used
because the CLI pre-populates them.
Closes #875
* Fixup #876
Since #713 the API no longer emits 'dependencies' events. These are emitted
instead by the RunStatus, which can be obtained by listening to the 'test-run'
event on the API.
The watcher tests use a mocked API object which wasn't updated to reflect these
changes. Consequently dependency tracking was broken since #713 was merged.
This commit fixes the watcher and the corresponding tests. I've also added an
integration test which does not rely on mocking, helping us detect breakage
sooner.
* detect improper use of t.throws
Protects against a common misuse of t.throws (like the one seen in #739).
This required the creation of a custom babel plugin.
https://github.com/jamestalmage/babel-plugin-ava-throws-helper
* relative file path and colors
* protect against null/undefined in _setAssertError
* use babel-code-frame to do syntax highlighting on the Error
* require `babel-code-frame` inline. It has a sizable dependency graph
* remove middle section of message. It is redundant given code-frame
* further tests and add documentation.
* update readme.md
* refactor `ok` to `truthy` and `notOk` to `falsy`
* update tests to be more explicit
* update docs to use a better assertion api
* realign power-assert output
* quick typo fix
* update assertions
clean-yaml-object is the error serialization tool used by node-tap.
It has some nice benefits over serialize-error, including better
stringification of functions and buffers. More importantly, the shared code
will help keep our TAP output consistent with that of node-tap.
Catch exceptions when initially running files. Don't run any tests, just report
the exception.
Move test teardown into a callback for the run promise so it can be used when
--match is used but there are no matches, and when exceptions occur while
initially running files.
Teardown causes the test promise to reject, which leads to an attempt to run
the tests. Add a guard to prevent this.
Fixes #622.
If --match is used but no tests match, report an error rather than running all
tests.
Exclusive tests are now marked as no longer exclusive if they don't match.
Set defaults for the 'match' option in the Api and Runner.
Fixes #606.
Test workers keep track of test dependencies for all registered file extensions.
These are sent back to the Api as part of the 'teardown' event, ensuring they're
captured irrespective of whether tests passed.
The watcher rejects any dependencies that are not source files, matching how
Chokidar watches for source file modifications. It maintains a list of source
dependencies for each test file.
The watcher will find the test files that depend on modified source files and
rerun those (along with any other modified test files). If any modified source
files cannot be mapped to test files, all tests are rerun. This is necessary
because only `require()` dependencies are tracked, not file access.
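The rerun decision can be sketched as a pure function. This is a hypothetical sketch with assumed names; the real watcher works incrementally against Chokidar events rather than over a complete map:

```javascript
// Hypothetical sketch of the watcher's rerun decision (names assumed).
// depsByTestFile: Map of test file -> array of source files it require()d.
function testsToRerun(depsByTestFile, modifiedSources, modifiedTests) {
  const rerun = new Set(modifiedTests);
  for (const source of modifiedSources) {
    let mapped = false;
    for (const [testFile, deps] of depsByTestFile) {
      if (deps.includes(source)) {
        rerun.add(testFile);
        mapped = true;
      }
    }
    // Only require() dependencies are tracked, not file access, so a
    // modified source file nobody is known to depend on forces a full rerun.
    if (!mapped) {
      return {all: true, files: []};
    }
  }
  return {all: false, files: [...rerun]};
}
```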
The child processes determine whether the test had an error based on its `error`
property not being `undefined`. However they then change this property to an
empty object if there was no error.
The API determines whether a test had an error based on its (empty) error object
having a `message` property. If a test did not have an error, its `error`
property is set to `null`. This prevents errors without messages from being
reported correctly.
Instead set `error` to `null` in the child processes and rely on that in the
API.
I've added a test to validate errors without messages get reported.
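The fix can be sketched in a few lines (function names assumed, and the serialized shape simplified): the child sends `null` when there was no error, so the receiving side no longer infers "no error" from a missing `message` property.

```javascript
// Hypothetical sketch of the fix described above (names assumed).
function serializeTestResult(test) {
  return {
    title: test.title,
    // Send null, not an empty object, when there was no error.
    error: test.error
      ? {message: test.error.message, stack: test.error.stack}
      : null
  };
}

function hasError(result) {
  // Errors without messages are still errors.
  return result.error !== null;
}
```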
Ensures that skipped tests are also counted when determining whether or
not a file contains tests. My own use case is a multi-file test suite
where one or more test files contain nothing but skipped tests because
they test functionality yet to be implemented.
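The counting change amounts to including the skip count in the emptiness check. A minimal sketch, with the stat property names assumed:

```javascript
// Hypothetical sketch (property names assumed): a file counts as having
// tests if it has any passed, failed, or skipped tests.
function fileHasTests(stats) {
  return stats.passCount + stats.failCount + stats.skipCount > 0;
}
```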