If a test is rejected with a non-Error reason, an AssertionError is created.
As when a non-Error is thrown, use inspect() to include a string
representation of the rejection reason.
If a test throws synchronously, but the error is not an Error instance, create
an AssertionError, as is done for rejected promises.
Use inspect() to include a string representation of the actual error, and
include it in the `actual` property.
Remove special handling which converted `undefined` error values to the
`'undefined'` string.
The child processes determine whether the test had an error based on its `error`
property not being `undefined`. However, they then change this property to an
empty object if there was no error.
The API determines whether a test had an error based on its (empty) error object
having a `message` property; if a test did not have an error, its `error`
property is set to `null`. This prevents errors without messages from being
reported correctly.
Instead set `error` to `null` in the child processes and rely on that in the
API.
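An illustrative sketch of the convention described above (not AVA's actual code): the child process serializes `error` as `null` when the test passed, and the API relies on that null check instead of probing for a `message`.

```javascript
// Serialize a test result in the child process. `error` is null when the
// test passed; otherwise the error is copied over, message or no message.
function serializeTest(test) {
  return {
    title: test.title,
    error: test.error === undefined ? null : {
      message: test.error.message,
      stack: test.error.stack
    }
  };
}

// The API side: a null check, so errors without messages still count.
function testFailed(result) {
  return result.error !== null;
}

const passed = serializeTest({title: 'passes'});
const failed = serializeTest({title: 'fails', error: new Error('')});
console.log(testFailed(passed), testFailed(failed));
```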
I've added a test to validate that errors without messages are reported.
Ensures that skipped tests are also counted when determining whether or
not a file contains tests. My own use case is a multi-file test suite
where one or more test files contain nothing but skipped tests, because
they test functionality that has yet to be implemented.
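The counting change can be sketched as follows (the property names are hypothetical, not AVA's actual stats object):

```javascript
// Count skipped tests too when deciding whether a file defines any tests.
function fileHasTests(stats) {
  return stats.passCount + stats.failCount + stats.skipCount > 0;
}

// A file containing only skipped tests now counts as having tests.
console.log(fileHasTests({passCount: 0, failCount: 0, skipCount: 3}));
```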
This prevents interference between the mini reporter and child processes that use `console.log`. The mini reporter previously used logUpdate, which deletes lines previously written with the logUpdate API before writing a new one. This caused problems when lines from `console.log` were written in between logUpdate calls: it would delete the user's log messages instead of the test status line. To fix this, we store the last written log line, clear it, write the user's log output, then restore the last log line. This keeps the miniReporter output at the bottom of the log output at all times.
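The clear/write/restore pattern can be sketched with a minimal stand-in for a log-update style writer (the real `log-update` package erases its previous frame before writing a new one; everything here is illustrative, not AVA's actual code):

```javascript
// Minimal fake of a log-update style writer, recording writes into `out`.
function createLogUpdate(out) {
  const fn = frame => out.push('STATUS: ' + frame);
  fn.clear = () => out.push('CLEAR');
  return fn;
}

let lastStatus = '';
function writeStatus(logUpdate, line) {
  lastStatus = line; // remember the status line so it can be restored later
  logUpdate(line);
}
function forwardChildOutput(logUpdate, out, chunk) {
  logUpdate.clear();           // clear the status line, not the user's output
  out.push('USER: ' + chunk);  // pass the child's console.log output through
  logUpdate(lastStatus);       // restore the status line at the bottom
}

const out = [];
const lu = createLogUpdate(out);
writeStatus(lu, '1 passed');
forwardChildOutput(lu, out, 'hello from a test');
console.log(out.join('\n'));
```

The status line always ends up as the final entry, so user output is never overwritten.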
It also fixes an incorrect use of the `child_process` API. We were passing the `stdio` option to `child_process.fork`, but that option is ignored there (it is honored by just about every other method in that API). See: 7b355c5bb3/lib/child_process.js (L49-L50)
It also adds a set of visual tests which can be run via `npm run visual`. They must be run manually, and should be run as part of our pre-release process.
This fixes that regression, and extends the promise-returning behavior to all assertions.
This is useful in `async` test functions for bailing out of the remaining assertions if some precondition fails: simply `await` the assertion.
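A sketch of a promise-returning assertion and the bail-out behavior (the API shape is illustrative, not AVA's actual assertion interface):

```javascript
// A promise-returning equality assertion: resolves on pass, rejects on fail.
function is(actual, expected) {
  return actual === expected
    ? Promise.resolve()
    : Promise.reject(new Error(`${actual} !== ${expected}`));
}

let reached = false;
async function testBody() {
  await is(1 + 1, 2); // passes, execution continues
  await is('a', 'b'); // rejects, so the line below never runs
  reached = true;
}

testBody().catch(err => console.log('bailed: ' + err.message));
```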
There was an incorrect plan count in one of the tests. It's troubling that this did not cause a failure.
Issue filed in `tap` project:
https://github.com/isaacs/node-tap/issues/198
Don't ignore files starting with '_'
Ignore files starting with '_'
Fix path for fixtures
Add 'fixtures' path to tests
Move ignored fixtures to their own directory