The `test` package provides a standard way of writing and running tests in Dart.
- Writing Tests
- Running Tests
- Asynchronous Tests
- Running Tests With Custom HTML
- Configuring Tests
- Tagging Tests
- Debugging
- Browser/VM Hybrid Tests
- Testing with barback
- Further Reading
Tests are specified using the top-level `test()` function, and test assertions are made using `expect()`:
import "package:test/test.dart";
void main() {
test("String.split() splits the string on the delimiter", () {
var string = "foo,bar,baz";
expect(string.split(","), equals(["foo", "bar", "baz"]));
});
test("String.trim() removes surrounding whitespace", () {
var string = " foo ";
expect(string.trim(), equals("foo"));
});
}
Tests can be grouped together using the `group()` function. Each group's description is added to the beginning of its tests' descriptions.
import "package:test/test.dart";
void main() {
group("String", () {
test(".split() splits the string on the delimiter", () {
var string = "foo,bar,baz";
expect(string.split(","), equals(["foo", "bar", "baz"]));
});
test(".trim() removes surrounding whitespace", () {
var string = " foo ";
expect(string.trim(), equals("foo"));
});
});
group("int", () {
test(".remainder() returns the remainder of division", () {
expect(11.remainder(3), equals(2));
});
test(".toRadixString() returns a hex string", () {
expect(11.toRadixString(16), equals("b"));
});
});
}
Any matchers from the `matcher` package can be used with `expect()` to do complex validations:
import "package:test/test.dart";
void main() {
test(".split() splits the string on the delimiter", () {
expect("foo,bar,baz", allOf([
contains("foo"),
isNot(startsWith("bar")),
endsWith("baz")
]));
});
}
You can use the `setUp()` and `tearDown()` functions to share code between tests. The `setUp()` callback will run before every test in a group or test suite, and `tearDown()` will run after. `tearDown()` will run even if a test fails, to ensure that it has a chance to clean up after itself.
import "package:test/test.dart";
void main() {
var server;
var url;
setUp(() async {
server = await HttpServer.bind('localhost', 0);
url = Uri.parse("http://${server.address.host}:${server.port}");
});
tearDown(() async {
await server.close(force: true);
server = null;
url = null;
});
// ...
}
A single test file can be run using `pub run test path/to/test.dart`, and many tests can be run at once using `pub run test path/to/dir`. It's also possible to run a test directly on the Dart VM by invoking it with `dart path/to/test.dart`, but this doesn't load the full test runner and will be missing some features.
The test runner considers any file that ends with `_test.dart` to be a test file. If you don't pass any paths, it will run all the test files in your `test/` directory, making it easy to test your entire application at once.
You can select specific test cases to run by name using `pub run test -n "test name"`. The string is interpreted as a regular expression, and only tests whose description (including any group descriptions) matches that regular expression will be run. You can also use the `-N` flag to run tests whose names contain a plain-text string.
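For instance, using the String tests from the earlier examples, invocations along these lines would select just the split test (a sketch; the descriptions come from the example file above):

```shell
# Run only tests whose descriptions match the regular expression "String.split".
$ pub run test -n "String.split"

# Run only tests whose descriptions contain the literal text "split".
$ pub run test -N "split"
```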
By default, tests are run in the Dart VM, but you can run them in the browser as well by passing `pub run test -p chrome path/to/test.dart`. `test` will take care of starting the browser and loading the tests, and all the results will be reported on the command line just like for VM tests. In fact, you can even run tests on both platforms with a single command: `pub run test -p "chrome,vm" path/to/test.dart`.
Some test files only make sense to run on particular platforms. They may use `dart:html` or `dart:io`, they might test Windows' particular filesystem behavior, or they might use a feature that's only available in Chrome. The `@TestOn` annotation makes it easy to declare exactly which platforms a test file should run on. Just put it at the top of your file, before any `library` or `import` declarations:
@TestOn("vm")
import "dart:io";
import "package:test/test.dart";
void main() {
// ...
}
The string you pass to `@TestOn` is what's called a "platform selector", and it specifies exactly which platforms a test can run on. It can be as simple as the name of a platform, or a more complex Dart-like boolean expression involving these platform names. You can also declare that your entire package only works on certain platforms by adding a `test_on` field to your package config file.
Platform selectors use the boolean selector syntax defined in the `boolean_selector` package, which is a subset of Dart's expression syntax that only supports boolean operations. The following identifiers are defined:
- `vm`: Whether the test is running on the command-line Dart VM.
- `dartium`: Whether the test is running on Dartium.
- `content-shell`: Whether the test is running on the headless Dartium content shell.
- `chrome`: Whether the test is running on Google Chrome.
- `phantomjs`: Whether the test is running on PhantomJS.
- `firefox`: Whether the test is running on Mozilla Firefox.
- `safari`: Whether the test is running on Apple Safari.
- `ie`: Whether the test is running on Microsoft Internet Explorer.
- `dart-vm`: Whether the test is running on the Dart VM in any context, including Dartium. It's identical to `!js`.
- `browser`: Whether the test is running in any browser.
- `js`: Whether the test has been compiled to JS. This is identical to `!dart-vm`.
- `blink`: Whether the test is running in a browser that uses the Blink rendering engine.
- `windows`: Whether the test is running on Windows. If `vm` is false, this will be false as well.
- `mac-os`: Whether the test is running on Mac OS. If `vm` is false, this will be false as well.
- `linux`: Whether the test is running on Linux. If `vm` is false, this will be false as well.
- `android`: Whether the test is running on Android. If `vm` is false, this will be false as well, which means that this won't be true if the test is running on an Android browser.
- `ios`: Whether the test is running on iOS. If `vm` is false, this will be false as well, which means that this won't be true if the test is running on an iOS browser.
- `posix`: Whether the test is running on a POSIX operating system. This is equivalent to `!windows`.
For example, if you wanted to run a test on every browser but Chrome, you would write `@TestOn("browser && !chrome")`.
Tests can be run on Dartium by passing the `-p dartium` flag. If you're using Mac OS, you can install Dartium using Homebrew. Otherwise, make sure there's an executable called `dartium` (on Mac OS or Linux) or `dartium.exe` (on Windows) on your system path.
Similarly, tests can be run on the headless Dartium content shell by passing `-p content-shell`. The content shell is installed along with Dartium when using Homebrew. Otherwise, you can download it manually from this page; if you do, make sure the executable named `content_shell` (on Mac OS or Linux) or `content_shell.exe` (on Windows) is on your system path.
In the future, there will be a more explicit way to configure the location of both the Dartium and content shell executables.
Tests written with `async`/`await` will work automatically. The test runner won't consider the test finished until the returned `Future` completes.
import "dart:async";
import "package:test/test.dart";
void main() {
test("new Future.value() returns the value", () async {
var value = await new Future.value(10);
expect(value, equals(10));
});
}
There are also a number of useful functions and matchers for more advanced asynchrony. The `completion()` matcher can be used to test `Future`s; it ensures that the test doesn't finish until the `Future` completes, and runs a matcher against that `Future`'s value.
import "dart:async";
import "package:test/test.dart";
void main() {
test("new Future.value() returns the value", () {
expect(new Future.value(10), completion(equals(10)));
});
}
The `throwsA()` matcher and the various `throwsExceptionType` matchers work with both synchronous callbacks and asynchronous `Future`s. They ensure that a particular type of exception is thrown:
import "dart:async";
import "package:test/test.dart";
void main() {
test("new Future.error() throws the error", () {
expect(new Future.error("oh no"), throwsA(equals("oh no")));
expect(new Future.error(new StateError("bad state")), throwsStateError);
});
}
The `expectAsync1()` function (along with its variants for other numbers of arguments) wraps another function and has two jobs. First, it asserts that the wrapped function is called a certain number of times, and will cause the test to fail if it's called too often; second, it keeps the test from finishing until the function is called the requisite number of times.
import "dart:async";
import "package:test/test.dart";
void main() {
test("Stream.fromIterable() emits the values in the iterable", () {
var stream = new Stream.fromIterable([1, 2, 3]);
stream.listen(expectAsync1((number) {
expect(number, inInclusiveRange(1, 3));
}, count: 3));
});
}
By default, the test runner will generate its own empty HTML file for browser tests. However, tests that need custom HTML can create their own files. These files have three requirements:
- They must have the same name as the test, with `.dart` replaced by `.html`.
- They must contain a `link` tag with `rel="x-dart-test"` and an `href` attribute pointing to the test script.
- They must contain `<script src="packages/test/dart.js"></script>`.
For example, if you had a test called `custom_html_test.dart`, you might write the following HTML file:
<!doctype html>
<!-- custom_html_test.html -->
<html>
<head>
<title>Custom HTML Test</title>
<link rel="x-dart-test" href="custom_html_test.dart">
<script src="packages/test/dart.js"></script>
</head>
<body>
<!-- ... -->
</body>
</html>
If a test, group, or entire suite isn't working yet and you just want it to stop complaining, you can mark it as "skipped". The test or tests won't be run, and, if you supply a reason why, that reason will be printed. In general, skipping tests indicates that they should run but are temporarily not working. If they're fundamentally incompatible with a platform, `@TestOn`/`testOn` should be used instead.
To skip a test suite, put a `@Skip` annotation at the top of the file:
@Skip("currently failing (see issue 1234)")
import "package:test/test.dart";
void main() {
// ...
}
The string you pass should describe why the test is skipped. You don't have to include it, but it's a good idea to document why the test isn't running.
Groups and individual tests can be skipped by passing the `skip` parameter. This can be either `true` or a String describing why the test is skipped. For example:
import "package:test/test.dart";
void main() {
group("complicated algorithm tests", () {
// ...
}, skip: "the algorithm isn't quite right");
test("error-checking test", () {
// ...
}, skip: "TODO: add error-checking.");
}
By default, tests will time out after 30 seconds of inactivity. However, this can be configured on a per-test, -group, or -suite basis. To change the timeout for a test suite, put a `@Timeout` annotation at the top of the file:
@Timeout(const Duration(seconds: 45))
import "package:test/test.dart";
void main() {
// ...
}
In addition to setting an absolute timeout, you can set the timeout relative to the default using `@Timeout.factor`. For example, `@Timeout.factor(1.5)` will set the timeout to one and a half times as long as the default, or 45 seconds.
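As a suite-level annotation, that looks just like the absolute timeout above (a minimal sketch; the 1.5 factor is only illustrative):

```dart
// Give every test in this file one and a half times the default timeout.
@Timeout.factor(1.5)

import "package:test/test.dart";

void main() {
  // ...
}
```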
Timeouts can be set for tests and groups using the `timeout` parameter. This parameter takes a `Timeout` object just like the annotation. For example:
import "package:test/test.dart";
void main() {
group("slow tests", () {
// ...
test("even slower test", () {
// ...
}, timeout: new Timeout.factor(2));
}, timeout: new Timeout(new Duration(minutes: 1)));
}
Nested timeouts apply in order from outermost to innermost. That means that "even slower test" will take two minutes to time out, since it multiplies the group's timeout by 2.
Sometimes a test may need to be configured differently for different platforms. Windows might run your code slower than other platforms, or your DOM manipulation might not work right on Safari yet. For these cases, you can use the `@OnPlatform` annotation and the `onPlatform` named parameter to `test()` and `group()`. For example:
@OnPlatform(const {
// Give Windows some extra wiggle-room before timing out.
"windows": const Timeout.factor(2)
})
import "package:test/test.dart";
void main() {
test("do a thing", () {
// ...
}, onPlatform: {
"safari": new Skip("Safari is currently broken (see #1234)")
});
}
Both the annotation and the parameter take a map. The map's keys are platform selectors which describe the platforms for which the specialized configuration applies. Its values are instances of some of the same annotation classes that can be used for a suite: `Skip` and `Timeout`. A value can also be a list of these values.
If multiple platforms match, the configuration is applied in order from first to last, just as it would be in nested groups. This means that for configuration like duration-based timeouts, the last matching value wins.
You can also set up global platform-specific configuration using the package configuration file.
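That configuration might look something like the following (a sketch only; see the configuration documentation for the exact keys and format):

```yaml
# dart_test.yaml
on_platform:
  # Give Windows runs extra time before timing out.
  windows:
    timeout: 2x
  # Skip Safari until it's supported.
  safari:
    skip: "Safari is currently broken (see #1234)"
```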
Tags are short strings that you can associate with tests, groups, and suites. They don't have any built-in meaning, but they're very useful nonetheless: you can associate your own custom configuration with them, or you can use them to easily filter tests so you only run the ones you need to.
Tags are defined using the `@Tags` annotation for suites and the `tags` named parameter to `test()` and `group()`. For example:
@Tags(const ["browser"])
import "package:test/test.dart";
void main() {
test("successfully launches Chrome", () {
// ...
}, tags: "chrome");
test("launches two browsers at once", () {
// ...
}, tags: ["chrome", "firefox"]);
}
If the test runner encounters a tag that wasn't declared in the package configuration file, it'll print a warning, so be sure to include all your tags there. You can also use the file to provide default configuration for tags, like giving all `browser` tests twice as much time before they time out.
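In `dart_test.yaml`, that might look something like this (a sketch; the per-tag options are described in the configuration documentation):

```yaml
# dart_test.yaml
tags:
  # Declare the tags used in this package so the runner doesn't warn about them.
  chrome:
  firefox:
  # Give all browser-tagged tests twice the default timeout.
  browser:
    timeout: 2x
```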
Tests can be filtered based on their tags by passing command line flags. The `--tags` or `-t` flag will cause the test runner to only run tests with the given tags, and the `--exclude-tags` or `-x` flag will cause it to only run tests without the given tags. These flags also support boolean selector syntax. For example, you can pass `--tags "(chrome || firefox) && !slow"` to select quick Chrome or Firefox tests.
Note that tags must be valid Dart identifiers, although they may also contain hyphens.
For configuration that applies across multiple files, or even the entire package, `test` supports a configuration file called `dart_test.yaml`. At its simplest, this file can contain the same sort of configuration that can be passed as command-line arguments:
# This package's tests are very slow. Double the default timeout.
timeout: 2x
# This is a browser-only package, so test on content shell by default.
platforms: [content-shell]
The configuration file sets new defaults. These defaults can still be overridden by command-line arguments, just like the built-in defaults. In the example above, you could pass `--platform chrome` to run on Chrome instead of the Dartium content shell.
A configuration file can do much more than just set global defaults. See the full documentation for more details.
Tests can be debugged interactively using platforms' built-in development tools. Tests running on browsers can use those browsers' development consoles to inspect the document, set breakpoints, and step through code. Those running on the Dart VM or Dartium can also use the Dart Observatory's debugger.
The first step when debugging is to pass the `--pause-after-load` flag to the test runner. This pauses the runner after each test suite has loaded, so that you have time to open the development tools and set breakpoints. For the Dart VM and Dartium, the test runner will print the Observatory URL for you. For PhantomJS, it will print the remote debugger URL. For content shell, it'll print both!
Once you've set breakpoints, you can press Enter in your terminal to start the tests running. Some platforms also have shortcuts for this: you can start browser tests by clicking on the "play" button in the middle of the window, and you can start VM tests by unpausing the Observatory. When you hit a breakpoint, the runner will open its own debugging console in the terminal that controls how tests are run. You can type "restart" there to re-run your test as many times as you need to figure out what's going on.
Normally, browser tests are run in hidden iframes. However, when debugging, the iframe for the current test suite is expanded to fill the browser window so you can see and interact with any HTML it renders. Note that the Dart animation may still be visible behind the iframe; to hide it, just add a `background-color` to the page's HTML.
Code that's written for the browser often needs to talk to some kind of server. Maybe you're testing the HTML served by your app, or maybe you're writing a library that communicates over WebSockets. We call tests that run code on both the browser and the VM hybrid tests.
Hybrid tests use one of two functions: `spawnHybridCode()` and `spawnHybridUri()`. Both of these spawn Dart VM isolates that can import `dart:io` and other VM-only libraries. The only difference is where the code for the isolate comes from: `spawnHybridCode()` takes a chunk of actual Dart code, whereas `spawnHybridUri()` takes a URL. They both return a `StreamChannel` that communicates with the hybrid isolate. For example:
// ## test/web_socket_server.dart
// The library loaded by spawnHybridUri() can import any packages that your
// package depends on, including those that only work on the VM.
import "package:shelf/shelf_io.dart" as io;
import "package:shelf_web_socket/shelf_web_socket.dart";
import "package:stream_channel/stream_channel.dart";
// Once the hybrid isolate starts, it will call the special function
// hybridMain() with a StreamChannel that's connected to the channel
// returned by spawnHybridUri() or spawnHybridCode().
hybridMain(StreamChannel channel) async {
// Start a WebSocket server that just sends "hello!" to its clients.
var server = await io.serve(webSocketHandler((webSocket) {
webSocket.sink.add("hello!");
}), 'localhost', 0);
// Send the port number of the WebSocket server to the browser test, so
// it knows what to connect to.
channel.sink.add(server.port);
}
// ## test/web_socket_test.dart
@TestOn("browser")
import "dart:html";
import "package:test/test.dart";
void main() {
test("connects to a server-side WebSocket", () async {
// Each spawnHybrid function returns a StreamChannel that communicates with
// the hybrid isolate. You can close this channel to kill the isolate.
var channel = spawnHybridUri("web_socket_server.dart");
// Get the port for the WebSocket server from the hybrid isolate.
var port = await channel.stream.first;
var socket = new WebSocket('ws://localhost:$port');
var message = await socket.onMessage.first;
expect(message.data, equals("hello!"));
});
}
Note: If you write hybrid tests, be sure to add a dependency on the `stream_channel` package, since you're using its API!
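In a pubspec, that might look like the following (a sketch; the `any` constraints are placeholders, and `dev_dependencies` is appropriate when the package is only used by your tests):

```yaml
# pubspec.yaml
dev_dependencies:
  test: any
  stream_channel: any
```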
Packages using the `barback` transformer system may need to test code that's created or modified using transformers. The test runner handles this using the `--pub-serve` option, which tells it to load the test code from a `pub serve` instance rather than from the filesystem.
Before using the `--pub-serve` option, add the `test/pub_serve` transformer to your `pubspec.yaml`. This transformer adds the necessary bootstrapping code that allows the test runner to load your tests properly:
transformers:
- test/pub_serve:
$include: test/**_test{.*,}.dart
Note that if you're using the test runner along with `polymer`, you have to make sure that the `test/pub_serve` transformer comes after the `polymer` transformer:
transformers:
- polymer
- test/pub_serve:
$include: test/**_test{.*,}.dart
Then, start up `pub serve`. Make sure to pay attention to which port it's using to serve your `test/` directory:
$ pub serve
Loading source assets...
Loading test/pub_serve transformers...
Serving my_app web on http://localhost:8080
Serving my_app test on http://localhost:8081
Build completed successfully
In this case, the port is `8081`. In another terminal, pass this port to `--pub-serve` and otherwise invoke `pub run test` as normal:
$ pub run test --pub-serve=8081 -p chrome
"pub serve" is compiling test/my_app_test.dart...
"pub serve" is compiling test/utils_test.dart...
00:00 +42: All tests passed!
Check out the API docs for detailed information about all the functions available to tests.
The test runner also supports a machine-readable JSON-based reporter. This reporter allows the test runner to be wrapped and its progress presented in custom ways (for example, in an IDE). See the protocol documentation for more details.