
Types of tests

There are two types of tests in the Jupyter extension:

  • Unit
  • Integration

Unit tests live in files under the src\tests folder ending with the extension *.unit.test.ts. Integration tests live in files under the same src\tests folder ending with the extension *.vscode.*.test.ts.

Unit tests are generally written to exercise stateless, complicated internal logic; we don't unit test everything. Integration tests, on the other hand, cover end-to-end scenarios, and we attempt to have an integration test for every use case we support.

How to write a new unit test

Writing unit tests is pretty straightforward, but there are some things to consider.

Unit test file

The unit test for a particular class/file should have the same name as the file under test but end with .unit.test.ts. For example, a (hypothetical) fooClass.ts would be tested by fooClass.unit.test.ts. This naming ensures the test is picked up automatically when the unit tests run.

Test structure

They generally follow a pattern like so:

import { assert } from 'chai';
import { instance, mock, when } from 'ts-mockito';

suite('My new unit test', () => {
    let foo: IFoo;

    setup(() => {
        // Set up mocks
        foo = mock(FooClass); // Using ts-mockito to mock
        when(foo.bar).thenReturn(true);
    });

    test('Test baz', async () => {
        // Pass the mock's instance (not the mock itself) to the class under test
        const baz = new Baz(instance(foo));
        assert.ok(baz.bar, 'Bar is not correct');
    });
});

We mostly use ts-mockito to generate mock objects and sinon to stub out method calls.
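
For example, here is a minimal sketch of stubbing a method with sinon. The FooClass module and its bar() method are hypothetical placeholders, not real extension code:

    import * as sinon from 'sinon';
    import { assert } from 'chai';
    import { FooClass } from './fooClass'; // hypothetical module under test

    suite('Stubbing with sinon', () => {
        teardown(() => {
            // Restore the real implementations so other tests are unaffected
            sinon.restore();
        });

        test('bar is stubbed', () => {
            // Replace the real bar() with a stub that returns a canned value
            sinon.stub(FooClass.prototype, 'bar').returns(true);

            assert.ok(new FooClass().bar(), 'Expected the stubbed bar() to return true');
        });
    });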

Try to test the public interface

In unit tests we try to follow the pattern of testing the public output of a particular class or function. Embedding the internal details of a function or class into a unit test means the test ends up being a copy of the implementation itself, which makes it hard to maintain.
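
As an illustrative sketch (the class here is hypothetical, not from the extension), prefer asserting on what the public API returns rather than on internal state:

    import { assert } from 'chai';

    // Hypothetical class under test
    class Adder {
        private history: number[] = [];
        public add(a: number, b: number): number {
            const result = a + b;
            this.history.push(result);
            return result;
        }
    }

    suite('Adder', () => {
        test('add returns the sum', () => {
            // Good: assert on the public return value.
            assert.strictEqual(new Adder().add(2, 3), 5);
            // Avoid: asserting on the private `history` array would just
            // mirror the implementation and break when the internals change.
        });
    });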

Debugging a unit test failure

When a unit test fails (or you're just starting to write one), you can debug the test.

From the debug dropdown in VS Code, pick this launch.json entry:

(screenshot: the launch.json entry used to debug unit tests)

Then click the gear icon and edit the --grep parameter to match the name of your test.

You should be able to set a breakpoint directly in the test code.

How to write a new integration test

Integration tests look very similar to unit tests but just test higher level functionality:

Example test - widget loads

    test('Can run a widget notebook (webview-test)', async function () {
        const { notebook, editor } = await openNotebook(testWidgetNb);
        await waitForKernelToGetAutoSelected(editor, PYTHON_LANGUAGE);
        const cell = notebook.cellAt(0);

        // This flag will be resolved when the widget loads
        const flag = createDeferred<boolean>();
        flagForWebviewLoad(flag, vscodeNotebook.activeNotebookEditor?.notebook!);

        // Execute cell. It should load and render the widget
        await runCell(cell);
        await waitForCellExecutionToComplete(cell);

        // Wait for the flag to be set as it may take a while
        await waitForCondition(
            () => flag.promise,
            defaultNotebookTestTimeout,
            'Widget did not load successfully during execution'
        );
    });

High level helpers

Integration tests tend to look like unit tests, but they rely on higher-level helper functions that roll up common operations to do a lot of their work.

Some of the more useful helpers are described below (a short usage sketch follows the list):

  • waitForCondition - a generic way to wait for something to become true. Use this instead of asserting immediately, because VS Code is highly asynchronous and state changes may not have propagated yet.
  • waitForCellExecutionToComplete - waits for a cell to finish executing without assuming the UI is updated the moment execution is done.
  • waitForKernelToGetAutoSelected - waits for a kernel to be auto-selected for the notebook.
  • ITestWebViewHost - a custom webview host that allows pulling back the rendered HTML, making it possible to test changes to a webview.
  • Common Test API - a set of functions for things like getting an object from the DI container, capturing a screenshot during a failing test, or starting a remote Jupyter server.
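
As a hedged sketch of the waitForCondition pattern (reusing the helpers and constants from the widget example above; the exact assertion is illustrative):

    // Wait for the first cell to produce output after running it, instead of
    // asserting immediately: VS Code updates notebook state asynchronously.
    await runCell(cell);
    await waitForCellExecutionToComplete(cell);
    await waitForCondition(
        async () => cell.outputs.length > 0,
        defaultNotebookTestTimeout,
        'Cell never produced any output'
    );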

How to debug an integration test locally

How to debug an integration test during CI

  • What a failure looks like
  • Data we capture
  • Looking at logs
  • Adding new log data
  • Looking at the test notebook