
Types of tests

There are two types of tests in the Jupyter extension:

Unit tests are in files under src\test ending with the extension *.unit.test.ts. Integration tests are in files under the same src\test folder whose names contain .vscode. and .test. (for example kernelProcess.vscode.test.node.ts or interactiveWindow.vscode.common.test.ts).

Unit tests are generally written to test complicated, stateless internal logic. We don't aim to unit test everything.
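For example, a minimal unit test might look like the sketch below. The file path, suite name, and splitLines helper are made up purely for illustration, and the use of chai's assert is an assumption; only the suite/test shape mirrors the real tests.

    // A hypothetical file: src\test\common\stringUtils.unit.test.ts
    import { assert } from 'chai';
    // splitLines is a made-up helper used only to illustrate the shape of a unit test.
    import { splitLines } from '../../platform/common/stringUtils';

    suite('String utilities', () => {
        test('Splits on both LF and CRLF', () => {
            // Unit tests exercise stateless logic directly; no VS Code instance is involved.
            assert.deepEqual(splitLines('a\nb\r\nc'), ['a', 'b', 'c']);
        });
    });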

Integration tests cover end-to-end scenarios; we attempt to have an integration test for every use case we support.

How to write a new integration test

Integration tests look very similar to unit tests; they just exercise higher-level functionality:

Example test - widget loads

    test('Can run a widget notebook (webview-test)', async function () {
        const { notebook, editor } = await openNotebook(testWidgetNb);
        await waitForKernelToGetAutoSelected(editor, PYTHON_LANGUAGE);
        const cell = notebook.cellAt(0);

        // This flag will be resolved when the widget loads
        const flag = createDeferred<boolean>();
        flagForWebviewLoad(flag, vscodeNotebook.activeNotebookEditor?.notebook!);

        // Execute cell. It should load and render the widget
        await runCell(cell);
        await waitForCellExecutionToComplete(cell);

        // Wait for the flag to be set as it may take a while
        await waitForCondition(
            () => flag.promise,
            defaultNotebookTestTimeout,
            'Widget did not load successfully during execution'
        );
    });

High level helpers

Integration tests tend to look like unit tests, but they use higher-level helper functions to do most of their work.

Some of the more useful ones are described below:

  • waitForCondition - a generic way to wait for something to become true. Generally use this instead of asserting that something happened immediately, because VS Code is highly asynchronous.
  • waitForCellExecutionToComplete - waits for a cell to finish executing without assuming the UI is updated the moment execution completes.
  • waitForKernelToGetAutoSelected - waits for a kernel to be auto-selected for the notebook.
  • ITestWebViewHost - a custom webview host that allows retrieving the rendered HTML, which makes changes to a webview testable.
  • Common Test API - a set of functions for things like getting an object from the DI container, capturing a screenshot during a failing test, or starting a remote Jupyter server.
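A common pattern is to pull services out of the DI container in suiteSetup and then lean on these helpers in the test body. The sketch below is only an approximation; the import paths, the initialize function, and the IVSCodeNotebook identifier are assumptions that should be checked against the real test utilities under src\test.

    // Import paths and names below are assumptions for illustration only.
    import { initialize } from '../initialize.node';
    import { waitForCondition } from '../common.node';
    import { IVSCodeNotebook } from '../../platform/common/application/types';

    suite('Sample notebook suite', () => {
        let vscodeNotebook: IVSCodeNotebook;

        suiteSetup(async () => {
            // The Common Test API exposes the extension's DI container.
            const api = await initialize();
            vscodeNotebook = api.serviceContainer.get<IVSCodeNotebook>(IVSCodeNotebook);
        });

        test('Active notebook editor becomes available', async () => {
            // Poll rather than asserting immediately; VS Code updates its state asynchronously.
            await waitForCondition(
                async () => vscodeNotebook.activeNotebookEditor !== undefined,
                30_000, // timeout in milliseconds
                'No active notebook editor'
            );
        });
    });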

Integration test file name

Like unit tests, integration test file names follow a convention: they contain .vscode. and .test. somewhere in the name, but there is more to the name than that.

Here's an example:

src\test\datascience\kernelProcess.vscode.test.node.ts

This test is:

  • For testing kernel process creation/destruction, etc.
  • an integration test (has .vscode. and .test. in its name)
  • a node-only test (ends with .node.ts)

Another example:

src\test\datascience\interactiveWindow.vscode.common.test.ts

This test is:

  • For testing the interactive window
  • an integration test (has .vscode. and .test. in its name)
  • works in both web and node (has .common.test. in its name)
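One practical consequence of the suffix (an assumption drawn from the convention above, sketched with a made-up file name): a .node.ts test can use Node built-ins directly, whereas a .common.test.ts file should avoid them because it also runs in the web build.

    // src\test\datascience\example.vscode.test.node.ts (hypothetical file name)
    import * as fs from 'fs';
    import { assert } from 'chai';

    suite('Example (node only)', () => {
        test('Can touch the filesystem', async () => {
            // Safe here because this file only runs under Node; a *.vscode.common.test.ts
            // file should not import 'fs' since it also runs in the browser.
            assert.isTrue(fs.existsSync(__dirname));
        });
    });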

How to debug an integration test locally

Integration tests are slightly more complicated to debug. They usually make assumptions about your system in order to run.

Setup

To set up for integration tests you'll need:

  • The Python extension installed into your copy of VS Code
  • A Python environment with Jupyter installed
  • 3 Python venvs (created with this file; see the setupTestEnvs.cmd script referenced in the launch configuration below)
  • All of the test files built (the Compile build task will do this)

Start Debugging

Pick the launch.json entry named Tests (Jupyter+Python Extension installed, *.vscode.test.ts) from the Run and Debug view.


If you edit the json for that entry, you can set up a number of options:

        {
            // Run this first: https://github.com/microsoft/vscode-jupyter/blob/main/src/test/datascience/setupTestEnvs.cmd
            // Then specify either a grep below or mark a test as 'test.only' to run the test that's failing.
            "name": "Tests (Jupyter+Python Extension installed, *.vscode.test.ts)",
            "type": "extensionHost",
            "request": "launch",
            "runtimeExecutable": "${execPath}",
            "args": [
                "${workspaceFolder}/src/test/datascience",
                "--enable-proposed-api",
                "--extensionDevelopmentPath=${workspaceFolder}",
                "--extensionTestsPath=${workspaceFolder}/out/test/index.node.js"
            ],
            "env": {
                "VSC_JUPYTER_FORCE_LOGGING": "1",
                "VSC_JUPYTER_CI_TEST_GREP": "", // Leave as `VSCode Notebook` to run only Notebook tests.
                "VSC_JUPYTER_CI_TEST_INVERT_GREP": "", // Initialize this to invert the grep (exclude tests with value defined in grep).
                "CI_PYTHON_PATH": "", // Update with path to real python interpereter used for testing.
                "VSC_JUPYTER_CI_RUN_NON_PYTHON_NB_TEST": "", // Initialize this to run tests again Julia & other kernels.
                "VSC_JUPYTER_WEBVIEW_TEST_MIDDLEWARE": "true", // Initialize to create the webview test middleware
                "VSC_JUPYTER_LOAD_EXPERIMENTS_FROM_FILE": "true",
                // "TF_BUILD": "", // Set to anything to force full logging
                "TEST_FILES_SUFFIX": "*.vscode.test,*.vscode.common.test",
                "VSC_JUPYTER_REMOTE_NATIVE_TEST": "false", // Change to `true` to run the Native Notebook tests with remote jupyter connections.
                "VSC_JUPYTER_NON_RAW_NATIVE_TEST": "false", // Change to `true` to run the Native Notebook tests with non-raw kernels (i.e. local jupyter server).
                "XVSC_JUPYTER_INSTRUMENT_CODE_FOR_COVERAGE": "1",
                "XVSC_JUPYTER_INSTRUMENT_CODE_FOR_COVERAGE_HTML": "1", //Enable to get full coverage repor (in coverage folder).
                "VSC_JUPYTER_EXPOSE_SVC": "1"
            },
            "sourceMaps": true,
            "outFiles": ["${workspaceFolder}/out/**/*.js", "!${workspaceFolder}/**/node_modules**/*"],
            "preLaunchTask": "Compile",
            "skipFiles": ["<node_internals>/**"],
            "presentation": {
                "group": "2_tests",
                "order": 6
            }
        },

The env settings here map to environment variables read by the test runner.
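Inside the tests these values are simply read from process.env, so suites can also guard themselves on them. A small sketch (the suite is hypothetical; the variable name is taken from the configuration above):

    // Skip this suite unless the remote-Jupyter variant of the run was requested.
    const isRemoteNativeTest = process.env.VSC_JUPYTER_REMOTE_NATIVE_TEST === 'true';

    suite('Remote kernel tests', function () {
        suiteSetup(function () {
            if (!isRemoteNativeTest) {
                // Mocha's this.skip() marks the remaining tests in the suite as skipped.
                this.skip();
            }
        });

        // ...tests that assume a remote Jupyter server go here.
    });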

Troubleshooting debugging

Problem: Test doesn't fail the same way as it did on CI.

Solution: You might not have your environment set up correctly. Try debugging the test setup or the suite setup to see if something fails earlier.

Problem: Test passes some of the time

Solution: Make sure the test waits for the outcomes it asserts on; expected outcomes are often asynchronous (see the sketch after the next solution).

Solution (2): The failure may also depend on a previous test. See which test runs before it and try running that one as well.
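For the first solution, prefer polling over a one-shot assertion. A minimal sketch, reusing the cell and helpers from the widget example near the top of this page:

    // Flaky: the output may not have been rendered yet when this assertion runs.
    // assert.strictEqual(cell.outputs.length, 1);

    // More reliable: poll until the asynchronous UI update has happened, then fail with a clear message.
    await waitForCondition(
        async () => cell.outputs.length === 1,
        defaultNotebookTestTimeout,
        'Cell never produced the expected output'
    );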

How to debug an integration test during CI

  • What a failure looks like
  • Data we capture
  • Looking at logs
  • Adding new log data
  • Looking at the test notebook