Enable browser tests (#1248)
Andres Martinez Gotor authored Oct 28, 2019
1 parent 0aeb411 commit 60e29a1
Showing 17 changed files with 4,276 additions and 1 deletion.
14 changes: 13 additions & 1 deletion .circleci/config.yml
@@ -154,7 +154,13 @@ run_e2e_tests: &run_e2e_tests
DEV_TAG=$(latestReleaseTag)
IMG_MODIFIER=""
fi
./script/e2e-test.sh $DEV_TAG $IMG_MODIFIER
if ./script/e2e-test.sh $DEV_TAG $IMG_MODIFIER; then
# Test success
echo "export TEST_RESULT=$?" >> $BASH_ENV
else
# Test failed
echo "export TEST_RESULT=$?" >> $BASH_ENV
fi
gke_test: &gke_test
docker:
- image: circleci/golang:1.11
@@ -193,6 +199,9 @@ gke_test: &gke_test
# Install helm
- <<: *install_helm_cli
- <<: *run_e2e_tests
- store_artifacts:
path: integration/reports
- run: exit $TEST_RESULT
- run:
name: Cleanup GKE Cluster
command: gcloud container clusters delete --async --zone $GKE_ZONE $ESCAPED_GKE_CLUSTER
@@ -266,6 +275,9 @@ jobs:
at: /tmp/images
- run: for image in /tmp/images/*; do kind load image-archive "$image"; done
- <<: *run_e2e_tests
- store_artifacts:
path: integration/reports
- run: exit $TEST_RESULT
GKE_1_13_MASTER:
<<: *gke_test
environment:
63 changes: 63 additions & 0 deletions docs/developer/end-to-end-tests.md
@@ -0,0 +1,63 @@
# End-to-end tests in the project

In every CI build, a set of end-to-end tests is run to verify, as much as possible, that the changes don't introduce regressions from a user's point of view. The current end-to-end tests are executed in two steps (or categories):

- Chart tests
- Browser tests

These tests are executed by the script [script/e2e-test.sh](../../script/e2e-test.sh). This script:

1. Installs Tiller using a certificate
2. Installs Kubeapps using the images built during the CI process
3. Waits for the different deployments to be ready
4. Executes the Helm tests (see the section below for more details)
5. Executes the web browser tests (see the section below for more details)

If all of the above succeed, control is returned to the CI with the proper exit code.

## Chart tests

Chart tests in the project are defined using the testing functionality [provided by Helm](https://helm.sh/docs/developing_charts/#chart-tests). The goal of these tests is to verify that the chart has been successfully deployed and that the basic functionality of each of the deployed microservices works as expected. Specific functionality tests should be covered by either unit tests or browser tests if needed.

You can find the current chart tests in the [chart folder](../../chart/kubeapps/templates/tests).
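For reference, a Helm (v2) chart test is just a pod template carrying the `helm.sh/hook: test-success` annotation; Helm runs the pod on `helm test` and the test passes if the pod exits successfully. A minimal hypothetical sketch (the resource names, service name, and image below are illustrative, not the project's actual tests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connectivity-test"
  annotations:
    # Marks this pod as a chart test that must succeed
    "helm.sh/hook": test-success
spec:
  containers:
    - name: check-dashboard
      image: bitnami/minideb:latest
      # Pod succeeds only if the HTTP check against the service succeeds
      command:
        - sh
        - -c
        - "install_packages curl && curl --fail http://{{ .Release.Name }}-kubeapps:80"
  restartPolicy: Never
```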

## Web Browser tests

Apart from the basic functionality tests run by the chart tests, this project contains web browser tests that you can find in the [integration](../../integration) folder.

These tests are based on [Puppeteer](https://github.com/GoogleChrome/puppeteer). Puppeteer is a Node.js library that provides a high-level API to control Chrome or Chromium (in headless mode by default).

On top of Puppeteer we use the `jest-puppeteer` module, which allows us to execute these tests using the same syntax as the rest of the unit tests in the project.

The `integration` folder mentioned above is self-contained: the dependencies required to run the browser tests are not included in the default `package.json`. That folder also contains a `Dockerfile` used to generate an image with all the dependencies needed to run the browser tests.

It's possible to run these tests either locally or in a container environment.

### Running browser tests locally

To run the tests locally, you just need to install the required dependencies and set the required environment variables:

```bash
cd integration
yarn install
INTEGRATION_ENTRYPOINT=http://kubeapps.local LOGIN_TOKEN=foo yarn start
```

If anything goes wrong, apart from the logs of the test, you can find a screenshot of the failed test in the `reports/screenshots` folder.

### Running browser tests in a pod

Since the CI environment doesn't have the required dependencies, and to provide a reproducible environment, it's possible to run the browser tests in a Kubernetes pod. To do so, you can spin up an instance running the image `kubeapps/integration-tests`. This image contains all the required dependencies, and it waits forever so you can execute commands within it. The goal of this setup is that you can copy the latest tests to the image, run the tests, and extract the screenshots in case of failure:

```bash
cd integration
# Deploy the executor pod
kubectl apply -f manifests/executor.yaml
pod=$(kubectl get po -l run=integration -o jsonpath="{.items[0].metadata.name}")
# Copy latest tests
kubectl cp ./use-cases ${pod}:/app/
# Run tests
kubectl exec -it ${pod} -- /bin/sh -c 'INTEGRATION_ENTRYPOINT=http://kubeapps.kubeapps LOGIN_TOKEN=foo yarn start'
# If the tests fail, get report screenshot
kubectl cp ${pod}:/app/reports ./reports
```
13 changes: 13 additions & 0 deletions integration/.eslintrc
@@ -0,0 +1,13 @@
{
"env": {
"jest": true
},
"globals": {
"page": true,
"browser": true,
"context": true,
"jestPuppeteer": true,
"ENDPOINT": true,
"getUrl": true
}
}
3 changes: 3 additions & 0 deletions integration/.gitignore
@@ -0,0 +1,3 @@
node_modules
# Reports
reports/screenshots
6 changes: 6 additions & 0 deletions integration/Dockerfile
@@ -0,0 +1,6 @@
FROM bitnami/node:12.10.0
RUN install_packages gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

ADD . /app/
RUN yarn install
CMD [ "yarn", "start" ]
4 changes: 4 additions & 0 deletions integration/Makefile
@@ -0,0 +1,4 @@
default: build

build:
docker build -t kubeapps/integration-tests:latest .
9 changes: 9 additions & 0 deletions integration/args.js
@@ -0,0 +1,9 @@
module.exports = {
// Endpoint is required!
endpoint: process.env.INTEGRATION_ENTRYPOINT,
waitTimeout: process.env.INTEGRATION_WAIT_TIMEOUT || 60000,
headless: process.env.INTEGRATION_HEADLESS != "false",
retryAttempts: process.env.INTEGRATION_RETRY_ATTEMPTS || 0,
screenshotsFolder:
process.env.INTEGRATION_SCREENSHOTS_FOLDER || "reports/screenshots"
};
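The pattern in `args.js` above — environment variables with fallback defaults — can be sketched as a small standalone function. This is an illustrative helper, not the actual module; unlike the original (which keeps the raw string when an override is set), this sketch also coerces numeric values:

```javascript
// Sketch of the env-var parsing used by args.js (function name is hypothetical)
function parseArgs(env) {
  return {
    // No fallback: the endpoint is mandatory and validated elsewhere
    endpoint: env.INTEGRATION_ENTRYPOINT,
    // Numeric options fall back to defaults when unset or unparsable
    waitTimeout: parseInt(env.INTEGRATION_WAIT_TIMEOUT, 10) || 60000,
    retryAttempts: parseInt(env.INTEGRATION_RETRY_ATTEMPTS, 10) || 0,
    // Headless unless explicitly disabled with the string "false"
    headless: env.INTEGRATION_HEADLESS !== "false",
    screenshotsFolder: env.INTEGRATION_SCREENSHOTS_FOLDER || "reports/screenshots"
  };
}

const defaults = parseArgs({ INTEGRATION_ENTRYPOINT: "http://kubeapps.local" });
console.log(defaults.waitTimeout, defaults.headless); // 60000 true
```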
8 changes: 8 additions & 0 deletions integration/jest-puppeteer.config.js
@@ -0,0 +1,8 @@
const { headless } = require("./args");

module.exports = {
launch: {
headless,
args: ["--no-sandbox"]
}
};
9 changes: 9 additions & 0 deletions integration/jest.config.js
@@ -0,0 +1,9 @@
module.exports = {
rootDir: "./",
testMatch: ["<rootDir>/use-cases/*.js"],
globalSetup: "jest-environment-puppeteer/setup",
globalTeardown: "jest-environment-puppeteer/teardown",
testEnvironment: "./jest.environment.js",
testRunner: "jest-circus/runner",
setupFilesAfterEnv: ["./jest.setup.js"]
};
76 changes: 76 additions & 0 deletions integration/jest.environment.js
@@ -0,0 +1,76 @@
const path = require("path");
const fs = require("fs");
const waitOn = require("wait-on");
const PuppeteerEnvironment = require("jest-environment-puppeteer");
require("jest-circus");

const {
retryAttempts,
endpoint,
waitTimeout,
screenshotsFolder
} = require("./args");

// Create an environment to store a screenshot of the page if the current test
// failed.
class ScreenshotOnFailureEnvironment extends PuppeteerEnvironment {
async generateScreenshotsFolder() {
try {
// Create the report folder if it's not there
if (!fs.existsSync(screenshotsFolder)) {
await fs.promises.mkdir(screenshotsFolder, { recursive: true });
}
} catch (err) {
console.error(`The ${screenshotsFolder} folder couldn't be created`);
process.exit(1);
}
}

async waitOnService() {
try {
// Check the server is up before running the test suite
console.log(
`Waiting for ${endpoint} to be ready before running the tests (${waitTimeout /
1000}s)`
);
await waitOn({
resources: [endpoint],
timeout: waitTimeout
});
console.log(`${endpoint} is ready!`);
} catch (err) {
console.error(`The ${endpoint} URL is not accessible due to:`);
console.error(err);
process.exit(1);
}
}

async setup() {
await this.generateScreenshotsFolder();
await this.waitOnService();
await super.setup();
}

async teardown() {
// Wait a few seconds before tearing down the page so we
// have time to take screenshots and handle other events
await this.global.page.waitFor(2000);
await super.teardown();
}

async handleTestEvent(event, state) {
if (event.name == "test_fn_failure") {
if (state.currentlyRunningTest.invocations > retryAttempts) {
const testName = state.currentlyRunningTest.name
.toLowerCase()
.replace(/ /g, "-");
// Take a screenshot at the point of failure
await this.global.page.screenshot({
path: path.join(__dirname, `${screenshotsFolder}/${testName}.png`)
});
}
}
}
}

module.exports = ScreenshotOnFailureEnvironment;
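The screenshot naming in `handleTestEvent` above (lowercasing the test name and replacing spaces with dashes) can be exercised in isolation; the helper name here is illustrative:

```javascript
// Mirrors the test-name sanitization used for screenshot filenames
function screenshotName(testName) {
  return testName.toLowerCase().replace(/ /g, "-");
}

console.log(screenshotName("Deploys an application with the values by default"));
// deploys-an-application-with-the-values-by-default
```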
14 changes: 14 additions & 0 deletions integration/jest.setup.js
@@ -0,0 +1,14 @@
require("expect-puppeteer");
const { endpoint } = require("./args");

// endpoint argument is mandatory
if (endpoint == null || endpoint == "") {
console.error("The INTEGRATION_ENTRYPOINT environment variable is mandatory");
process.exit(1);
}

// Initialize globals
global.endpoint = endpoint;

// Helper to get the proper endpoint
global.getUrl = path => `${global.endpoint}${path}`;
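The `getUrl` global defined above lets test code build absolute URLs from paths. A standalone sketch of its behavior (the endpoint value is a hypothetical example of `INTEGRATION_ENTRYPOINT`):

```javascript
// Standalone version of the getUrl helper from jest.setup.js
const endpoint = "http://kubeapps.local"; // example INTEGRATION_ENTRYPOINT value
const getUrl = path => `${endpoint}${path}`;

console.log(getUrl("/#/login")); // http://kubeapps.local/#/login
```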
28 changes: 28 additions & 0 deletions integration/manifests/executor.yaml
@@ -0,0 +1,28 @@
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: integration
name: integration
spec:
replicas: 1
selector:
matchLabels:
run: integration
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: integration
spec:
containers:
- args:
- tail
- -f
- /dev/null
image: kubeapps/integration-tests:latest
name: integration
resources: {}
status: {}
19 changes: 19 additions & 0 deletions integration/package.json
@@ -0,0 +1,19 @@
{
"name": "kubeapps-integration",
"version": "1.0.0",
"description": "Kubeapps integration tests",
"main": "index.js",
"private": true,
"scripts": {
"start": "jest",
"start:window": "HEADLESS=false jest"
},
"dependencies": {
"expect-puppeteer": "^4.3.0",
"jest": "^24.9.0",
"jest-circus": "^24.9.0",
"jest-puppeteer": "^4.3.0",
"puppeteer": "^1.20.0",
"wait-on": "^3.3.0"
}
}
Empty file added integration/reports/.notempty
Empty file.
22 changes: 22 additions & 0 deletions integration/use-cases/default-deployment.js
@@ -0,0 +1,22 @@
jest.setTimeout(120000);

test("Deploys an application with the values by default", async () => {
page.setDefaultTimeout(2000);
await page.goto(getUrl("/#/login"));

await expect(page).toFillForm("form", {
token: process.env.ADMIN_TOKEN
});

await expect(page).toClick("button", { text: "Login" });

await expect(page).toClick("a", { text: "Catalog" });

await expect(page).toClick("a", { text: "aerospike", timeout: 60000 });

await expect(page).toClick("button", { text: "Deploy" });

await expect(page).toClick("button", { text: "Submit" });

await expect(page).toMatch("Ready", { timeout: 60000 });
});