feat(integ-runner): test update path when running tests #19915

Merged · 3 commits · Apr 14, 2022

Changes from 1 commit
22 changes: 22 additions & 0 deletions packages/@aws-cdk/integ-runner/README.md
@@ -60,6 +60,8 @@ to be a self contained CDK app. The runner will execute the following for each f
If this is set to `true` then the list of tests provided will be excluded
- `--from-file`
Read the list of tests from this file
- `--disable-update-workflow` (default=`false`)
If this is set to `true` then the [update workflow](#update-workflow) will be disabled

Example:

@@ -150,6 +152,26 @@ Test Results:
Tests: 1 passed, 1 total
```

#### Update Workflow

By default, integration tests are run with the "update workflow" enabled. This can be disabled by using the `--disable-update-workflow` command line option.

If an existing snapshot is being updated, the integration test runner will first deploy the existing snapshot and then perform a stack update
with the new changes. This catches cases where a change deploys cleanly to a fresh stack but breaks when applied as an update to an existing stack.
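
For example, to re-run a failing test with the update workflow disabled (the test file name below is only a placeholder):

```bash
yarn integ-runner integ.lambda.js --update-on-failed --disable-update-workflow
```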

The `integ-runner` will also attempt to warn you if you are making any destructive changes with a message like:

```bash
!!! This test contains destructive changes !!!
Stack: aws-cdk-lambda-1 - Resource: MyLambdaServiceRole4539ECB6 - Impact: WILL_DESTROY
Stack: aws-cdk-lambda-1 - Resource: AliasAliasPermissionAF30F9E8 - Impact: WILL_REPLACE
Stack: aws-cdk-lambda-1 - Resource: AliasFunctionUrlDC6EC566 - Impact: WILL_REPLACE
Stack: aws-cdk-lambda-1 - Resource: Aliasinvokefunctionurl4CA9917B - Impact: WILL_REPLACE
!!! If these destructive changes are necessary, please indicate this on the PR !!!
```

If the destructive changes are expected (and required) then please indicate this on your PR.

### integ.json schema

See [@aws-cdk/cloud-assembly-schema/lib/integ-tests/schema.ts](../cloud-assembly-schema/lib/integ-tests/schema.ts)
52 changes: 44 additions & 8 deletions packages/@aws-cdk/integ-runner/lib/cli.ts
@@ -1,9 +1,10 @@
// Exercise all integ stacks and if they deploy, update the expected synth files
import * as path from 'path';
import * as chalk from 'chalk';
import * as workerpool from 'workerpool';
import * as logger from './logger';
import { IntegrationTests, IntegTestConfig } from './runner/integ-tests';
import { runSnapshotTests, runIntegrationTests, IntegRunnerMetrics } from './workers';
import { runSnapshotTests, runIntegrationTests, IntegRunnerMetrics, IntegTestWorkerConfig, DestructiveChange } from './workers';

// https://github.com/yargs/yargs/issues/1929
// https://github.com/evanw/esbuild/issues/1492
@@ -27,14 +28,16 @@ async function main() {
.options('max-workers', { type: 'number', desc: 'The max number of workerpool workers to use when running integration tests in parallel', default: 16 })
.options('exclude', { type: 'boolean', desc: 'All tests should be run, except for the list of tests provided', default: false })
.options('from-file', { type: 'string', desc: 'Import tests to include or exclude from a file' })
.option('disable-update-workflow', { type: 'boolean', default: false, desc: 'If this is "true" then the stack update workflow will be disabled' })
.argv;

const pool = workerpool.pool(path.join(__dirname, '../lib/workers/extract/index.js'), {
maxWorkers: argv['max-workers'],
});

// list of integration tests that will be executed
const testsToRun: IntegTestConfig[] = [];
const testsToRun: IntegTestWorkerConfig[] = [];
const destructiveChanges: DestructiveChange[] = [];
const testsFromArgs: IntegTestConfig[] = [];
const parallelRegions = arrayFromYargs(argv['parallel-regions']);
const testRegions: string[] = parallelRegions ?? ['us-east-1', 'us-east-2', 'us-west-2'];
@@ -43,7 +46,7 @@ async function main() {
const fromFile: string | undefined = argv['from-file'];
const exclude: boolean = argv.exclude;

let failedSnapshots: IntegTestConfig[] = [];
let failedSnapshots: IntegTestWorkerConfig[] = [];
if (argv['max-workers'] < testRegions.length * (profiles ?? [1]).length) {
logger.warning('You are attempting to run %s tests in parallel, but only have %s workers. Not all of your profiles+regions will be utilized', argv.profiles*argv['parallel-regions'], argv['max-workers']);
}
@@ -65,14 +68,19 @@ async function main() {
testsFromArgs.push(...(await new IntegrationTests(argv.directory).fromCliArgs(argv._.map((x: any) => x.toString()), exclude)));
}

// If `--force` is not used then first validate the snapshots and gather
// the failed snapshot tests. If `--force` is used then we will skip snapshot
// tests and run integration tests for all tests
// always run snapshot tests, but if '--force' is passed then
// run integration tests on all failed tests, not just those that
// failed snapshot tests
failedSnapshots = await runSnapshotTests(pool, testsFromArgs);
for (const failure of failedSnapshots) {
destructiveChanges.push(...failure.destructiveChanges ?? []);
}
if (!argv.force) {
failedSnapshots = await runSnapshotTests(pool, testsFromArgs);
testsToRun.push(...failedSnapshots);
} else {
testsToRun.push(...testsFromArgs);
// if any of the tests failed snapshot tests, keep those results
// and merge with the rest of the tests from args
testsToRun.push(...mergeTests(testsFromArgs, failedSnapshots));
}

// run integration tests if `--update-on-failed` OR `--force` is used
@@ -85,6 +93,7 @@ async function main() {
clean: argv.clean,
dryRun: argv['dry-run'],
verbose: argv.verbose,
updateWorkflow: !argv['disable-update-workflow'],
});
if (!success) {
throw new Error('Some integration tests failed!');
@@ -101,13 +110,28 @@ async function main() {
void pool.terminate();
}

if (destructiveChanges.length > 0) {
printDestructiveChanges(destructiveChanges);
throw new Error('Some changes were destructive!');
}
if (failedSnapshots.length > 0) {
let message = '';
if (!runUpdateOnFailed) {
message = 'To re-run failed tests run: yarn integ-runner --update-on-failed';
}
throw new Error(`Some snapshot tests failed!\n${message}`);
}

}

function printDestructiveChanges(changes: DestructiveChange[]): void {
if (changes.length > 0) {
logger.warning('!!! This test contains %s !!!', chalk.bold('destructive changes'));
changes.forEach(change => {
logger.warning(' Stack: %s - Resource: %s - Impact: %s', change.stackName, change.logicalId, change.impact);
});
logger.warning('!!! If these destructive changes are necessary, please indicate this on the PR !!!');
}
}

function printMetrics(metrics: IntegRunnerMetrics[]): void {
@@ -134,6 +158,18 @@ function arrayFromYargs(xs: string[]): string[] | undefined {
return xs.filter(x => x !== '');
}

/**
* Merge the tests we received from command line arguments with
* tests that failed snapshot tests. The failed snapshot tests have additional
* information that we want to keep, so these should override any matching tests from args
*/
function mergeTests(testFromArgs: IntegTestConfig[], failedSnapshotTests: IntegTestWorkerConfig[]): IntegTestWorkerConfig[] {
const failedTestNames = new Set(failedSnapshotTests.map(test => test.fileName));
const final: IntegTestWorkerConfig[] = failedSnapshotTests;
final.push(...testFromArgs.filter(test => !failedTestNames.has(test.fileName)));
return final;
}

export function cli() {
main().then().catch(err => {
logger.error(err);
@@ -2,6 +2,19 @@ import * as path from 'path';
import { AssemblyManifest, Manifest, ArtifactType, AwsCloudFormationStackProperties, ArtifactManifest, MetadataEntry } from '@aws-cdk/cloud-assembly-schema';
import * as fs from 'fs-extra';

/**
* Trace information for a stack
* map of resource logicalId to trace message
*/
export type StackTrace = Map<string, string>;

/**
* Trace information for an assembly
*
* map of stackId to StackTrace
*/
export type ManifestTrace = Map<string, StackTrace>;
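
// Illustration only (not part of this change): building trace data for one stack,
// reusing the stack and resource names from the README example above; the message
// text is made up.
//
//   const stackTrace: StackTrace = new Map([
//     ['MyLambdaServiceRole4539ECB6', 'destructive change: WILL_DESTROY'],
//   ]);
//   const trace: ManifestTrace = new Map([['aws-cdk-lambda-1', stackTrace]]);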

/**
* Reads a Cloud Assembly manifest
*/
@@ -64,6 +77,17 @@ export class AssemblyManifestReader {
return stacks;
}

/**
* Write trace data to the assembly manifest metadata
*/
public recordTrace(trace: ManifestTrace): void {
const newManifest = {
...this.manifest,
artifacts: this.renderArtifacts(trace),
};
Manifest.saveAssemblyManifest(newManifest, this.manifestFileName);
}

/**
* Clean the manifest of any unnecessary data. Currently that includes
* the metadata trace information since this includes trace information like
@@ -77,26 +101,51 @@ export class AssemblyManifestReader {
Manifest.saveAssemblyManifest(newManifest, this.manifestFileName);
}

private renderArtifactMetadata(metadata: { [id: string]: MetadataEntry[] } | undefined): { [id: string]: MetadataEntry[] } {
private renderArtifactMetadata(artifact: ArtifactManifest, trace?: StackTrace): { [id: string]: MetadataEntry[] } | undefined {
const newMetadata: { [id: string]: MetadataEntry[] } = {};
for (const [metadataId, metadataEntry] of Object.entries(metadata ?? {})) {
if (!artifact.metadata) return artifact.metadata;
for (const [metadataId, metadataEntry] of Object.entries(artifact.metadata ?? {})) {
newMetadata[metadataId] = metadataEntry.map((meta: MetadataEntry) => {
if (meta.type === 'aws:cdk:logicalId' && trace && meta.data) {
const traceData = trace.get(meta.data.toString());
if (traceData) {
trace.delete(meta.data.toString());
return {
type: meta.type,
data: meta.data,
trace: [traceData],
};
}
}
// return metadata without the trace data
return {
type: meta.type,
data: meta.data,
};
});
}
if (trace && trace.size > 0) {
for (const [id, data] of trace.entries()) {
newMetadata[id] = [{
type: 'aws:cdk:logicalId',
data: id,
trace: [data],
}];
}
}
return newMetadata;
}

private renderArtifacts(): { [id: string]: ArtifactManifest } | undefined {
private renderArtifacts(trace?: ManifestTrace): { [id: string]: ArtifactManifest } | undefined {
const newArtifacts: { [id: string]: ArtifactManifest } = {};
for (const [artifactId, artifact] of Object.entries(this.manifest.artifacts ?? {})) {
let stackTrace: StackTrace | undefined = undefined;
if (artifact.type === ArtifactType.AWS_CLOUDFORMATION_STACK && trace) {
stackTrace = trace.get(artifactId);
}
newArtifacts[artifactId] = {
...artifact,
metadata: this.renderArtifactMetadata(artifact.metadata),
metadata: this.renderArtifactMetadata(artifact, stackTrace),
};
}
return newArtifacts;
28 changes: 28 additions & 0 deletions packages/@aws-cdk/integ-runner/lib/runner/private/utils.ts
@@ -0,0 +1,28 @@
// Helper functions for CDK Exec
import { spawnSync } from 'child_process';

/**
* Our own execute function which doesn't use shells and strings.
*/
export function exec(commandLine: string[], options: { cwd?: string, verbose?: boolean, env?: any } = { }): any {
const proc = spawnSync(commandLine[0], commandLine.slice(1), {
stdio: ['ignore', 'pipe', options.verbose ? 'inherit' : 'pipe'], // inherit STDERR in verbose mode
env: {
...process.env,
...options.env,
},
cwd: options.cwd,
});

if (proc.error) { throw proc.error; }
if (proc.status !== 0) {
if (proc.stderr) { // will be 'null' in verbose mode
process.stderr.write(proc.stderr);
}
throw new Error(`Command exited with ${proc.status ? `status ${proc.status}` : `signal ${proc.signal}`}`);
}

const output = proc.stdout.toString('utf-8').trim();

return output;
}
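
A rough usage sketch for this helper (the import path and the command shown are placeholders, not taken from this PR):

```ts
// Illustrative only: run an arbitrary CLI command via the exec helper and capture stdout.
import { exec } from './runner/private/utils';

const output = exec(['cdk', 'synth'], { cwd: 'path/to/integ/test/app', verbose: false });
console.log(output);
```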