The RDMP test plan outlines the questions that should be asked of new changes to help ensure the correct changes are being made at the correct time. These questions are designed to help highlight any steps in the testing process that may have been missed.
If you are looking to release a new version with this change, please look at the Release Testing section below.
RDMP has built-in functionality to generate example data. This functionality has a 'nightmare' mode, which can generate large amounts of data for testing purposes. Nightmare mode can be enabled when creating platform databases by adding the 'Nightmare' flag to the example data settings during platform creation. The amount of data can be increased further with the 'Factor' flag, which multiplies the number of objects created by the specified factor.
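For example, when creating the platform databases from the command line, these options can be supplied at install time. The sketch below is illustrative only: the `install` command and `-e` (example data) switch exist in the RDMP CLI, but the exact syntax of the Nightmare and Factor switches shown here is an assumption and should be confirmed against `rdmp install --help`.

```console
# Hypothetical invocation: create platform databases with example data,
# nightmare mode enabled and the data volume multiplied by 10.
# The --nightmare/--factor switch names are assumptions; verify with --help.
./rdmp install localhost\sqlexpress RDMP_ -e --nightmare --factor 10
```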
If the change adds a database migration, ask yourself the following questions:
- Is the change backwards compatible? (e.g. adding a nullable column usually is; renaming or dropping an existing column is not)
- Have I tested the change with a fresh install of RDMP?
- Have I tested the change through the upgrade path with populated data that my change affects?
- Should this change be included in a patch release, the next minor release, or a major release?
  - Patch releases contain small improvements and should be usable interchangeably with other versions within the same minor version, e.g. v8.1.4 and v8.1.5
  - Minor releases contain new functionality and should be backwards compatible with other versions in the same major version, e.g. v8.1.0 and v8.2.0
  - Major releases contain changes that are not backwards compatible
If the change adds new functionality, ask yourself the following questions:
- Is the functionality usable via the GUI? If not, why not?
- Is the functionality usable via the CLI? If not, why not?
- Has the functionality been covered via unit tests?
- Has the functionality been manually tested?
- Has the happy path been tested?
  - This is the expected path, where users are paying attention and are on their best behaviour
- Has the sad path been tested?
  - This is where the user tries to be as obtuse as possible (a minimal happy/sad path test sketch follows this list)
- Does this change do any data processing? If so, check the performance questions below
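As a minimal illustration of the happy/sad path distinction, the sketch below exercises a hypothetical validation helper. The helper, its rules, and the pytest framework are assumptions chosen for illustration, not RDMP code:

```python
import pytest

def parse_age(value: str) -> int:
    """Hypothetical helper: parse a free-text age field."""
    age = int(value)  # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

def test_happy_path():
    # Well-behaved input, exactly as intended
    assert parse_age("42") == 42

@pytest.mark.parametrize("bad", ["", "forty two", "-1", "999"])
def test_sad_path(bad):
    # Obtuse input: empty, non-numeric, negative, absurd
    with pytest.raises(ValueError):
        parse_age(bad)
```

The sad path cases deliberately cover the kinds of input an inattentive or awkward user might supply.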
If the change adds or amends functionality that processes data, ask yourself the following questions:
- Is the functionality performant?
- Can the space/time complexity of the functions be reduced?
- Does the functionality handle large datasets (>1GB) efficiently? (see the streaming sketch after this list)
- Have any assumptions been made about how this functionality will be used?
- Have any assumptions been made about the input data? Can these assumptions be extracted out into configuration?
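One common way to keep data processing performant on large inputs is to stream records in fixed-size chunks rather than loading everything into memory; exposing the chunk size as a parameter is also an example of extracting an assumption out into configuration. The sketch below is a general illustration, not RDMP code; the CSV layout, file path, and chunk size are assumptions:

```python
import csv
from typing import Dict, Iterator, List

def process_in_chunks(path: str, chunk_size: int = 10_000) -> Iterator[List[Dict[str, str]]]:
    """Stream a CSV in fixed-size chunks, using O(chunk_size) memory
    regardless of the total file size."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        chunk: List[Dict[str, str]] = []
        for row in reader:
            chunk.append(row)
            if len(chunk) >= chunk_size:
                yield chunk
                chunk = []
        if chunk:
            yield chunk  # flush the final partial chunk

# Usage, assuming a CSV extract exists at this (hypothetical) path:
# total_rows = sum(len(chunk) for chunk in process_in_chunks("extract.csv"))
```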
The release testing process should be completed once the release is feature complete and no further code changes are due to take place. This testing ensures that all code changes made during the release play nicely together and have no unintended side effects. It may be useful at this point to revisit each piece of functionality, confirm it works as expected, and perform some user acceptance testing in light of the questions above.
- Does the changelog accurately reflect the pull request changes made?
- If there are database migrations, are they all correctly sequenced and not overlapping?
- Does the release work as expected with a fresh install and via the upgrade path?
- Is all functionality documented?
- Are all version numbers bumped correctly?
- Do all of the managed plugins work with the new release without issues or warnings?