This directory contains the WoT Thing Description Implementation Report and the sources used to generate it.
The report itself is in report.html.
Do not edit this file: it is autogenerated by running npm run assertions from the main directory, and any edits will be overwritten the next time the report is generated.
If you want to update your input for a Member organization, please go to WoT IG testing and see the README there.
Test inputs are collected via the WoT IG and are currently in wot/testing/tests/2019-05.
To update a test run to incorporate new results or test cases, first check out the repository containing the test inputs:
git clone https://github.com/w3c/wot-testing.git
Then install the dependencies for the AssertionTester tool:
git clone https://github.com/thingweb/thingweb-playground.git
npm install
Go to the wot/testing/tests/2019-05 directory.
It contains a subdirectory for descriptions and one for inputs, which in turn contains TDs (TDs of the same implementation must be grouped in a further subdirectory) and manual assertion results as CSV files.
Each is organized by Member organization.
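For orientation, the layout is roughly as follows (a sketch using hypothetical organization, implementation, and file names; the real directory names are those of the contributing Member organizations):
2019-05/
  descriptions/
    ExampleOrg/
      exampleorg-node-servient.html
  inputs/
    ExampleOrg/
      exampleorg-node-servient/
        example-thing.td.json
      exampleorg-manual.csv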
To run all tests, while in the wot/testing/tests/2019-05 directory, run the following:
./update.sh | tee update.log
This puts all the results in the outputs directory and also logs diagnostic output to update.log. To check for errors, run:
grep ERROR update.log > errors.log
grep INVALID update.log >> errors.log
If there are no fatal errors (for example, a TD that does not validate or is not JSON), errors.log should be empty.
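To confirm this from the command line, something like the following (a minimal sketch using the standard test -s check for a non-empty file) will do:
test -s errors.log && echo "errors found" || echo "errors.log is empty"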
The output data files are now in outputs/*/*.csv.
Check that there are no zero-length files in this list using ls -l.
If there are, it means something has gone wrong, most likely an input CSV file with a syntax error. An easy way to check for CSV syntax errors is to open the file in GitHub. If it does not render properly as a table, there is an error, and GitHub will even point to the offending line.
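Rather than scanning ls -l by eye, a one-liner such as the following (a sketch using standard find options) lists any empty CSV files under the outputs directory directly:
find outputs -name '*.csv' -size 0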
Go back to the wot-thing-description/testing directory and update the inputs for the report generator in the inputs/implementations subdirectory:
cd ./wot-thing-description/testing/inputs/implementations
rm *.html
cp ../../../../wot/testing/tests/2019-05/descriptions/*/*.html .
And in the inputs/results subdirectory:
cd ../results
rm *.csv
cp ../../../../wot/testing/tests/2019-05/outputs/*/*.csv .
Now return to the main wot-thing-description directory and regenerate the report:
cd ../../../
npm run assertions
If you need to update the at-risk highlighting in the TD specification, you may also need to run npm run render afterwards.
If new assertions have been added to the TD specification, you will need to run npm run render BEFORE generating the report.
The file suppressed.csv can be used to list assertions for which test results should be ignored. Such results will also not be used for "roll-up" results (when a child assertion, one with an underscore in its name, is used to break down assertions with multiple options into simpler, separately testable assertions).
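For illustration, assuming suppressed.csv simply lists one assertion ID per row (keep whatever header line the existing file uses), an entry suppressing a hypothetical child assertion would look like:
example-assertion_extended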
The file manual.csv can be used to identify assertions that need manual testing or declarations.
The file atrisk.css provides highlighting for at-risk elements in the report but should NOT be edited directly as it is autogenerated. Instead edit inputs/atrisk.csv.
The HTML template for the Implementation Report is in inputs/template.html. If you wish to edit the main explanatory text of the report or update the metadata (date, authors, etc.), do so here.
Implementations are described in HTML files in inputs/implementations, with one file per contributing organization. The IDs declared in these descriptions should be unique and descriptive as they will be used elsewhere, for example, in the interoperability data files referring to those implementations. To add a new description, use the template in inputs/implementations/template.html.t.
Implementations should also be entered into the table in inputs/impl.csv with identifiers, titles, and roles consistent with those assigned in the above HTML files. An implementation can be a consumer, a producer (e.g., something exposing a TD, a Thing), or both (e.g., a Servient).
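For illustration, a row in impl.csv might look like the following (the header names are an assumption and the organization and implementation are hypothetical; match the columns already present in the file):
"ID","Title","Role"
"exampleorg-node-servient","ExampleOrg Node Servient","both"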
A procedure for testing each normative assertion should be given in inputs/testspec.html. These may be included as an appendix in the report, but are currently suppressed.
Test specifications can be given both for assertions given in the specification (see inputs/results/template.csv.t for an automatically generated list of identifiers) and for any extra assertions in inputs/extra-asserts.html.
Assertions used for testing but not (yet) included in the specification may be listed in inputs/extra-asserts.html. The intention is that these assertions should eventually, and before final release, be inserted into the final specification.
Assertions related to features that are at risk of being deleted from the final CR should be identified in the inputs/atrisk.csv file. The assertion text for these will be given a special color in the report table.
Assertions can be assigned to a category in inputs/categories.csv. Eventually the table may be sorted to group categories into sections; for now it is just an extra column.
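For illustration, assuming a simple mapping from assertion IDs to category names (both values below are hypothetical; keep the headers of the existing file), a row in categories.csv might look like:
"ID","Category"
"example-assertion","example-category"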
Dependencies between assertions can be recorded in inputs/depends.csv.
The "Parents" column relates detailed assertions to more general assertions.
The "Contexts" column indicates assertions that only need to be considered in a particular context (either syntactic, if pointing at another vocabulary item, or logical, if pointing at another optional assertion).
Both "Parents" and "Contexts" may have multiple items separated by spaces.
Entries should be IDs of other assertions.
Use "null" if there is no dependency.
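For illustration, a row for a child assertion with one parent and no context might look like the following (hypothetical assertion IDs; the exact header names are an assumption, so follow the existing file):
"ID","Parents","Contexts"
"example-assertion_option","example-assertion","null"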
Each implementation should record which features it has implemented and tested under the inputs/results directory.
All data will be read and merged into the report.
Mark each implemented feature with a status of either "pass" (if it satisfies the specification) or "fail" (if it does not).
If you have not implemented a feature, list its status as "not-impl".
Features not listed at all are not included in the sums; this is distinct from "not-impl", since omitting a feature is intended to allow different features to be reported in different files.
If you deliberately did not implement a feature, please indicate this explicitly.
Any other status will be ignored (e.g., "null" as used in the template).
If you have tested a feature in multiple implementations, check in one file per implementation, using as the filename the ID given in the template for that implementation's description.
Use a convention like ORG-IMPL-MODULE.csv for the filename.
The filename should also be used as an ID in the description of each implementation.
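For illustration, a hypothetical file named exampleorg-node-servient-td.csv might begin like this (the header names are an assumption, so copy the exact header line from inputs/results/template.csv; the assertion IDs are placeholders):
"ID","Status"
"example-assertion","pass"
"example-assertion_extended","not-impl"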
The inputs/results/template.csv file lists all features, but with a "null" status.
Do not edit this file; it is autogenerated.
It is provided so you can use it as a reference and as a basis for your own data files.
Files should be in CSV format, including headers as defined in inputs/results/template.csv, and will be parsed by the csvtojson Node.js library.
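To sanity-check that a results file parses cleanly before checking it in, a quick local check is possible with Node.js (a sketch; the filename is hypothetical, and it assumes the csvtojson package is installed locally):
node -e "require('csvtojson')().fromFile('inputs/results/exampleorg-node-servient-td.csv').then(rows => console.log(rows.length, 'rows parsed'))"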
Remaining work:
- Test specs need to be completed.
- Interop and results data needs to be collected and collated.
- Sort categories together in the output and put the category in a section header instead of a column.
- Sanity check that parent assertions are only marked as "pass" if all sub-assertions are also marked as "pass".