
README: Document the reasons/uses of each type of example #784


Merged: 1 commit, Dec 17, 2018
README.md: 10 changes (9 additions, 1 deletion)

@@ -156,9 +156,17 @@ Examples are named `<type>-<name>`.
There are three possible types of examples:

* success: The example is expected to pass the tests.
There should be at least one of these per exercise.
* There _must_ be at least one `success` example per exercise, in order to confirm that it is possible to solve the tests.
* There _may_ be more than one `success` example for a given exercise, but these are intended for use when we want to confirm that multiple type signatures for a given solution will compile and pass the tests (see the sketch after this list).
* We do not intend to use multiple `success` examples just to showcase a wide variety of possible solutions, since that is not among the goals of this repository.
* fail: The example is expected to build, but fail the tests.
* These are intended for use when we want to make sure that the track tests have adequate coverage: that they catch certain classes of incorrect or inefficient solutions.
* It's suggested that these be used only for tests that are specific to the track, on the assumption that tests sourced from problem-specifications have already been judged to have appropriate coverage by the reviewers of that repository.
* error: The example is expected to fail to build.
* There is only one intended use of this so far: a single check that a solution without a type signature will fail to build (because CI builds with `--pedantic`); a sketch of this appears below the list.
Member Author commented:

Aside: This is of questionable usefulness as actionable information for us, for the following reasons:

If the error-nosig example ever successfully builds, that means there is a problem with stack --pedantic. What would we do differently in that situation?

Wait for stack to solve this problem, potentially reporting it if nobody has before?

We would have to be on our guard that other instances of --pedantic are not being respected, I suppose? So the single error example sort of serves as a canary in the coal mine?

So I guess it's useful info, but given that the most likely mode of failure is out of our control, having that cause all further CI to fail would be unfortunate. My ideal would be keeping the information available but not having it break CI.

Contributor replied:

I don't mind that we have it now that we do.

* We do not intend for any additional uses of this type of example.
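
As a rough illustration of the multiple-`success` case, here is a minimal sketch assuming a hypothetical sum-of-multiples style exercise. The module, function, and signatures are illustrative, not taken from the track, and in practice the two variants would live in separate `success-*` example directories:

```haskell
-- success-standard (illustrative): the concrete signature most solutions use.
module SumOfMultiples (sumOfMultiples, sumOfMultiplesGeneral) where

sumOfMultiples :: [Int] -> Int -> Int
sumOfMultiples factors limit =
  sum [n | n <- [1 .. limit - 1], any (\f -> f /= 0 && n `mod` f == 0) factors]

-- success-general (illustrative): the same body under a more general
-- signature, kept only to confirm that it also compiles and passes the tests.
sumOfMultiplesGeneral :: Integral a => [a] -> a -> a
sumOfMultiplesGeneral factors limit =
  sum [n | n <- [1 .. limit - 1], any (\f -> f /= 0 && n `mod` f == 0) factors]
```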

These example types were proposed and accepted in https://github.com/exercism/haskell/issues/397.
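
For the `error` case, the check works because `stack build --pedantic` roughly amounts to building with `-Wall -Werror`, and `-Wall` enables `-Wmissing-signatures`, so a top-level binding with no type signature becomes a build error. A minimal sketch, with illustrative module and function names:

```haskell
-- error-nosig (illustrative): the exported binding has no type signature.
-- GHC can infer one, but -Wmissing-signatures (from -Wall) combined with
-- -Werror, both implied by `stack build --pedantic`, turns the resulting
-- warning into a build failure, which is exactly what this example checks.
module LeapYear (isLeapYear) where

isLeapYear year = divBy 4 && (not (divBy 100) || divBy 400)
  where
    divBy n = year `mod` n == 0
```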

### Test suite
The test suite should be derived from the respective `problem-specifications/exercises/<exercise-name>/canonical-data.json` and comply with some formatting and coding standards (to get an idea, you may look at some of the existing tests).
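
As a rough sketch of what such a test suite can look like, assuming a hypothetical `leap` exercise whose canonical data describes an `isLeapYear` function (the cases and descriptions here are illustrative; the existing exercises are the authoritative reference for formatting):

```haskell
-- Illustrative test suite derived from a hypothetical canonical-data.json
-- for a `leap` exercise.  Each `it` case mirrors one canonical case.
import Test.Hspec (Spec, describe, hspec, it, shouldBe)

import LeapYear (isLeapYear)

main :: IO ()
main = hspec specs

specs :: Spec
specs = describe "isLeapYear" $ do
  it "year not divisible by 4 is a common year" $
    isLeapYear 2015 `shouldBe` False
  it "year divisible by 4, not divisible by 100 is a leap year" $
    isLeapYear 1996 `shouldBe` True
  it "year divisible by 100, not divisible by 400 is a common year" $
    isLeapYear 2100 `shouldBe` False
  it "year divisible by 400 is a leap year" $
    isLeapYear 2000 `shouldBe` True
```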