Make tests verify inference (not `any`!) #123
@tycho01 You might look at how TypeScript itself does its tests. They have a harness where the tests are TypeScript files, and when they run the tests the appropriate error, sourcemap, and JS output is generated in an output directory. You tweak the output files so they look like what you expect, and the next time you run the tests they'll fail because the expected output files don't match the transpilation artifacts from the TypeScript compiler. In your case you'd expect certain errors to appear in the errors file.
Interesting, thanks! I'll try to look into this. :)
I assume you're referring to the internal tests for the compiler API. I'm not sure that sort of thing would really be needed, but it's always worth looking at. It seems easier just to write the output to a snapshot and make sure the snapshots don't change here (e.g. the error stays in the right place, there are no errors, etc.).
@blakeembrey The tests I'm referring to are not related to the compiler API (I think those are separate). The tests I'm talking about exercise core TypeScript features for various use cases (e.g. do all ES6 features transpile correctly? do the appropriate type errors appear for the various type-checking features?). Your suggestion to ensure the output snapshot remains consistent is actually what they use. There's a bunch of TypeScript files representing test scenarios in the tests/cases folder, and snapshots of the corresponding expected output in the tests/baselines/reference folder. To prevent regressions, they simply ensure the output for each TypeScript file hasn't deviated from what's stored in baselines/reference in an unexpected way. However, since this codebase is fairly new and your primary concern is verifying the correctness of new code rather than preventing regressions, you'd have to examine and tweak the output files by hand to get any failing tests.
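As a rough illustration of what a baseline check boils down to (the file paths and helpers below are mine, not TypeScript's actual harness code):

```ts
import * as fs from "fs";
import * as path from "path";
import * as ts from "typescript";

// Compile one test case and render its diagnostics the way an error baseline would.
function compileErrors(caseFile: string): string {
  const program = ts.createProgram([caseFile], { strict: true, noEmit: true });
  return ts
    .getPreEmitDiagnostics(program)
    .map(d => ts.flattenDiagnosticMessageText(d.messageText, "\n"))
    .join("\n");
}

// Compare the current output against the stored baseline; a mismatch means
// either a regression or an intentional change that needs the baseline updated.
function checkBaseline(caseFile: string, baselineFile: string): void {
  const actual = compileErrors(caseFile);
  const expected = fs.readFileSync(baselineFile, "utf8");
  if (actual !== expected) {
    throw new Error(`Baseline mismatch for ${path.basename(caseFile)}`);
  }
}
```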
You can even assert the type of various expressions, which should be useful when testing whether inference works correctly.
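For instance, a sketch of pulling inferred return types out of the type checker (the function names and setup here are assumptions, not what the harness actually does):

```ts
import * as ts from "typescript";

// Report the type the checker infers for each function declaration's return value.
// A test could then fail whenever the rendered type string is "any".
function inferredReturnTypes(fileName: string): Map<string, string> {
  const program = ts.createProgram([fileName], { strict: true });
  const checker = program.getTypeChecker();
  const source = program.getSourceFile(fileName)!;
  const result = new Map<string, string>();

  ts.forEachChild(source, node => {
    if (ts.isFunctionDeclaration(node) && node.name) {
      const signature = checker.getSignatureFromDeclaration(node);
      if (signature) {
        const returnType = checker.getReturnTypeOfSignature(signature);
        result.set(node.name.text, checker.typeToString(returnType));
      }
    }
  });

  return result;
}
```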
That looks great, I love the baseline thing; it seems like a great way of getting a better idea of changes in inference. Looks like I could borrow some inspiration from their associated tasks here.
@masaeedu: have you had experience with this test harness? I've been attempting to make a similar gulp file mimicking how they're doing it, but I'm having trouble actually reproducing their approach. Specifically, it seems internally they're relying on the availability of this `Harness` global. Otherwise I might wanna go ask over there...
Still failing: `Harness is not defined`.
The TS guys pointed me to a repo that did this; there's still an outstanding issue, but it's otherwise usable. Just pushed this in, with the results committed to allow checking diffs.
If TS infers a function to, say, return `any` rather than `number`, can we make that a failing test?

@blakeembrey: This taught me something new: we can verify that certain commands output errors as expected. In retrospect though, the problem we're faced with here is that tests are passing while they shouldn't (no type is properly inferred, so they'll just accept anything). I've yet to come up with a solution here.
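For illustration only (nothing like this exists in the repo yet; the helper and the way it walks the file are made up), the kind of check I have in mind would inspect the inferred type's flags and fail on `any`:

```ts
import * as ts from "typescript";

// Collect the names of variable declarations whose inferred type is plain `any`,
// which is exactly what currently lets bad inference slip through green tests.
function findAnyTyped(fileName: string): string[] {
  const program = ts.createProgram([fileName], { strict: true });
  const checker = program.getTypeChecker();
  const source = program.getSourceFile(fileName)!;
  const offenders: string[] = [];

  const visit = (node: ts.Node): void => {
    if (ts.isVariableDeclaration(node) && ts.isIdentifier(node.name)) {
      const type = checker.getTypeAtLocation(node.name);
      if (type.flags & ts.TypeFlags.Any) {
        offenders.push(node.name.text);
      }
    }
    ts.forEachChild(node, visit);
  };
  visit(source);

  return offenders; // a test would assert this array is empty
}
```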