Cannot test error reporting #136
Other issues affected by this.
If we could build out and standardize the testing setup a bit more, perhaps we could also standardize a way of capturing console output to log files. I currently use the following gulpfile inside a …
I've updated the gist linked above, but I'm still having difficulty figuring out how to capture the error output. Anyway, it also doesn't seem to stream properly, since I can't get it to work with autoprefixer. I also obviously wrote this because I'm impatient with the node-sass update lag ; ) Recopied below:

```js
var gulp = require('gulp');
var run = require('gulp-run');
var rename = require('gulp-rename');

gulp.task('sassc', function () {
  return gulp.src('test.scss', { buffer: false })
    .pipe(run('../bin/sassc -s', { verbosity: 1 }))
    .on('error', function (err) { this.end(); })
    .pipe(rename(function (path) { path.extname = '.css'; }))
    .pipe(gulp.dest('.'));
});

gulp.task('watch', function () {
  gulp.watch('test.scss', ['sassc']);
});

gulp.task('default', ['watch']);
```

CC @mgreter
Just in case that helps: at sass-compatibility/sass-compatibility.github.io, we have a way to invert a condition for such a test; basically an "unexpect", i.e. "make sure this output is not equal to this". Testing:

```scss
@error "ok";
```

Then we made sure that, once compiled, the output is not the expected one. In case that helps.
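The "unexpect" idea above could be sketched in Ruby as follows; note that `assert_unexpected` is a hypothetical helper for illustration, not an existing sass-spec API:

```ruby
# Minimal sketch of the "unexpect" idea: a test passes only when the
# actual output differs from a known-bad output.
# `assert_unexpected` is a hypothetical name, not part of sass-spec.
def assert_unexpected(actual, unexpected)
  actual != unexpected
end

# An engine that honours @error should abort and emit no CSS at all,
# so its output must NOT equal the CSS a non-supporting engine emits.
css_without_error_support = ".test { color: red; }\n"
css_after_error = ""  # compilation aborted, nothing emitted

puts assert_unexpected(css_after_error, css_without_error_support)          # true
puts assert_unexpected(css_without_error_support, css_without_error_support) # false
```

The point is that no error capture is needed at all: the assertion only inspects the CSS channel.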
Another way could be to use custom functions.
I have something working here, but wanted to get some feedback first on the proposed format.

```scss
@function throw($msg) {
  @error $msg;
  @return null;
}

.test {
  foo: mock-errors(true);
  bar: throw("fetched");
  content: last-error();
  baz: mock-errors(false);
}
```

Expected result:

The functions for … Please let me know what you think!
Can the custom extension define mixins? Ideally we could test:

```scss
@function throw($msg) {
  @error $msg;
  @return null;
}

@include mock-errors {
  .test {
    bar: throw("fetched");
    content: last-error();
  }
}
```
I think this should work. I guess the functions are always executed for every use of the extension. But remember that we need to tell it somehow to "ignore" the actual error (like a try or eval block). That's why I had the mock-errors(true) / mock-errors(false) switches.
My thinking was the mock-errors mixin itself would handle that.
My test indicates that this works:

Expected result:
I guess my question was: could the mixin definition itself be declared in the extension?
Will this pick up all error warnings, or only those thrown by @error?
Please forgive any mistake I might make on this issue, but do we really need to capture compiler errors? Can't we simply assert that the output should not match the (un)expected output (as I mentioned earlier in the discussion)?
@xzyfer: You should be able to first compile/add the boilerplate mixin to the context, and afterwards you can use it as normal. I don't see why this shouldn't work! The code is called whenever a …
I was hoping the boilerplate mixin could be defined in the extension itself. Needing to bootstrap specs isn't ideal.

As to @lunelson's point, this doesn't sound like it would catch the cases where the compiler itself encounters an error, i.e. when …
Adding an extra piece of information to the discussion: now that sass-compatibility uses sass-spec, we cannot do reverse testing using some kind of unexpected output. Consider the following:

```scss
@error "Oops!";
```

A Sass engine supporting @error would fail with:

Oops!

Would this be possible to implement?
Sass has an output format for errors that results in CSS output. IMO, errors should be compared using that output. We also need to track warnings; for this, I think there should be a file of expected warnings as well.
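Comparing a captured error against an expected file, as proposed above, might look like this minimal sketch in Ruby (the file name `expected_error` and the function name are assumptions for illustration, not the final sass-spec layout):

```ruby
require 'tmpdir'

# Sketch: a spec directory may carry an expected error transcript;
# if present, the runner compares the engine's stderr against it.
# The file name "expected_error" is assumed for illustration only.
def error_matches?(spec_dir, actual_stderr)
  path = File.join(spec_dir, 'expected_error')
  return true unless File.exist?(path)  # no expectation recorded
  File.read(path).strip == actual_stderr.strip
end

Dir.mktmpdir do |dir|
  File.write(File.join(dir, 'expected_error'), "Error: Oops!\n")
  puts error_matches?(dir, "Error: Oops!")  # true
  puts error_matches?(dir, "Error: other")  # false
end
```

A `warnings` file could be handled the same way, just against the warning stream instead of the error stream.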
I don't think an unexpected output is required. Simply making an output file that is empty should suffice.
This.
@chriseppstein I used pretty much the same approach for my local test runner. But IMHO there could be a use case for having …
Perhaps it would be a good idea to replace the Ruby minitest gem with something like C++ minitest, to have more native control over memory and std streams and to test other aspects of libsass beyond Sass code generation. This way we could also write independent tests for source maps and external plugins in the future. One thing we would probably need to write is the coverage report, for which we can hook native lcov or gcov. Also, this is slightly related: sass/node-sass#633.
Before we rewrite the whole thing, I just had a look at the Ruby code with my basic understanding of the language and found the following:

It seems to be trivial to add testing against std error, std output, or both. You could even have something like "I want this in standard output and not this in standard error". The logic can be pretty arbitrary. The only real issue is to specify how to express those rules. Something like:

plus checking of some flags. As I understand the Ruby code right now, options 1, 4 and 5 are currently implemented. There is also a possibility to generate "expected" output files with a command line option; we should decide the order in which those files are written, for example:

As I understand the Ruby code right now, option 2 is currently implemented. So once we agree how to specify the expected error output and the logic behind it, it should be trivial to implement.
IMHO we just need someone to do the actual heavy lifting! I have only fixed/updated about 5% of all error messages so far to conform to Ruby Sass, so there is more heavy lifting involved there too!
I have a preliminary fix. My first Ruby coding! :)
Comparing stacktraces will be a challenge. sass/libsass#1555 is just the tip of the iceberg :)
If files "error" and "status" are existing in the test directory, do expect errors and test against them. Also create error and status files on --nuke. An error will be reported in this case anyway. Fixes: sass#136
First shot: #494
@saper I've done some work to normalise Ruby Sass and LibSass error messages in https://github.com/sass-compatibility/sass-compatibility.github.io/blob/master/Rakefile#L94-L95. Might be of help.
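Normalising engine-specific error messages before comparing them might look like the following sketch; the substitution rules here are made up for illustration and are not the ones in the linked Rakefile:

```ruby
# Sketch: strip engine-specific noise so Ruby Sass and LibSass errors
# can be compared. The exact rules below are illustrative only.
def normalize_error(msg)
  msg.lines.first.to_s                 # keep only the first line (drop backtrace)
     .sub(/\AError:\s*/i, '')          # drop a leading "Error:" prefix
     .gsub(/on line \d+/, 'on line N') # mask line numbers
     .strip
end

puts normalize_error("Error: invalid property on line 3 of test.scss\n  from ...")
# → "invalid property on line N of test.scss"
```

Comparing the normalised forms sidesteps cosmetic differences while still catching genuinely different diagnostics.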
@saper to be honest, as a first pass, just being able to assert that an error should have occurred, and did, is a big step. Currently when Ruby Sass errors, the spec fails, which is the core problem. Asserting that the error messages match can come later.
@xzyfer Initially I wanted to just test the status code, but it was so easy to just fix it all. What is the proper invocation of the test suite with Ruby Sass? I think I get 80 failures using …

This is on Ruby Sass stable 4ef8e31.
That looks right, although you don't need --ignore-todo since there are technically no todos for Ruby Sass. Those failures might be correct if you're using a newer version of Ruby than what's defined in the …
I have gone through all error-test issues, tested them under #494, and posted the results to #502.
@saper my preference would be that error message mismatches aren't considered failures for the time being. To be quite honest, we cannot match some of Ruby Sass's error messages without a good chunk of work. I instead propose that an error test is successful if both engines produce errors, and that we add a --strict flag for failing on error message mismatches. This allows us to keep moving forward without reintroducing regressions, and without losing a lot of time on exactly matching error messages at the cost of feature work.
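The proposed two-tier check could be sketched as follows (flag handling simplified to a keyword argument; the function name is hypothetical, not the actual runner code):

```ruby
# Sketch of the proposed behaviour: by default an error test passes as
# long as the engine errors out at all; --strict additionally requires
# the error messages to match.
def error_test_passes?(expected_msg, actual_msg, errored:, strict: false)
  return false unless errored            # must produce *an* error
  return true unless strict              # lenient mode: any error is fine
  expected_msg.strip == actual_msg.strip # strict mode: messages must match
end

puts error_test_passes?("Oops!", "Something else", errored: true)                # true
puts error_test_passes?("Oops!", "Something else", errored: true, strict: true)  # false
```

The lenient default keeps the suite green while LibSass's messages converge on Ruby Sass's, and --strict becomes the eventual goalpost.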
If files "error" and "status" are existing in the test directory, do expect errors and test against them. The errors returned by the Sass engines are now treated as test failures (not fatal errors). --unexpected-pass reports an error whenever a test marked as "todo" does pass. Failing "todo" errors are silently marked as "passed". Also re-create output test files with --nuke. "status" and "error" files are re-created if necessary. Any failures will be reported in this case normally. Fixes: sass#136
Closed via #494. Thank you everyone!
As it stands it's not possible to assert an error condition, e.g. the error on duplicate keys in maps (sass/libsass#628). There are cases where a spec like the following should be possible: https://github.com/sass/sass-spec/pull/133/files.

Maybe we can redirect error output in the assertion somehow. Thoughts?

/cc @lunelson @hcatlin
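Capturing a compiler's error output separately from its normal output is straightforward in Ruby with Open3; a minimal sketch (here Ruby itself stands in for a Sass compiler, so the command line is illustrative):

```ruby
require 'open3'
require 'rbconfig'

# Sketch: run a compiler as a subprocess and capture stdout, stderr and
# the exit status separately, so an assertion can target the error
# channel. Ruby itself is used as a stand-in for a Sass compiler here.
stdout, stderr, status = Open3.capture3(
  RbConfig.ruby, '-e', 'warn "Error: Oops!"; exit 1'
)

puts stdout.empty?     # true -- nothing on stdout
puts stderr            # Error: Oops!
puts status.exitstatus # 1
```

A spec runner built on this can assert on `stderr` and `status.exitstatus` without ever conflating them with the compiled CSS on `stdout`.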