Suggestions and requirements for automating the test infrastructure #261
@vielmetti, OOF, take care about the voluntary work done by others... over years...

But as copiously explained several times before, some of which I added to the test/README.md, we do not really have ...

Somewhere else you wrote about a test that failed for you. When I looked at the reason for the test, it was about a crash/segfault because of ... You reported a small difference in output, using your version of tidy... not really part of the test... but when I run the tests in Windows 64-bit and Ubuntu, using tidy 5.1.9, I have no problem.

The reason I want to improve the test process is so that any developer who adds to the tidy code has a way to quickly test their work before offering it as, say, a PR. Thus I expect it to have been tested before merging to master... If I think some developer has not done this, then I will usually try to find the extra time to do it for him/her...

So at this time I do not see how ... But what can you contribute to the current scripts? They do need some TLC here and there... Always open to what else can be done...
Just to be very clear - I'm not saying that the current test infrastructure for win64, win32, and Ubuntu 64-bit is broken. It seems to fit @geoffmcl's needs very well. Take that as given. What I'm missing is test infrastructure for the Raspberry Pi 2: ARM7 (not Intel x86), running on 32 bits (not 64 bits), and Raspbian (not Ubuntu). It's a test of the whole system on a completely new chip, and thus it requires some care and scrutiny, because none of those tests have been run by the lead developer, or really by much of anyone. And thus any difference, even a tiny one, between the output of a test and its expected output might yield some insight about a portability problem. Happy to do all of the test tooling myself in a fork, to make sure that I understand how all the pieces fit together. I do guarantee that it's going to generate some results where an Intel-only developer test will not find the same problems.
Just to also be very clear - what I hear is that you are only interested in what fits @vielmetti's needs, and maybe finding some difference between Intel and ARM7 testing... no problem... Agreed, we very much welcome extended porting to every system... And when I am sure we are talking about the same code, meaning the same version of tidy, then I will listen more carefully about differences in output when running the tests... But more importantly, how the code can be adjusted to accommodate and fix the output change, and not break other major ports... That could include some MACRO in the code like ...
Hopefully minimal... We would then also need some help from the CMakeLists.txt configuration to generate, say, ... And as read somewhere else, also some discussion and recommendations on package naming conventions, to include a CPU string... which would also start with something similar, to build the full name string...
I have no intention of suggesting that the tests be peculiar to myself. The ARM platform is on millions of Raspberry Pis, and I want to get that platform adequately tested so that the people who are packaging ... I'm not ready to send in a PR yet, because too many of those have been rejected. You can look at https://github.com/vielmetti/tidy-html5/blob/bats-firsttest/test/firsttest.bats for the test rig I'm putting together, with https://travis-ci.org/vielmetti/tidy-html5/jobs/80416894 for a ... I had hoped that the Raspberry Pi (ARM) would pass all of the tests that aren't skipped, but it doesn't, so I'm investigating. I'll open an issue if I find something concrete.
@vielmetti looks like you are having fun with travis-ci... Enjoy...

While suggestions for improving the test infrastructure are always welcome, not so much for automating, as already explained... Anyway, before you can automate ... We have already fixed some of the old scripts, but there is more to do here first... Solving some currently identified problems with just using the diff of two files as the decision criteria... and not talking about just skipping it... but...

In the past I have had to exclude some original tests, and maybe there are more that are not relevant now... maybe some could come back... We need to identify, discuss, and decide... And probably some other improvements to the test infrastructure... But the point is to not automatically run it as is, at this time, yet... but it's your travis-ci to do with as you like...

As you state, please open an issue if you find something concrete. Of course, if the something is related to current issues, then just add it appropriately... Accordingly, closing this. As indicated, I prefer ...
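The diff-of-two-files decision step mentioned above can be sketched as a small POSIX-shell helper (a hypothetical sketch, not one of the repo's actual scripts), which also shows how such a check could report its verdict as a TAP-style line:

```shell
#!/bin/sh
# Hypothetical sketch: decide pass/fail by diffing expected vs actual
# output, and report the verdict as a TAP-style result line.
check_case() {
    num=$1; expected=$2; actual=$3
    if diff -q "$expected" "$actual" >/dev/null 2>&1; then
        echo "ok $num - output matches $expected"
    else
        echo "not ok $num - output differs from $expected"
    fi
}

# Demo with throwaway files standing in for a real tidy test case:
printf 'hello\n' > exp.txt
printf 'hello\n' > act1.txt
printf 'world\n' > act2.txt
check_case 1 exp.txt act1.txt    # ok 1 - output matches exp.txt
check_case 2 exp.txt act2.txt    # not ok 2 - output differs from exp.txt
rm -f exp.txt act1.txt act2.txt
```

A bare diff gives only a binary verdict; the problems he alludes to (platform-dependent output, tests that should be skipped rather than failed) would need extra handling layered on top of a helper like this.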
Thanks for making your intentions clear, @geoffmcl . I wish you luck in your project. |
There is a whole set of test cases in https://github.com/htacg/tidy-html5/tree/master/test, and some scripts there to start to automate the test process. There could be more.
What I'm interested in is modernizing this test infrastructure. One possibility is to use something that uses TAP - see http://testanything.org/producers.html for a whole range of tooling - including tools that run in Windows.
TAP output looks like this:
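The sample that followed here appears to have been lost in the issue export. A representative TAP stream (test descriptions invented for illustration) looks like:

```
1..3
ok 1 - output matches expected output
not ok 2 - exit status differs from expected
ok 3 # SKIP not relevant on this platform
```

The `1..3` plan line declares how many tests to expect, and each subsequent line reports one test as `ok` or `not ok`, which makes the stream trivial for harnesses to parse.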
A tool that I have used on a different project is bats, at https://github.com/sstephenson/bats, which is installable on Linux and OS X from within Travis, and which runs as simple bash scripts that should be suitably easy to handle.

That all said, if anyone has strong feelings about automated testing, please speak up.
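For concreteness, the run-and-assert pattern that a bats @test encodes can be sketched in plain sh (a hypothetical sketch; echo and false stand in for a real tidy invocation):

```shell
#!/bin/sh
# Sketch of the bats pattern: run a command, capture its status and
# output, then assert on both and report a TAP-style result.
run() { output=$("$@" 2>&1); status=$?; }

echo "1..2"

run echo "hello"
if [ "$status" -eq 0 ] && [ "$output" = "hello" ]; then
    echo "ok 1 - command succeeds with expected output"
else
    echo "not ok 1 - command succeeds with expected output"
fi

run false
if [ "$status" -ne 0 ]; then
    echo "ok 2 - nonzero exit status is detected"
else
    echo "not ok 2 - nonzero exit status is detected"
fi
```

bats provides this run helper (plus setup/teardown and skip support) out of the box; the sketch above only shows why the resulting test files stay readable as ordinary shell.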