Continuous integration/testing #1068
Argh, a Firefox crash ate my long reply, so please forgive my ultra-brevity while I read your links: someone offered a CI server once: http://forum.openframeworks.cc/index.php/topic,5340.msg26506.html#msg26506 unit tests are under consideration; the folder is not there simply because nobody has made it yet. So, posting before it crashes again - I'm sure other people will pitch in, too. Edit: Is this our build script which exits with 0 even though there are errors, or is this from Travis or whatever? ./install_dependencies.sh must be executed as sudo; passwordless sudo seems to be available on Travis. Edit 2: you should probably work with the OF develop branch, not master. Many things have changed by now.
One thing I think will be a big issue is that Travis only seems to offer Ubuntu 32-bit as a build environment, but OF is heavily cross-platform. I think it's impossible to test the linux64, Windows, and macOS versions there - this would be a k.o. criterion, I guess. Also, it seems to be heavily Ruby/web oriented...
This is just my opinion, but I'm not sure I see how this would really be all that useful. Business logic, server connections, DB scripts - all these things are definitely testable, but I'm not sure how you would test "image loading works" or "vertices are out of order" without writing something automated that compares a screenshot of the running application to a reference screenshot pixel by pixel. A naive version of that isn't hard to do, but I have a hard time imagining that it's really worth the effort for all 5 platforms that OF supports. That doesn't even get into testing "sound is ok", "geometry shader uniforms work properly", etc. This might be part of the reason that TDD is really popular in the web world but not in the games world.
Hey!
---> It's like an upload: we don't verify that the uploaded photo of the dog is really a dog, but we verify that the avatar the photo is attributed to does get a photo, and that the photo we show is not the default image nor a broken link.
---> This one is tricky and requires a default image that we use as a reference. We have to compare, as you remarked. Well, it's not hard, but it takes time.
---> Well, we can test that we can load a file, access the file, go to second 00:50 and check whether the volume or FFT value is what we expect.
---> Hum, I don't know :) Well, these cases are not the easiest, and as always we can begin with the small ones and thus avoid regressions. As for Travis, I'll bump them about the platform issues :) and see what they have to offer. Thanks
Well, we still have to make sure images are loaded correctly, right? :-) It wouldn't do to have people with blue faces because the channels were switched, or upside down, or something. So I fear it's not as easy as you imagine...
Hum, well, just for images, don't we have to check the pixel order of the R G B A channels against corresponding reference arrays that we generated and know are well formatted?
Lots of functions are related to internal manipulation and don't concern display stuff at all.
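A minimal sketch of that idea (hypothetical - the file name and expected values are made up, and it assumes the current ofImage::load()/getPixels() API; it is not code from this thread):

```cpp
#include "ofImage.h"

// Hypothetical check: load a tiny 2x2 RGBA test image and compare its pixels
// against hard-coded expected values, so a swapped channel order
// (e.g. BGRA instead of RGBA) or a flipped image is caught immediately.
bool checkRgbaChannelOrder() {
    ofImage img;
    if (!img.load("data/reference_2x2.png")) return false;

    const unsigned char expected[16] = {
        255, 0, 0, 255,    0, 255, 0, 255,     // red, green
        0, 0, 255, 255,    255, 255, 255, 255  // blue, white
    };

    const ofPixels& pix = img.getPixels();
    if (pix.getNumChannels() != 4) return false;
    for (int i = 0; i < 16; ++i) {
        if (pix[i] != expected[i]) return false;
    }
    return true;
}
```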
I don't really have any objection to this, and I doubt anyone else does either, but there are other things that I'd rather have the core team spend time working on, and I'd rather be working on other things. If someone, for instance you, wants to handle this, that'd be really awesome and it could be really helpful.
Yeah, it sure would be useful to have. I only wanted to point out that there would be some things which would be difficult to test - the question is how useful it would be to have tests for only parts of OF. So as joshua said, if you want to go ahead and start working on this and find out how far we can take it, please do, it is appreciated. The automated builds, though, are definitely on the table afaik (but cross-platform is a must). Probably @arturoc can comment further on this.
While we're at it, regarding the multi-platform problem, it probably makes more sense to use a Jenkins server (which is cross-platform). I found a hosted Jenkins solution which is free for FOSS projects and also has GitHub support etc.; apparently it can even do Android via an emulator? http://jenkinshosting.com/
I don't know how applicable it is, but the processing.js team has a nice set of unit tests; some of them do pixel comparisons. While it's really nice over there, I don't know how complex it would be to move those features to OF. I would love to see them moved, though. There is also TeamCity, which I believe is multiplatform: http://www.jetbrains.com/teamcity/
edit: forgot to ask if someone is already working on this (?). hi, i think it would be really nice to have unit tests in OF. in some cases it can be a place to learn some tricks, and it should make it easy to test fixes across platforms, test new features, and test on other platforms. i've started a new app (probably a good way to make it testable on all platforms): https://github.com/diasbruno/ofCoreTest (works as a normal OF app - clone it into myApps or devApps). for now, it's xcode only. maybe later we can move to a cloud service.
afaik, nobody is working on this at the moment. yes, tests (not necessarily unit tests) would be awesome to have. I think the major question is: who will write all the tests? ^^ edit: it's probably worthwhile to check how Processing.js does testing, as @jonbro indicated above.
the syntax is very common in a lot of frameworks in ruby, python, java, flash!! there is also CATCH, which is on github; it's for c, c++ and obj-c. it's really good, but the syntax looks quite different. so, i thought it would be better to use cpptest. it's small, simple, popular... so it looks good. if someone has experience with another test tool, more robust or whatever, we can try it. :) i think we can easily test all the small pieces - vectors, allocations... maybe we'll find some stuff, maybe not. load image locally | fail loading locally. so, i think we won't spend much time testing and we'll get the benefits. i have some time available, so i think i can start. :) i'll take a look at processing.js...
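For comparison, the CATCH style mentioned above looks roughly like this - a hypothetical sketch, not code from this thread; the ofVec2f case is just a stand-in example:

```cpp
// Catch uses free-standing test cases and plain assertion macros
// instead of grouping tests into a suite class.
#define CATCH_CONFIG_MAIN  // let Catch generate main() for us
#include "catch.hpp"

#include "ofVec2f.h"

TEST_CASE("ofVec2f addition is component-wise", "[math]") {
    ofVec2f sum = ofVec2f(1, 2) + ofVec2f(3, 4);
    REQUIRE(sum == ofVec2f(4, 6));
}
```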
ah i see. also take a look through the forum thread you posted in - jonbro elaborates on what he did in more detail (he apparently used gunittest), and there are some keywords like "reftest", which sounds like what we could do with pictures etc. I think it's ok/best to put all the test-related material in a /tests/ folder in the openFrameworks root directory. Great to see some movement on this at last! Now that the build server is getting more and more stable, this is the next step!
yep! i saw jonbro's branch with gunittest... it looks great, and i didn't know what the status of this discussion was... so i decided to propose a new idea that could be simple enough to implement - without dependencies on comp. libraries and stuff, so everybody can just 'clone' and then write the tests. (hope i'm not doing something wrong :) ). i think it's better to write it as an app, because it uses the compiled version with everything in OF, so we don't need to rewrite configuration and whatever... edit: this is not difficult to set up. so, we can discuss which solution should be used, then we can move the app to the root/test folder. i'll look at jonbro's gunittest work in more detail, and if it looks better than cpptest, we can continue his work...
I had totally forgotten about that branch! back from the dead! So I don't have a great memory of why I picked GUnitTest, and I don't know what its maintenance level is. Also, I'm not really sure what processing.js is doing these days, but their integration testing back in the day was super awesome - basically pixel checking against a set of expected images. Mozilla also does this type of testing for their rendering engine: https://developer.mozilla.org/en-US/docs/Creating_reftest-based_unit_tests Good luck with this everyone!
@jonbro, this is great! thanks. i picked cpptest for its simplicity and convention:

```cpp
#include <cpptest.h>

class ofCoreTest : public Test::Suite {
public:
    ofCoreTest() { TEST_ADD(ofCoreTest::test_a) TEST_ADD(ofCoreTest::test_b) }
protected:
    virtual void setup() {}      // runs before each test
    virtual void tear_down() {}  // runs after each test (cpptest spells it tear_down)
private:
    void test_a() { /* ... */ }
    void test_b() { /* ... */ }
};
```

since this integration will take some time, i think we can use cpptest for now, and move to gunittest later if needed. suggestions?
I think most testing frameworks have more or less the structure above. Yes, I think the best course of action would be to look at a couple of testing frameworks, choose the most suitable one and go with it.
Another idea: we've already got a load of examples. Maybe we can use/recycle those as reftests somehow? That way we could probably get significant code coverage.
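Such a reftest could look roughly like the following sketch (hypothetical - the helper names, tolerance and reference path are made up, and it assumes a GL context is available, e.g. via the xvfb trick mentioned further down):

```cpp
#include <cstdlib>
#include "ofMain.h"

// Compare two pixel buffers with a small per-channel tolerance, so harmless
// driver or anti-aliasing differences don't cause false failures.
bool framesMatch(const ofPixels& a, const ofPixels& b, int tolerance = 2) {
    if (a.size() != b.size()) return false;
    for (size_t i = 0; i < a.size(); ++i) {
        if (std::abs(int(a[i]) - int(b[i])) > tolerance) return false;
    }
    return true;
}

// Hypothetical reftest: after an example has drawn one frame, grab the
// framebuffer and compare it against a stored "known good" screenshot.
bool runReftest(const std::string& referencePath) {
    ofImage grabbed, reference;
    grabbed.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
    if (!reference.load(referencePath)) return false;
    return framesMatch(grabbed.getPixels(), reference.getPixels());
}
```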
I've predominantly used pytest, for the following reasons. Firstly, I'm usually building with SCons, so the build environment is Python to begin with; plus, Python makes it easy to script quick little tests and can leverage modules easily (sys/os, easy regex, etc.). A simple 'py.test-2.7 --tb=short -v' globs all test_regression.py files in any subdirs and executes them, reporting any failures etc. These are just reported to the console, but again, since it's Python, we could easily set it up to notify/network/post the results or what have you.
thanks for your insight, Keith! yeah, python is practical - I've learned to really like it lately. :-)
@bilderbuchi great idea! +1 @kpasko python is really great! also, customization of the output can be done in cpptest. so, i think we have great options to go with.
@bilderbuchi i'm still trying to find a better way to make these automated tests work. if anyone has time to take a look, it would be awesome. this should be a test for adding new features. (also, this should help with #1847, but i'm still working on it.)
[update] i'm working on automated testing for the math core. https://github.com/diasbruno/ofCoreTest/tree/templating-of-math-core/src
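A math-core test in that style might look roughly like this (a hypothetical sketch assuming cpptest as discussed above and OF's ofVec3f - it is not taken from the ofCoreTest repo):

```cpp
#include <cmath>
#include <cpptest.h>
#include "ofVec3f.h"

class ofMathTest : public Test::Suite {
public:
    ofMathTest() {
        TEST_ADD(ofMathTest::test_cross_product)
        TEST_ADD(ofMathTest::test_normalization)
    }

private:
    void test_cross_product() {
        // x cross y must give z in a right-handed coordinate system
        ofVec3f result = ofVec3f(1, 0, 0).getCrossed(ofVec3f(0, 1, 0));
        TEST_ASSERT(result == ofVec3f(0, 0, 1));
    }

    void test_normalization() {
        // any non-zero vector normalizes to unit length
        ofVec3f v(3, 4, 0);
        TEST_ASSERT(std::fabs(v.getNormalized().length() - 1.0f) < 1e-6f);
    }
};
```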
hey! great work, I'll try to take a look soon, hopefully over the weekend.
yeah, finally got some time to work on this... it's now running on travis-ci, but i'm still trying to set up the environment to build. https://travis-ci.org/diasbruno/openFrameworks/builds/7304578
ha, travis-ci integration is awesome! I was hoping to get this one day, but always thought that compiling OF and all the tests would take too long, and Travis (meant for more web/script-based stuff afaik) would time out your build. is this not the case anymore? last time I checked, I think not even the OF lib compile time fit into the permissible build duration.
it seems to be compiling well - close to getting it working. the OF library is getting compiled; i just need to set up the travis environment to run all the tests. :) the makefile was made from the console output of Code::Blocks on linux. when it works, i'll try to make it use the .../OFcompiled/project makefiles.
are you using the OF master or develop branch? the new makefile system in develop is pretty powerful...
btw as you are still collecting the tests: in light of a possible future integration into OF, I think it would be better to name the test cpp files e.g.
develop. i didn't use them, just for simplicity (take the log from Code::Blocks and make it just work on linux), but i'll try to make use of them. sure, this could be a problem. maybe: test_of*.cpp?
up and running on travis-ci! https://travis-ci.org/diasbruno/openFrameworks/builds/7390042
great job!
i need to change the exit status reported to travis, so it will invalidate the current build...
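For context, propagating the test result into the process exit status with cpptest looks roughly like this (a sketch; the suite name is the hypothetical one from above) - Travis only marks a build as failed when the test command exits non-zero:

```cpp
#include <cstdlib>
#include <cpptest.h>
#include "ofCoreTest.h" // hypothetical header declaring the suite sketched earlier

int main() {
    ofCoreTest suite;
    Test::TextOutput output(Test::TextOutput::Verbose);
    // run() returns true only if every test passed; map that onto the
    // process exit code so CI can tell success from failure.
    return suite.run(output) ? EXIT_SUCCESS : EXIT_FAILURE;
}
```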
@arturoc @bakercp do you see an easy way to adapt this to use the new makefile system to run the tests? Some of the stuff in here feels like it duplicates a lot of the makefile build process. Also, I'd love to be able to
also, pinging @benben - I think you will find this interesting.
generally, I really like where this is going. Btw, is it easy to add coverage reporting later on? that will be useful when we start to add more tests.
i still have the previous implementation with make (removed just for cleanup)... i tried to implement it with the OF makefiles, but there are some rules that i couldn't find a way to make work well :/ i will put the old branch back later... i was thinking of writing the apis for reports and analysis in ruby (for this 'study'), and travis has support for everything, so it would be nice to keep everything in one place - with ruby and rake, in this case. but, of course, any setup would work as well; i just thought it would be more consistent. ideas are welcome. no no, i'm not a ruby dev, but it's nice to write in ruby - it has good docs... :)
[update] cleaned up everything, and it's now running with the OF makefiles. :)
nice to see this up and running! those linking failures are still there, though - that's curious...
maybe it's because of that ofParameter friend typename/class business, blah blah blah... but i'll check it... there are two things left to do, and then people who can help with testing can start using it: 1 - create more logs. 2 - config.h. it's getting better... :)
For Windows testing we can use http://www.appveyor.com/
After seeing @bilderbuchi's comment about automated testing, I set up a quick test branch for this:
The limitation we have with Travis is the time per build (max 1 hour). To test this, I created another branch from this one which should soon fail.
Ah nice - when I checked Travis in the beginning, the limit was 10 min or so, so I discarded the idea of using it for OF. Nice work! One thing: could you leverage the make system instead of xcodebuild in the Travis job? (this does work on osx already, right?) That way we could also detect errors in our makefiles, and the build code would be more homogeneous (it could easily include linux, too).
ha, I just found out that you can create a virtual framebuffer (xvfb) with Travis now: http://docs.travis-ci.com/user/gui-and-headless-browsers/#Using-xvfb-to-Run-Tests-That-Require-GUI-%28e.g.-a-Web-browser%29
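Putting the make-based build and xvfb together, a Travis job from that era could look roughly like this - a hypothetical .travis.yml sketch in the legacy format; the paths, targets and test binary name are illustrative, not taken from any branch in this thread:

```yaml
language: cpp
compiler: gcc

before_install:
  # OF's Linux dependencies; needs passwordless sudo, which Travis provides
  - sudo ./scripts/linux/ubuntu/install_dependencies.sh

before_script:
  # start a virtual framebuffer so GL-based tests have a display to render to
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start

script:
  # build the OF core library, then build and run the tests
  - make -C libs/openFrameworksCompiled/project
  - make -C tests && ./tests/bin/ofCoreTest
```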
can this be closed, since CI is basically working now and #4259 is the latest work on unit tests?
Yeah, I guess - this issue has become very long anyway, so it will be better to open new issues when further work arises.
Related issue: #4162
Hey!
Me again. I didn't find this info anywhere. I'm currently testing the travis-ci.org service.
As many people are working on the project and many, many issues are raised and fixed, I thought it could be great to add some tests, and thus the Travis service.
It would avoid the problem encountered here:
7ca7833
and the discussion here:
#804
or there:
#921
I read that:
But I didn't find them.
The Travis service will let us get the little badge we know well:
![Build Status](https://secure.travis-ci.org/soixantecircuits/openFrameworks.png?branch=master)
Also, why not add some spec tests, to support future development of features?
I'm currently reading:
http://www.squidoo.com/cplusplus-behaviour-driven-development-tools#module124841511
http://sourceforge.net/apps/mediawiki/turtle/index.php?title=Turtle
For those who want some other nice reading:
http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle
The Travis test is here; currently it is a total fake, but I just wanted to propose the idea of test-driven development:
http://travis-ci.org/#!/soixantecircuits/openFrameworks/builds/854259