[stevens] Build from Megamodels #13
Dear @PerditaStevens, importing the megamodelbuild project into Eclipse was straightforward thanks to Maven, and I also easily executed the unit tests to validate that everything was working. However, I tried to execute the bxexample.sh script out of curiosity and obtained the following error.

May I ask whether the script is still relevant to the project?

Sincerely, Manuel Leduc
Summary

As-is, the examples are early stage and serve as illustrations backing the theoretical elements in the paper. They do not aim at implementing and validating the author's approach on a concrete case study, or at elaborating an example of reasonable size and complexity, with regard to the artifact evaluation criteria:

* As complete as possible
* Well-documented
* Easy to (re)use
Summary

This artifact accompanies a paper on improving build systems using megamodels and a notion of consistency (consistency can be used for checking which parts of the system need to be rebuilt). The artefact consists of Java classes with some documentation and a couple of unit tests.

Assessment

Coverability: Met expectations
The Java classes cover the implementation of the two examples given in the paper.

Packaging: Met expectations
Both the Java classes and the unit tests can easily be built using Maven.

Reproducibility and consistency: Fell below expectations
I found this artifact very difficult to evaluate in terms of reproducibility or consistency with respect to the paper. For the Java code, the author states that the code is expected to be read rather than executed. However, despite the fact that there are classes for each of the examples, I found no way to relate the code (for example, the restoreConsistency method of each class) to the paper. Something similar happens with the unit tests: I found no way (and there is no documentation) to relate them to the paper, beyond the fact that there is a class with unit tests for each example.

Documentation: Fell below expectations
There is no documentation on what can be done with the artifact besides "reading the code and executing the unit tests"; the reviewer has to guess what needs to be done. Reading the other reviewers' comments, it seems that several of us were confused and, for example, tried to execute the bxexample.sh script; good documentation would have avoided this kind of confusion. For reading the code, I would have expected some comments on how to relate the code to the paper. For the unit tests, I would have expected some explanation of what each test is actually verifying or validating. Also, the documentation is spread across different README files and comments in different classes, making it very difficult to follow. Finally, it took me a while to find out that the unit tests could be executed by running "mvn clean test".
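To make the review's reference to restoreConsistency concrete, here is a minimal sketch of the kind of structure being discussed: a build step expressed as a consistency relation that rebuilds a derived artifact only when a check against its source fails. Everything here (the names ConsistencyRelation, CompileRelation, SourceFile, ObjectFile, and the version-stamp check) is an illustrative assumption, not the artifact's actual code.

```java
// Demo.java — a hypothetical sketch, NOT the artifact's actual API: a build
// step modelled as a consistency relation whose restoreConsistency method
// rebuilds the derived artifact only when a consistency check fails.

interface ConsistencyRelation<S, T> {
    // True if the target is already up to date with respect to the source.
    boolean check(S source, T target);

    // Re-establish consistency (e.g. by recompiling) and return the new target.
    T restoreConsistency(S source, T target);
}

final class SourceFile {
    final int version;          // stand-in for a content hash or timestamp
    SourceFile(int version) { this.version = version; }
}

final class ObjectFile {
    final int builtFromVersion; // version of the source it was built from
    ObjectFile(int builtFromVersion) { this.builtFromVersion = builtFromVersion; }
}

// A toy relation: the object file is consistent iff it was built from the
// current version of the source file.
final class CompileRelation implements ConsistencyRelation<SourceFile, ObjectFile> {
    @Override
    public boolean check(SourceFile source, ObjectFile target) {
        return target != null && target.builtFromVersion == source.version;
    }

    @Override
    public ObjectFile restoreConsistency(SourceFile source, ObjectFile target) {
        // Skip the rebuild when nothing changed, mirroring an incremental build.
        return check(source, target) ? target : new ObjectFile(source.version);
    }
}

public final class Demo {
    public static void main(String[] args) {
        CompileRelation compile = new CompileRelation();
        SourceFile src = new SourceFile(2);
        ObjectFile obj = new ObjectFile(1);          // stale: built from version 1
        System.out.println(compile.check(src, obj)); // false -> needs rebuilding
        obj = compile.restoreConsistency(src, obj);  // "rebuilds" the object file
        System.out.println(compile.check(src, obj)); // true -> now consistent
    }
}
```

Running Demo prints false then true: the stale object file fails the check, restoreConsistency rebuilds it, and the check then passes. This is the sense in which a consistency notion can drive decisions about which parts of a system need rebuilding.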
I am away without good email access (or any chance to update the artefact) right now, but I appreciate the comments about where more documentation would be useful, and will improve matters on my return.

Perdita
Dear @PerditaStevens, based on all the comments and reviews provided by the members of the Artifact Evaluation Committee of MoDELS 2018, we have concluded that this artifact conforms to expectations and is hereby approved. Please use the badge instructions page to add the badge of approval to your article, and add a link to the ReMoDD entry with URI http://remodd.org/node/582 to the camera-ready version of the paper.

Thank you very much for putting extra effort into preparing and finalising the artifact. If any of the comments above are still not addressed, please try to accommodate them before the conference. In particular, the documentation could be made more reader-friendly, to help potential re-users navigate the artifact internals.
Submitted by @PerditaStevens to https://github.com/modelsconf2018/artifact-evaluation/tree/master/stevens
Paper: http://homepages.inf.ed.ac.uk/perdita/MegamodelBuild/modelspaper.pdf