The evaluation criteria for each submission can be found here.
A convenience translation into English:
The jury of the competition evaluates all submissions based on the following criteria:
theoretical foundations / approach
documentation
software engineering
UI/UX
functionality
The order of these criteria does not reflect their importance; focus on the areas demanded in the actual challenge description. The following points are meant to give you a rough idea of what the criteria are about and should not be interpreted too literally.
Theoretical foundations / approach
Research
Modelling
Complexity analysis
Effort estimation
Documentation
consistency, completeness, and soundness
structuring and layout
code documentation
Software engineering
concise documentation of requirements, design considerations and decisions
concise documentation of tests, results and conclusions drawn
quality of the software development approach, design decisions and test concept
UI / UX
Usability
Exciting extensions
Functionality
functional correctness and quality of the solution (according to the challenge description)
verifiability of the test results
While trying to gather test samples, we became increasingly unsure about how the functionality of our algorithm will be tested. The given classes (DEV, HW, etc.) don't seem to be present in equal proportions on GitHub.
With that in mind, will the algorithm be tested against per-class sample counts that reflect the real-world distribution, or will the classes be represented roughly equally (e.g., 10 tests per class)?
@Ichaelus: You are 💯 right that the different classes are not equally represented across GitHub. The test data will reflect that imbalance, although we will not disclose the actual ratio. Your rationale and the documentation around your algorithm will be taken into consideration at least as much as the "pure" evaluation results.
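For anyone preparing local tests against such a skewed distribution, here is a minimal sketch of one way to handle it: evaluate on a stratified (imbalanced) split and report per-class metrics instead of plain accuracy. Everything in it is an assumption for illustration only: the class names beyond DEV and HW, the ratios, the toy texts, and the scikit-learn pipeline are not part of the challenge and do not reflect the jury's undisclosed ratio.

```python
# Sketch only: illustrative classes, ratios, and model, not the official test setup.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy corpus of repository descriptions with deliberately unbalanced labels.
texts = [
    "web framework written in python",        # DEV
    "javascript build tooling",               # DEV
    "react component library",                # DEV
    "command line utility in go",             # DEV
    "arduino firmware for a weather station", # HW
    "raspberry pi gpio driver",               # HW
    "lecture slides and exercises",           # EDU (hypothetical class)
    "course homework solutions",              # EDU (hypothetical class)
] * 25
labels = (["DEV"] * 4 + ["HW"] * 2 + ["EDU"] * 2) * 25

# A stratified split keeps the (assumed) class ratio in the local test set,
# mirroring the fact that the real test data reflects GitHub's imbalance.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=0
)
print("test distribution:", Counter(y_test))

# class_weight="balanced" counteracts the skew during training.
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(X_train, y_train)

# Per-class precision/recall is more informative than overall accuracy
# when the classes are this unevenly represented.
print(classification_report(y_test, model.predict(X_test)))
```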