In 'generate_dataset' execute the following scripts in the given order. The resulting data set, the corresponding PNG images of the musical notation, can be found in 'generate_dataset/png_objects'.
- generate_svg_notes_with_rotation.py
- generate_svg_notes_grouped_with_rotation.py
- generate_svg_symbols_with_rotation.py
- convert_svg_to_png.py
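The generation scripts above produce note glyphs as SVGs with a small random rotation. As an illustrative sketch (not the project's actual code; the function name, canvas size, and glyph path are assumptions), writing one rotated glyph with the standard library could look like:

```python
import random
import xml.etree.ElementTree as ET

def make_rotated_note_svg(glyph_path: str, max_angle: float = 15.0) -> str:
    """Build a small SVG containing one glyph, rotated by a random
    angle around the canvas centre (illustrative sketch only)."""
    angle = random.uniform(-max_angle, max_angle)
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width="64", height="64", viewBox="0 0 64 64")
    # SVG's rotate(angle cx cy) spins the group around the canvas centre
    group = ET.SubElement(svg, "g", transform=f"rotate({angle:.2f} 32 32)")
    ET.SubElement(group, "path", d=glyph_path, fill="black")
    return ET.tostring(svg, encoding="unicode")

# Example with a placeholder rectangular "stem" path
doc = make_rotated_note_svg("M30 10 L34 10 L34 44 L30 44 Z")
```

The resulting SVG string can then be rasterized to PNG, which is the role of convert_svg_to_png.py.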
In 'object_detection' execute the following scripts in the given order. The output can be found in 'object_detection/separated_notes'. There is one directory for each tune, containing its rows as subdirectories.
- identify_lines.py
- separate_notes.py
- identify_groups_of_notes.py
- separate_groups_of_notes.py
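A common way to locate staff lines in a binarized score is a horizontal projection profile: rows in which the share of black pixels exceeds a threshold are line candidates. This is only a sketch of the idea, not necessarily what identify_lines.py implements; names and the threshold are assumptions.

```python
def find_line_rows(image, threshold_ratio=0.5):
    """Return row indices whose black-pixel count exceeds a fraction of
    the image width -- candidate staff-line rows.
    `image` is a list of rows of 0/1 pixels (1 = black)."""
    width = len(image[0])
    return [y for y, row in enumerate(image)
            if sum(row) >= threshold_ratio * width]

# Tiny synthetic example: a 6x8 image with two fully black rows
img = [[0]*8,
       [1]*8,
       [0]*8,
       [0]*8,
       [1]*8,
       [0]*8]
print(find_line_rows(img))  # → [1, 4]
```

The gaps between the detected rows would then delimit the row images that separate_notes.py cuts out.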
The following parameters can be controlled:
- idms: location of the training data
- net: net to be trained (resnet50/googlenet)
- diroutput: location of the note outputs
- dirtest: location of the object detection output (separated notes)
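Collected in one place, the parameters might look as follows. This is only an illustrative sketch; the path values are placeholders, not the project's defaults.

```python
# Illustrative configuration sketch; keys mirror the parameters above,
# the path values are placeholders.
config = {
    "idms": "generate_dataset/png_objects",          # training data location
    "net": "resnet50",                               # or "googlenet"
    "diroutput": "output/notes",                     # where note outputs go
    "dirtest": "object_detection/separated_notes",   # separated notes to classify
}
```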
In 'process_output' execute the following script. The audio files can be accessed in 'process_output/audios'.
- output_to_wav.py
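The core of turning recognized notes into audio can be sketched with the standard library alone: render each note as a sine tone and append the PCM frames to a WAV file. This is a minimal sketch under assumed names and parameters, not the actual output_to_wav.py.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # assumed sample rate for this sketch

def note_to_frames(freq_hz: float, duration_s: float) -> bytes:
    """Render one note as 16-bit mono PCM frames (plain sine wave)."""
    n = int(SAMPLE_RATE * duration_s)
    return b"".join(
        struct.pack("<h", int(32767 * 0.5 *
                              math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
        for i in range(n)
    )

def write_tune(path: str, notes):
    """notes: iterable of (frequency_hz, duration_s) pairs."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        for freq, dur in notes:
            wav.writeframes(note_to_frames(freq, dur))

# Example: A4 then C5, a tenth of a second each
write_tune("demo.wav", [(440.0, 0.1), (523.25, 0.1)])
```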
In 'process_output' execute the following script. It will compare each label to the ground truth given in 'process_output/label_files' and print the result to the console.
- compare_labels.py
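The comparison could, for instance, be a per-label accuracy over aligned label lists. A minimal sketch (compare_labels.py may well use a different metric or alignment; the labels below are made up):

```python
def compare_labels(predicted, ground_truth):
    """Return (matches, total, accuracy) for two equal-length label lists.
    Illustrative sketch of a simple per-label comparison."""
    matches = sum(p == g for p, g in zip(predicted, ground_truth))
    total = len(ground_truth)
    return matches, total, matches / total if total else 0.0

m, t, acc = compare_labels(["c4", "e4", "g4", "c5"],
                           ["c4", "e4", "a4", "c5"])
print(f"{m}/{t} labels correct ({acc:.0%})")  # → 3/4 labels correct (75%)
```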