Hello mimbres, really appreciate the work on this repo and your paper!
If possible I would like to get your opinion on whether or not the following use case is viable for neural-audio-fp:
For each query, extract all matching reference segments (and their matching time frames), where the number of matches and of distinct reference files per query is unknown and variable.
Example:
- query000 [0, 24]s matches db_audio042 [3, 27]s
- query000 [25, 76]s also matches db_audio007 [55, 106]s
- query001 [3, 89]s matches db_audio110 [0, 86]s
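To make the use case concrete, the expected ground truth above could be represented as a mapping from each query to a variable-length list of matches; the structure and names here are hypothetical, not part of neural-audio-fp:

```python
# Hypothetical ground-truth structure: each query maps to a list of
# (db_file, query_span_seconds, db_span_seconds) tuples of variable length.
expected_matches = {
    "query000": [
        ("db_audio042", (0, 24), (3, 27)),
        ("db_audio007", (25, 76), (55, 106)),
    ],
    "query001": [
        ("db_audio110", (3, 89), (0, 86)),
    ],
}

# Number of true-positive segments per query is not fixed:
counts = {q: len(matches) for q, matches in expected_matches.items()}
print(counts)  # {'query000': 2, 'query001': 1}
```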
What I've done so far (all with default config):
Trained with your dataset mini;
Converted my reference and query files to wav with 8kHz sampling rate;
Generated fingerprints for the dataset mini, the reference audios (db) and the queries;
Removed all dummy_db implementations in eval_faiss.py, as you suggested in #38;
As you also suggested in #46, I ran eval_faiss.py with copies of my query.mm and query_shape.npy standing in for db.mm and db_shape.npy, just to make sure everything works; this returned 100% across the board, as expected. I ran it with python eval/eval_faiss.py --nogpu --k_probe 20 --test_ids all /projects/neural-audio-fp/logs/emb/attempt01/1/;
When running eval_faiss.py as above, or python run.py evaluate attempt01 1 --test_ids all --nogpu, with my actual db and query set, I get 0% on all tests.
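The query-as-db sanity check above can be reproduced outside the full pipeline. Below is a minimal NumPy stand-in for the inner-product search that eval_faiss.py performs with faiss (IndexFlatIP): when the db is searched against itself, every vector's top-1 hit should be its own index, i.e. 100% retrieval. The memmap paths in the comments are assumptions about the file layout:

```python
import numpy as np

# In the real setup, the fingerprints would be loaded from the memmap, e.g.:
#   shape = tuple(np.load("db_shape.npy"))
#   db = np.memmap("db.mm", dtype="float32", mode="r", shape=shape)
# Here we use synthetic unit-norm embeddings instead.
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 128)).astype("float32")
db /= np.linalg.norm(db, axis=1, keepdims=True)  # fingerprints are L2-normalized

# Brute-force inner-product search (what faiss.IndexFlatIP computes):
top1 = (db @ db.T).argmax(axis=1)

# Searching db against itself should give 100% self-retrieval.
print((top1 == np.arange(len(db))).mean())  # 1.0
```

If this check passes on your real query.mm but the query-vs-db run still returns 0%, the embeddings themselves are likely the problem (e.g. a model/domain mismatch between queries and references) rather than the faiss search.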
From what I've read in other issues, training on custom data is not mandatory unless the data is far too different from music. Is that correct?
Do you have any suggestions on how I can proceed? Or is this usage (queries with multiple true-positive matches) not what neural-audio-fp is meant for?
In case you want to see the dataset I am using: pexafb_easy_small was also generated from the FMA dataset, for the audio-fingerprinting-benchmarking-toolkit by Pexeso.
Thank you for reading.