Soft-DTW looks like the perfect solution for my deep-learning model. However, its speed is a major bottleneck in training (minibatches of 64 samples, with 2000 positions x 25 classes).
Would it be possible to add a parameter for greedy scoring that would scale better in time?
For example, I never need alignments with more than a few insertions/deletions. Perhaps this could be achieved by controlling the maximum recursion depth?
I think the right way to do it would be to add a band constraint, as done in Fast Global Alignment Kernels by @marcocuturi. This would restrict the recursion to pairs of observations not too far from the diagonal, which directly matches your "few insertions/deletions" use case. It should be fairly straightforward, but we haven't got around to doing it yet.
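To make the idea concrete, here is a minimal sketch of the soft-DTW recursion restricted to a Sakoe-Chiba-style band. This is not the library's API; `softmin`, `soft_dtw_banded`, and the `band` parameter are hypothetical names for illustration. Cells outside the band stay at +inf, so alignments needing more than `band` insertions/deletions are excluded, and the cost drops from O(n*m) per pair to roughly O(n*band):

```python
import numpy as np
from scipy.spatial.distance import cdist

def softmin(values, gamma):
    """Smoothed minimum: -gamma * log(sum(exp(-v / gamma))), computed stably."""
    values = np.asarray(values) / -gamma
    vmax = values.max()
    return -gamma * (vmax + np.log(np.exp(values - vmax).sum()))

def soft_dtw_banded(D, gamma=1.0, band=10):
    """Soft-DTW value for a precomputed distance matrix D (n x m),
    visiting only cells within `band` of the (rescaled) diagonal.
    """
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)  # cells outside the band stay +inf
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        center = int(round(i * m / n))  # diagonal position for this row
        for j in range(max(1, center - band), min(m, center + band) + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma
            )
    return R[n, m]

# Example: two random sequences, 2000 positions x 25 dimensions each.
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 25))
y = rng.standard_normal((2000, 25))
D = cdist(x, y, "sqeuclidean")  # pairwise squared Euclidean distances
print(soft_dtw_banded(D, gamma=1.0, band=10))
```

For the full training speed-up you would of course want the same band applied in the backward pass (and in a compiled inner loop rather than pure Python), but the banded recursion above is the core of the change.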