Dear Bonito developers,
I have some questions about the functions used to calculate the loss in Bonito, which I have unfortunately not been able to answer from the Bonito or Koi code or from related issues such as #101.
Can you explain in more detail how the forward and backward calculations work?
Do you use a transition matrix somewhere, similar to how it is done in HMMs?
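To make the two questions above concrete, here is my current mental model written as a minimal log-space forward recursion with an explicit transition matrix. This is purely my own sketch of the HMM-style computation I have in mind; the names and shapes (`T`, `S`, `log_emit`, `log_trans`) are my assumptions, not anything taken from the Bonito or Koi code:

```python
import torch

T, S = 8, 4  # time steps, states

# Per-step emission scores and a static state-transition matrix,
# both normalised in log space (an assumption for this toy example).
log_emit = torch.randn(T, S).log_softmax(-1)
log_trans = torch.randn(S, S).log_softmax(-1)

alpha = log_emit[0]  # forward variable at t = 0
for t in range(1, T):
    # alpha_new[j] = logsumexp_i(alpha[i] + log_trans[i, j]) + log_emit[t, j]
    alpha = torch.logsumexp(alpha.unsqueeze(1) + log_trans, dim=0) + log_emit[t]

log_Z = torch.logsumexp(alpha, dim=0)  # total log-score over all paths
print(log_Z)
```

Is the forward pass in Koi structured along these lines (with the backward pass being the mirror image used for gradients), or does it work quite differently?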
Are `stay_scores` and `move_scores` interleaved at any point, as is done with the blanks and labels in CTC loss?
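For reference, this is what I mean by "interleaved", using the standard CTC target expansion (my own illustration, unrelated to Koi internals):

```python
def interleave_with_blanks(labels, blank=0):
    """Standard CTC expansion: [a, b, c] -> [blank, a, blank, b, blank, c, blank]."""
    expanded = [blank]
    for label in labels:
        expanded += [label, blank]
    return expanded

print(interleave_with_blanks([1, 2, 3]))  # [0, 1, 0, 2, 0, 3, 0]
```

I am wondering whether `stay_scores` and `move_scores` are merged into a single score tensor in an analogous way before the forward/backward pass.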
I do not yet understand how the normalisation is calculated via `logZ_cu_sparse`. Can you provide further information here?
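My current (possibly wrong) understanding is that the normaliser is the log-sum-exp over all label paths, as in a CRF, and that `logZ_cu_sparse` is a sparse CUDA implementation of something like the following dense computation. This is only a sketch under that assumption, not the actual Koi algorithm:

```python
import torch

def dense_log_Z(scores):
    """scores: (T, S, S) per-step transition scores into each state.

    Returns the logsumexp over all state paths (the CRF normaliser).
    """
    T, S, _ = scores.shape
    alpha = torch.zeros(S)  # uniform start distribution (my assumption)
    for t in range(T):
        alpha = torch.logsumexp(alpha.unsqueeze(1) + scores[t], dim=0)
    return torch.logsumexp(alpha, dim=0)

print(dense_log_Z(torch.randn(8, 4, 4)))
```

If that picture is roughly right, where does the sparsity come in — is it from restricting which state transitions are reachable?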
It is very interesting how quickly Bonito reaches high accuracy after only a few epochs. Do you set any special constraints on the forward and backward calculations to achieve this?
For decoding the output during inference, the Dorado implementation appears to use a special variant of beam search. Does the version of the Viterbi algorithm used during training with Bonito work similarly?
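Again, to show what I have in mind, here is a plain Viterbi decode over per-step transition scores, written as my own toy sketch rather than the Bonito or Dorado implementation:

```python
import torch

def viterbi(scores):
    """scores: (T, S, S); returns the highest-scoring state path of length T."""
    T, S, _ = scores.shape
    alpha = torch.zeros(S)  # uniform start (my assumption)
    backptr = []
    for t in range(T):
        cand = alpha.unsqueeze(1) + scores[t]  # (from_state, to_state)
        alpha, best_prev = cand.max(dim=0)     # best predecessor per state
        backptr.append(best_prev)

    # Backtrace from the best final state.
    path = [int(alpha.argmax())]
    for best_prev in reversed(backptr):
        path.append(int(best_prev[path[-1]]))
    return path[::-1][1:]  # drop the dummy pre-sequence state

print(viterbi(torch.randn(8, 4, 4)))
```

Is the training-time decode essentially this max-product version of the same forward recursion, while Dorado's beam search keeps several hypotheses instead of one?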
Are there any papers or other resources that explain the specific idea behind the functions used to calculate the loss?
I hope that these questions can also help other developers. Thank you very much for your time.
Kind regards,
Nick