Decouple ext scorer init & inference & decoding for the convenience o… #122
Conversation
infer.py
Outdated
language_model_path=args.lang_model_path,
num_processes=args.num_proc_bsearch,
feeding_dict=data_generator.feeding)
if args.decoding_method == "ctc_beam_search":
If ctc_greedy doesn't require the external scorer, I think it's better to move this if after line 104.
Done
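The fix discussed above can be sketched as follows. This is a minimal illustration (not the project's actual code, and `init_ext_scorer` is a hypothetical stand-in for loading the language-model scorer): the scorer is initialized only when beam-search decoding is requested, so `ctc_greedy` runs without loading a language model.

```python
def init_ext_scorer(lang_model_path):
    # Stand-in for loading an external LM scorer (the real project loads
    # a language model here); returns a dummy scorer object.
    return {"lang_model_path": lang_model_path}

def decode_batch(probs_batch, decoding_method, lang_model_path=None):
    """Decode a batch of per-utterance label probabilities."""
    if decoding_method == "ctc_greedy":
        # Greedy decoding: argmax label per utterance; no scorer needed.
        return [max(probs, key=probs.get) for probs in probs_batch]
    if decoding_method == "ctc_beam_search":
        # Only beam search pays the cost of scorer initialization.
        scorer = init_ext_scorer(lang_model_path)
        # A real beam search would rescore hypotheses with `scorer`;
        # stubbed here as argmax for illustration.
        return [max(probs, key=probs.get) for probs in probs_batch]
    raise ValueError("unknown decoding method: %s" % decoding_method)
```

With this arrangement, the greedy path never touches `lang_model_path` at all.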
model_utils/model.py
Outdated
language_model_path, num_processes, feeding_dict):
"""Model inference. Infer the transcription for a batch of speech
utterances.
def infer_probs_batch(self, infer_data, feeding_dict):
Please keep the naming consistent. Like decode_batch_beam_search below, please change infer_probs_batch to infer_batch_probs.
Done
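The naming suggestion yields a consistent `<verb>_batch_<object>` scheme across the decoupled API: inference returns raw probabilities, and decoding is a separate call. A toy sketch (only the method names mirror the review; the bodies are hypothetical stubs):

```python
class DeepSpeech2Model(object):
    """Toy model illustrating the decoupled inference/decoding API."""

    def infer_batch_probs(self, infer_data, feeding_dict=None):
        # Stand-in forward pass: one time step per utterance with a
        # fixed two-label distribution (a real model runs the network).
        return [[[0.6, 0.4]] for _ in infer_data]

    def decode_batch_greedy(self, probs_batch, vocab_list):
        # Greedy decoding: take the argmax label at each time step.
        results = []
        for utt in probs_batch:
            idx = [max(range(len(step)), key=step.__getitem__)
                   for step in utt]
            results.append("".join(vocab_list[i] for i in idx))
        return results
```

Splitting the API this way also lets callers run `infer_batch_probs` once and try several decoding strategies on the same probabilities.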
rnn_size=args.rnn_layer_size,
use_gru=args.use_gru,
share_rnn_weights=args.share_rnn_weights)
keep_transcription_text=True)

batch_reader = data_generator.batch_reader_creator(
    manifest_path=args.tune_manifest,
    batch_size=args.batch_size,
    sortagrad=False,
    shuffle_method=None)
Why was the validation for the model path removed here?
LGTM!
Thanks. Updated.
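The kind of check the reviewer is asking about can be sketched as below. This is an assumption about what the removed validation did (the real script may check differently): fail fast if the pretrained model path does not exist, instead of erroring later during parameter loading.

```python
import os
import tempfile

def validate_model_path(model_path):
    # Hypothetical validation helper: raise early on a bad path rather
    # than failing deep inside parameter loading.
    if not os.path.exists(model_path):
        raise IOError("Invalid model path: %s" % model_path)
```

Restoring a check like this keeps error messages actionable when a user passes a wrong `--model_path`.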
Resolve #121
Fix #120
Fix #117