
Decouple ext scorer init & inference & decoding for the convenience o… #122

Merged (8 commits, Jan 15, 2018)

Conversation

@kuke (Contributor) commented on Jan 12, 2018:

Resolves #121
Fixes #120
Fixes #117

infer.py (outdated):
language_model_path=args.lang_model_path,
num_processes=args.num_proc_bsearch,
feeding_dict=data_generator.feeding)
if args.decoding_method == "ctc_beam_search":
Contributor:

If ctc_greedy doesn't require the external scorer, I think it's better to move this if after line 104.

Author (@kuke):

Done
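The suggested restructuring might look like the following minimal sketch. All names here (Model, init_ext_scorer, setup_decoder, the default alpha/beta values) are assumptions for illustration, not the actual repo API; the point is only that the scorer-init call moves under the beam-search branch so the greedy path never loads the language model.

```python
# Hypothetical sketch of the reviewer's suggestion: initialize the
# external scorer only when beam-search decoding is selected, since
# ctc_greedy never uses it.

class Model:
    def __init__(self):
        self.ext_scorer = None  # no scorer until explicitly initialized

    def init_ext_scorer(self, alpha, beta, language_model_path):
        # Stub: a real implementation would load the language model here.
        self.ext_scorer = (alpha, beta, language_model_path)

def setup_decoder(model, decoding_method,
                  alpha=2.5, beta=0.3, lang_model_path="lm.klm"):
    # The "if" moved below the greedy path: only beam search pays the
    # cost of loading the external language model.
    if decoding_method == "ctc_beam_search":
        model.init_ext_scorer(alpha, beta, lang_model_path)
    return model

greedy = setup_decoder(Model(), "ctc_greedy")
beam = setup_decoder(Model(), "ctc_beam_search")
print(greedy.ext_scorer is None)  # True: greedy path skips scorer init
```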

language_model_path, num_processes, feeding_dict):
"""Model inference. Infer the transcription for a batch of speech
utterances.
def infer_probs_batch(self, infer_data, feeding_dict):
Contributor:

Please keep the naming consistent: to match decode_batch_beam_search below, please rename infer_probs_batch to infer_batch_probs.

Author (@kuke):

Done
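After the rename, the decoupled flow the PR title describes could be exercised roughly as below. Only the method names infer_batch_probs and decode_batch_beam_search come from this review; the class name, the greedy decoder, and all bodies are placeholder assumptions, not the real implementation.

```python
# Sketch of the decoupled pipeline: (1) infer probabilities once,
# (2) optionally initialize the external scorer, (3) decode.
# Bodies are stand-ins; only the split into separate calls matters.

class DeepSpeech2Model:
    def __init__(self):
        self._ext_scorer = None

    def init_ext_scorer(self, alpha, beta, language_model_path):
        # Stub for external language-model scorer setup.
        self._ext_scorer = (alpha, beta, language_model_path)

    def infer_batch_probs(self, infer_data, feeding_dict):
        # Placeholder: one per-timestep distribution per utterance.
        return [[[0.6, 0.4], [0.1, 0.9]] for _ in infer_data]

    def decode_batch_greedy(self, probs_split, vocab_list):
        # Argmax symbol at each timestep (no CTC collapsing here).
        return ["".join(vocab_list[max(range(len(p)), key=p.__getitem__)]
                        for p in probs)
                for probs in probs_split]

model = DeepSpeech2Model()
probs = model.infer_batch_probs(infer_data=["utt-1"], feeding_dict={})
print(model.decode_batch_greedy(probs, vocab_list=["a", "b"]))  # ['ab']
```

Separating inference from decoding means the (expensive) forward pass runs once, and different decoding strategies or scorer hyperparameters can then be tried on the cached probabilities.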

rnn_size=args.rnn_layer_size,
use_gru=args.use_gru,
share_rnn_weights=args.share_rnn_weights)
keep_transcription_text=True)

batch_reader = data_generator.batch_reader_creator(
manifest_path=args.tune_manifest,
batch_size=args.batch_size,
sortagrad=False,
shuffle_method=None)

Contributor:

Why was the validation of the model path removed here?
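For context, the kind of check being asked about is a simple fail-fast guard. This is a hypothetical reconstruction; the removed validation code is not shown in the diff above, so the function and message below are illustrative only.

```python
import os

def validate_model_path(model_path):
    # Fail fast with a clear error instead of a confusing failure
    # deep inside model loading.
    if not os.path.exists(model_path):
        raise IOError("Invalid model path: %s" % model_path)
    return model_path

validate_model_path(os.getcwd())  # an existing path passes silently
```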

@pkuyym (Contributor) left a review:

LGTM!

Author (@kuke) replied:

Thanks. Updated.

@kuke merged commit 422f55a into PaddlePaddle:develop on Jan 15, 2018.
Jackwaterveg pushed a commit to Jackwaterveg/DeepSpeech referencing this pull request on Jan 29, 2022.