Replies: 1 comment
There is no direct parameter that supports this behavior, but you can implement it yourself.
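The original code snippet was lost from this reply, so here is a rough sketch of the dispatch pattern it describes: load one recognition predictor per language at startup and select one at request time. All names here (`MultiLangRecognizer`, the model directories, the stub loader) are hypothetical illustrations, not PaddleOCR or Paddle Serving API; in a real pdserving deployment each registry entry would hold a predictor built from its own inference-model directory.

```python
class MultiLangRecognizer:
    """Hypothetical dispatcher holding one recognition model per language."""

    def __init__(self, model_dirs):
        # model_dirs: e.g. {"en": "en_rec_infer", "ch": "ch_rec_infer"}
        # Load every model once at startup; requests only dispatch.
        self.models = {lang: self._load(path) for lang, path in model_dirs.items()}

    def _load(self, model_dir):
        # Placeholder loader: substitute real predictor construction here
        # (e.g. building a Paddle inference predictor from model_dir).
        return lambda image: f"recognized-with-{model_dir}"

    def recognize(self, image, lang="en"):
        # Pick the model for the requested language instead of
        # changing the config.yml model path at runtime.
        if lang not in self.models:
            raise ValueError(f"no recognition model registered for {lang!r}")
        return self.models[lang](image)


rec = MultiLangRecognizer({"en": "en_rec_infer", "ch": "ch_rec_infer"})
result = rec.recognize(b"fake-image-bytes", lang="ch")
```

The key point is that the model location from config.yml does not need to change at runtime: all candidate models are loaded up front, and the request's language parameter picks among the already-loaded predictors.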
I'm wondering whether it is possible to deploy multiple language models at once with pdserving.
Ideally I would like several recognition inference models available so I can conditionally run recognition based on a language parameter.
I see the inference model location is defined in config.yml. Is it possible to change this location at runtime? I am using the Python mode for deployment.