For each task, such as IMAGE_CLASSIFICATION, we need a web UI to demo it.
Therefore, when we create a task, we should create the corresponding web UI.
The UI should be general for all applications under the same task.
This website should have a configuration field for the backend service URL generated by singa-auto.
When we launch the inference service on singa-auto, singa-auto returns the URL (ip:port), which we then copy and paste into the GUI.
The GUI accepts user input (e.g., uploading an image or typing in text), and then sends the request to the backend service.
When the response arrives, it displays the results on the webpage.
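The request flow above can be sketched as a small client helper. This is only an illustration, assuming a hypothetical `/predict` endpoint and a base64-encoded JSON payload; the actual singa-auto inference API may use a different path and schema:

```python
import base64
import json


def build_predict_request(service_url: str, image_bytes: bytes):
    """Build the endpoint and JSON payload the GUI would send.

    Both the ``/predict`` path and the payload schema are assumptions
    made for this sketch; the real singa-auto inference service
    may expect a different format.
    """
    endpoint = service_url.rstrip("/") + "/predict"
    payload = json.dumps(
        {"image": base64.b64encode(image_bytes).decode("ascii")}
    )
    return endpoint, payload


# Example: the URL below stands in for the ip:port that singa-auto
# returns and that the user pastes into the GUI's configuration field.
endpoint, payload = build_predict_request("http://10.0.0.1:3005", b"\x89PNG")
```

A GUI could then POST `payload` to `endpoint` and render whatever fields the response contains.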
(optional) We aggregate all existing webpages into a single website as the Application Zoo.
Currently, once we launch the inference job, we can only query it from inside singa-auto.
This approach does not scale as the number of models and tasks grows.
To resolve this issue, we should deploy the application UI separately (outside of singa-auto).