Fix model type for classification #773
Conversation
inference/core/cache/serializers.py
Outdated
]
predictions = []
for pred in response.predictions:
    if type(pred) == str:
Please use `isinstance(pred, str)`.
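For context, a minimal illustration (not from the PR) of why `isinstance` is preferred over an exact `type` comparison:

```python
# type() demands an exact type match, so subclasses of str fail the check,
# while isinstance honors inheritance. Label is a hypothetical subclass.
class Label(str):
    """Hypothetical str subclass, e.g. a typed class-name wrapper."""

pred = Label("cat")
print(type(pred) == str)      # False - exact-type check rejects the subclass
print(isinstance(pred, str))  # True  - isinstance accepts it
```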
If this clause is supposed to handle multi-label classification, then it is the wrong approach altogether - please refer to the response schemas.
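For context on the reviewer's point, here is a hedged sketch of how the two classification response shapes could differ. The dictionaries below are assumptions for illustration; the authoritative definitions live in the repo's response schema modules.

```python
# Multi-class classification: predictions is a list of per-class objects.
multiclass_response = {
    "predictions": [
        {"class": "cat", "class_id": 0, "confidence": 0.91},
        {"class": "dog", "class_id": 1, "confidence": 0.09},
    ],
    "top": "cat",
}

# Multi-label classification: predictions is keyed by class name, with the
# classes that passed the threshold listed separately.
multilabel_response = {
    "predictions": {
        "cat": {"class_id": 0, "confidence": 0.91},
        "outdoor": {"class_id": 7, "confidence": 0.84},
    },
    "predicted_classes": ["cat", "outdoor"],
}

# Branching on the schema shape is more robust than checking whether a single
# prediction happens to be a string:
def is_multilabel(response: dict) -> bool:
    return isinstance(response.get("predictions"), dict)
```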
inference/core/cache/serializers.py
Outdated
for pred in response.predictions:
    if type(pred) == str:
        predictions.append(
            {"class": pred, "confidence": request.confidence}
This is not clear to me - why put the request confidence into the predicted class entry? It is the model that dictates the confidence of a prediction; the request only carries a confidence threshold.
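A hedged sketch of the fix the reviewer is pointing at, assuming multi-label entries keyed by class name as sketched above (field names are assumptions, not the PR's actual code):

```python
from types import SimpleNamespace

# Stub standing in for a real response object, for a runnable example.
response = SimpleNamespace(
    predictions={"cat": SimpleNamespace(class_id=0, confidence=0.91)}
)

predictions = []
for class_name, pred in response.predictions.items():
    # Confidence comes from the model's prediction object; the request's
    # confidence field is only the filtering threshold.
    predictions.append(
        {"class": class_name, "confidence": pred.confidence}
    )

print(predictions)  # [{'class': 'cat', 'confidence': 0.91}]
```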
That was a mistake, good catch.
)


class TestSerializersClassification(unittest.TestCase):
We do not use unittest but pytest - please adjust to the standard tooling in the repo.
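A minimal sketch of the requested conversion, assuming plain pytest functions replace the `unittest.TestCase` (test names and data are placeholders, not the PR's actual tests):

```python
import pytest

# pytest style: module-level functions with bare asserts instead of a
# unittest.TestCase subclass with self.assert* methods.

def test_classification_prediction_keeps_class_name() -> None:
    # given
    prediction = {"class": "cat", "confidence": 0.91}

    # then - pytest rewrites bare asserts to show rich failure diffs
    assert prediction["class"] == "cat"


@pytest.mark.parametrize("confidence", [0.0, 0.5, 1.0])
def test_confidence_is_preserved(confidence: float) -> None:
    # parametrize replaces hand-rolled loops inside unittest methods
    prediction = {"class": "cat", "confidence": confidence}
    assert prediction["confidence"] == confidence
```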
Description
Fixes the model_type for classification, which is causing downstream issues in Model Monitoring (missing label/misreporting). A hedged illustration of the kind of mismatch at stake follows.
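```python
# Illustration only - field values are assumptions, not taken from the PR.
# Before (hypothetical): the cached item carried the wrong model_type, so
# Model Monitoring could not attribute or label the predictions correctly.
bad_item = {"model_type": "object-detection", "predictions": [...]}

# After (hypothetical): the serializer reports the actual task type.
good_item = {"model_type": "classification", "predictions": [...]}
```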
Type of change
Please delete options that are not relevant.
How has this change been tested, please provide a testcase or example of how you tested the change?
Any specific deployment considerations
For example, documentation changes, usability, usage/costs, secrets, etc.
Docs