
Load by request in BaseTask property

@OmidSa75 OmidSa75 released this 09 Dec 08:52
· 8 commits to main since this release

Workflow

Instead of loading the model directly inside the task function, a BaseTask class is implemented with a property that returns the model. The model is still loaded on the first request, and all functionality is the same as in v1.0.0.

A property is implemented to return the AI model.

import celery


class BaseTask(celery.Task):
    def __init__(self) -> None:
        super().__init__()
        self._ai_model = None

    @property
    def ai_model(self):
        if self._ai_model is None:  # load the model on first access
            self._ai_model = load_model()
            print("Load AI Model")
        return self._ai_model
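The lazy-load behavior of the property can be sketched in plain Python, with a stand-in object replacing the real `load_model()` (hypothetical names, no Celery dependency):

```python
class LazyModelHolder:
    """Mimics the BaseTask pattern: cache the model on first access."""
    load_calls = 0  # counts how many times the "model" is actually loaded

    def __init__(self):
        self._ai_model = None

    @property
    def ai_model(self):
        if self._ai_model is None:  # load only on first access
            LazyModelHolder.load_calls += 1
            self._ai_model = object()  # stand-in for load_model()
        return self._ai_model


holder = LazyModelHolder()
first = holder.ai_model   # triggers the load
second = holder.ai_model  # reuses the cached model
```

Repeated accesses return the same cached object, so the expensive load happens exactly once per worker process.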

The inference_model function is the same as before, except the model-loading section has been removed (shown commented out below):

def inference_model(self):
    # if not hasattr(_self, 'ai_model'):
    #     _self.ai_model = _self.load_model()
    #     print('Load AI model')

    input_x = [torch.rand(3, 300, 400).to(
        device), torch.rand(3, 500, 400).to(device)]
    prediction = self.ai_model(input_x)
    print('Hi, this is an inference function')
    return str(type(prediction))

And the task is registered as follows:

    celery_app.task(name='inference_model', bind=True,
                    base=BaseTask)(inference_model)
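The call-style registration above applies the configured decorator to an existing function, which is equivalent to stacking `@celery_app.task(...)` on its definition. A minimal plain-Python sketch of this pattern (hypothetical `task`/`registry` names, no Celery required):

```python
registry = {}  # stand-in for Celery's task registry


def task(name):
    """A parameterized decorator, like celery_app.task(name=...)."""
    def decorator(func):
        registry[name] = func  # register the function under the given name
        return func
    return decorator


def inference_model():
    return "prediction"


# Register after the fact, mirroring
# celery_app.task(name='inference_model', ...)(inference_model)
task(name='inference_model')(inference_model)
```

This style is useful when the function is defined elsewhere and you want to keep the task wiring separate from the business logic.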