
fastt5 not working with FastAPI gunicorn and docker #64

Open

kklivil opened this issue Nov 17, 2022 · 1 comment

kklivil commented Nov 17, 2022

I am using fastt5 with the FastAPI web framework and gunicorn as the server inside a Docker container. The server doesn't start up completely, i.e. it hangs during the startup process.

Command to start the server: gunicorn app.my_app:app --bind 0.0.0.0:${PORT} --reload --timeout 120 --access-logfile -
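For reference, a minimal sketch of the kind of app that reproduces this setup (the module path app/my_app.py comes from the command above; the model name t5-small and the endpoint are assumptions, though the ~242 MB download in the log below matches t5-small's weights):

```python
# app/my_app.py -- hypothetical minimal reproduction; model name assumed.
from fastapi import FastAPI
from transformers import AutoTokenizer
from fastT5 import export_and_get_onnx_model

# This runs at import time, i.e. while the gunicorn worker is booting:
# download -> export to ONNX -> quantize, matching the log output below.
model = export_and_get_onnx_model("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

app = FastAPI()

@app.get("/generate")
def generate(text: str):
    tokens = tokenizer(text, return_tensors="pt")
    out = model.generate(input_ids=tokens["input_ids"],
                         attention_mask=tokens["attention_mask"],
                         num_beams=2)
    return {"result": tokenizer.decode(out.squeeze(), skip_special_tokens=True)}
```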

requirements.txt:

anyio==3.6.2
certifi==2022.9.24
charset-normalizer==2.1.1
click==8.1.3
fastapi==0.85.1
filelock==3.8.0
h11==0.14.0
huggingface-hub==0.10.1
idna==3.4
numpy==1.23.4
omegaconf==2.2.3
packaging==21.3
pydantic==1.10.2
pyparsing==3.0.9
PyYAML==6.0
regex==2022.9.13
requests==2.28.1
sentencepiece==0.1.97
sniffio==1.3.0
starlette==0.20.4
tokenizers==0.13.1
torch==1.12.1
tqdm==4.64.1
transformers==4.23.1
typing_extensions==4.4.0
urllib3==1.26.12
uvicorn==0.19.0
gunicorn==20.1.0
httptools==0.5.0
python-dotenv==0.21.0
uvloop==0.17.0
watchfiles==0.18.0
websockets==10.4
fastt5==0.1.4
six==1.16.0

There is NO error in the output during startup; it just hangs.

[2022-11-17 14:19:38 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2022-11-17 14:19:38 +0000] [7] [INFO] Using worker: sync
[2022-11-17 14:19:38 +0000] [8] [INFO] Booting worker with pid: 8
Downloading: 100%|██████████| 1.20k/1.20k [00:00<00:00, 473kB/s]
Downloading: 100%|██████████| 242M/242M [00:28<00:00, 8.41MB/s] 
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
Exporting to onnx... |################################| 3/3
Quantizing... |################################| 3/3
Downloading:   0%|          | 0.00/792k [00:00<?, ?B/s]Setting up onnx model...
Done!
Downloading: 100%|██████████| 792k/792k [00:00<00:00, 1.41MB/s] 

Could you please have a look into this?
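A minimal sketch of one thing to try, assuming the hang is tied to the export and ONNX session setup running at module import time while the worker boots: defer the model load into a FastAPI startup hook so it runs inside each worker process after gunicorn has forked (model name t5-small assumed, as above):

```python
# Hypothetical workaround sketch -- move model loading out of import time.
from fastapi import FastAPI
from transformers import AutoTokenizer
from fastT5 import export_and_get_onnx_model

app = FastAPI()
model = None
tokenizer = None

@app.on_event("startup")
def load_model():
    # Executed inside each worker after the fork, not during module import.
    global model, tokenizer
    model = export_and_get_onnx_model("t5-small")
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
```

Running the export once at image-build time and then loading the pre-exported model with fastT5's get_onnx_model would also avoid repeating the download/export/quantize step on every worker boot.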

cm2435 commented Dec 23, 2022

Bumping this. Same issue here.
