Question about Latency Limit #19
Comments
Hi Zhipeng, that should be fine, because we will test on better hardware. If you want to double-check, you can submit this model now and I can test the latency on my end. This doesn't have to be your final submission, but you should follow the submission instructions so that it is easy for me to run it.
OK, thank you so much. This is our bundle id 0xbbe65de9855b4a058e1d333f28c46dad (currently it can only be read by the mrqa group).
The bundle of run-predictions/predictions.json is predictions-LatencyTest (bundle id 0xefa76771a566486096af8f39c241793e).
Hello Robin,
Hi Zhipeng, I encountered the following error when running your code:
It seems that you are trying to install flask but it fails. Note that when we run the submissions, we run them in a container that does not have network access. I recommend that you have everything installed on the docker image you are using, so that you don't have to install anything as part of your code.
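As a sketch of the fix being suggested (assuming the submission image is built with Docker; the base image tag kevin898y/tensorflow_py36 and the Flask==0.12.1 pin are taken from this thread, everything else is illustrative), dependencies can be installed at build time, when network access is still available, so the submission script installs nothing at run time:

```dockerfile
# Hypothetical Dockerfile sketch — not the participant's actual setup.
FROM kevin898y/tensorflow_py36

# Network IS available during `docker build`, but NOT when the
# evaluation container runs, so install everything here rather
# than inside the submission script.
RUN pip install Flask==0.12.1
```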
I see. Yes, I'm using the same docker image. This is very strange; let me try to get some help from the codalab people to understand what happened. Comparing your bundle with my run, the difference is that for yours:
Requirement already satisfied: Werkzeug>=0.7 in /usr/local/lib/python3.6/dist-packages (from Flask==0.12.1)
Whereas in mine it says it's not satisfied, tries to download it, and fails.
Thanks.
Hello Robin,
I am still debugging this. Just to help me out: is the docker image kevin898y/tensorflow_py36 something you created? Have you changed the docker image recently? |
Hi Zhipeng, could you try submitting another version that uses the …
Actually, we think we have found the issue inside codalab that was causing the problem. Once it is fixed I will try again and hopefully it will work. So you do not need to take further action at this time. |
Ok, thank you. |
Hello Robin,
Hi Zhipeng, yes, in the worst case we can just run on the public codalab instances. If it's not too much trouble, could you try submitting another bundle with …
Ok. I will submit another bundle right now. |
Hello Robin,
I am trying it now, initially it looks like it works! I will keep you updated. Please still finish the normal submission procedure. |
Ok, thank you for the reminder. Our final submission is already prepared, but I didn't add --no-deps to the script. Should I upload a new one with all pip installs using --no-deps?
Yes, that would be great if you can have your final submission use --no-deps. |
Ok. I'm already uploading a new one. |
Hello Robin,
Dear MRQA group,
We tested our model (a single model) on the out-of-domain dataset (the official data on Codalab) with the official predict_server.py on Codalab with one GPU (Tesla K80) and got the result we expected. But the time it took was 3 h (about 1.12 s per question), so I'd like to confirm that our model meets your latency limit.
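As a quick sanity check of the figures above (the 3-hour total and the 1.12 s/question average come from this thread; the implied question count is derived here, not an official number):

```python
# Back-of-the-envelope check of the reported latency figures.
total_seconds = 3 * 60 * 60   # 3 hours of wall-clock time (reported)
per_question = 1.12           # average latency in seconds (reported)

# Derived, not official: how many questions these two figures imply.
implied_questions = total_seconds / per_question
print(f"{implied_questions:.0f} questions implied")  # roughly 9643
```

The two reported numbers are consistent with a dev set on the order of ten thousand questions, which is the scale the per-question average would be computed over.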
Best,
Zhipeng