
How can I run OmniGen online? #113

Open
R-iscool opened this issue Nov 9, 2024 · 4 comments

Comments


R-iscool commented Nov 9, 2024

I don't really know how to run this on Colab. I put in this code:

!git clone https://github.com/staoxiao/OmniGen.git
%cd OmniGen
!pip install -e .
!pip install gradio spaces
!python app.py --share

This is the last output it gave me; after that, it stopped:
[screenshot of the Colab output attached]

By the way, I ran this on Colab on a phone (Samsung M34).
Also, is it possible to use OmniGen online? If so, please tell me how.

@HarjotSingh-b18055

I tried running it on Colab. It seems the RAM Colab allots is not enough to run this; my notebook crashed at this step due to insufficient memory. So it doesn't appear to be possible on Colab's free-tier resources.
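For anyone who wants to verify the memory ceiling themselves, here's a minimal sketch for checking a Colab instance's RAM (psutil ships in Colab's default image):

# Report total and currently available RAM on the instance
import psutil

mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / 1e9:.1f} GB")
print(f"Available RAM: {mem.available / 1e9:.1f} GB")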


R-iscool commented Nov 9, 2024

Can you suggest an alternative method to run it online, or a fix to run it on Colab?

@R-iscool R-iscool changed the title How can I run OmniGen on Colab? How can I run OmniGen online? Nov 9, 2024
@HarjotSingh-b18055

> Can you suggest an alternative method to run it online, or a fix to run it on Colab?

@R-iscool Running online is not a problem; the problem is resources. The resources OmniGen demands are not available for free. If you can pay, then running it online is not a problem.

@able2608

For anyone attempting to run on the Colab free tier, you'll need to use PR #151 instead of the main repo because of GPU resource constraints. app.py also needs to be patched to use the pre-quantized weights provided by the PR's author, so that Colab's RAM doesn't run out when loading the model. If you're just trying it out and not doing anything serious with it, use the Hugging Face demo instead. You might only be able to generate one or two images every 2 hours due to the per-user ZeroGPU compute-time limit, but for now it isn't worth doing all the manual work only to get roughly 30 minutes per image on Colab (T4s aren't that new anymore, and they're definitely slow even compared to today's consumer GPUs).
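As a minimal sketch of checking out that PR on Colab: GitHub exposes every pull request's head at the ref pull/<id>/head, so you can fetch it without knowing the author's fork (the local branch name pr-151 here is arbitrary, and the app.py patch for the pre-quantized weights still has to be applied per the PR's instructions):

!git clone https://github.com/staoxiao/OmniGen.git
%cd OmniGen
# Fetch PR #151's head into a local branch and switch to it
!git fetch origin pull/151/head:pr-151
!git checkout pr-151
!pip install -e .
!pip install gradio spaces
!python app.py --share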
