[Community Pipelines] #841
Comments
So there is no init image support or mask support in this pipeline?
Hey @WASasquatch, sorry, I don't really understand your question here - what exactly are you referring to?
Hey @patrickvonplaten, I'm just curious why it's set up this way. From a "community examples" perspective, coming across this, I wonder: "how do I make this work in a typical diffusers workflow?", using img2img and inpainting for refinement and experimentation, not just txt2img, which feels more like a demo or a starting point for a lot of techniques (if not starting with an init image).
Looking at all these pipelines, things are quickly falling apart, imo. Pipelines that each do one little thing, each being its own pipeline to manage, is horribly counter-productive. This falls back to the fundamental issue with diffusers itself: needing to manage tons of pipes in code just to take on certain tasks (like img2img or inpainting). And now what about CLIP guided diffusion (still limited to just text2img), or the wildcard pipeline being worked on just for wildcards, again based on the limited DiffusionPipeline? The idea behind community pipelines does not seem fleshed out. If this were a modular plugin idea, where a pipeline could have plugins that add functionality, it would make far more sense. None of this seems to think of the end developers/users and how this is implemented under an efficient, minimal API philosophy. Give it a couple months, and in order to use all the cool things community pipelines have to offer, you'll have a script 10x the size it needs to be just to implement all these pipes, put in the logic to use the right pipes for the right tasks, etc, etc - essentially building your own API, off an API, to do something... Like you released the mega community pipeline, which is still kinda useless. It's just a community pipeline which conveniently has img2img and inpainting, but it is incompatible with the features one may want, like CLIP Guided Diffusion, which stays limited to just text2img because it is its own pipe. So what is the point of Mega if it's not useful for any of this?
Hey @patrickvonplaten, sorry if it already exists - I did not find it. I was wondering: how would you like a pipeline with multilingual support? Something like:
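(A rough sketch of the idea, not an existing pipeline; the model names below are assumptions: translate the prompt to English first, then run a regular Stable Diffusion pipeline.)

```python
from transformers import pipeline as hf_pipeline
from diffusers import StableDiffusionPipeline

# Hypothetical example: translate any-language prompts to English before diffusion.
translator = hf_pipeline("translation", model="Helsinki-NLP/opus-mt-mul-en")
sd = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

prompt = "Ein Astronaut reitet ein Pferd auf dem Mond"
english_prompt = translator(prompt)[0]["translation_text"]
image = sd(english_prompt).images[0]
```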
Great idea! We could/should definitely add such a pipeline :-)
@patrickvonplaten, great! I could give it a try if it is ok...
I recently developed a community pipeline and found it a bit awkward to develop and test before it gets pulled into main. I think the issue is with how the custom_pipeline argument is resolved. If you provide just the pipeline file name, like the documentation suggests, behind the scenes it seems to look for that file on GitHub on the main branch, which won't work during development. If you pass your Hub repo name, it looks for a pipeline file in that repo. I'm sure my problems were mostly self-inflicted, but I didn't see an obviously better workflow. Is there a suggested workflow for developing community pipelines?
True, it'd make sense to allow this! Could you maybe open a separate issue for it? I'll try to have a look into it soon :-)
See #1141 for further discussion
Hi there, I am considering adding a new pipeline. Also, I didn't find an obvious way to test this locally using the custom_pipeline argument. Thanks for all the great work you are doing to democratize ML.
Thanks @patrickvonplaten! Regarding the issue of inheritance, do you have a recommendation for how to approach this?
@teticio could you maybe open a first draft PR to show the design? I'm not 100% sure exactly what the inheritance problem looks like at the moment.
Sure @patrickvonplaten, I'm actually in the process of doing this, although I am integrating it into the main repo. Once it is done (next day or two) you'll be able to see it, and it will be clear whether there is a good way to move it to the community pipelines or, if not, maybe it can stay where it is.
Wow that's awesome - thanks for being so quick on that!
Hi @patrickvonplaten, the PR is ready to go here: #1334, for when you have a chance to look at it.
Very cool, just reviewed it :-)
Hi @patrickvonplaten, thanks for this great guide. I wonder if it makes sense to allow loading custom pipelines (CP) not only from a filename, but also directly from a class defined in the current session. A use case for that would be a notebook where one can both write a CP class and test it in situ.
Hey @RELNO, wouldn't `pipe = CustomPipeline.from_pretrained("xxx/yyy")` make more sense then? Think this should work :-)
Hi Diffusers Pipeline Community, @patrickvonplaten: recently our Intel PyTorch extension (IPEX) has enabled Flash Attention, which can significantly speed up Stable Diffusion inference on CPU with BF16 precision. Since it requires running torch.jit.trace on the UNet and an additional installation of IPEX, I wonder if there is any chance that we can contribute a pipeline to accelerate Stable Diffusion inference on CPU? I am not sure whether adding the above steps to the pipeline is allowed or not. Thanks!
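(Not an official recipe - just a rough sketch of the general idea with `ipex.optimize`; the flash-attention path described above also involves `torch.jit.trace`, which is omitted here, and the model id is illustrative.)

```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Optimize the UNet for CPU inference in bfloat16 with IPEX.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)

# Run inference under CPU autocast so activations stay in bfloat16.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```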
Hey @Wei-Lin-Intel, feel free to add a community pipeline :-)
Hey @patrickvonplaten, when opening the link you used above, it gives an error.
Hey @patrickvonplaten, I'm trying to incorporate a custom pipeline named lpw-stable-diffusion_xl into my application. I've noticed that the following snippet does not produce the desired outcome. In my search for a workaround, I stumbled upon a method that seems promising. However, I'm facing a challenge with specifying the correct argument for the first parameter in that method.
Community Pipelines
As of `diffusers==0.4.0`, you can make use of Community Pipelines. The goal of community pipelines is to have a community-driven offering of an exotic variety of features built on top of `diffusers`, maintained by the community.

How to use community pipelines
Load community pipelines by passing the `custom_pipeline` argument to `DiffusionPipeline`, set to one of the files in diffusers/examples/community.
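For example, a load along these lines (the checkpoint and community file names below are illustrative):

```python
from diffusers import DiffusionPipeline

# "lpw_stable_diffusion" is one of the files in examples/community;
# the checkpoint can be any compatible Stable Diffusion repo.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="lpw_stable_diffusion",
)
```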
Contribute to the community pipelines by sending a PR with your own pipelines to diffusers/examples/community; we will merge them quickly.
Why community pipelines?
While the code of community pipelines will not be part of official PyPI releases, the code is usable from the `diffusers` package >= 0.4.0. The reason community pipelines live under the examples/community folder rather than among the officially maintained pipelines is flexibility and development speed: by providing community pipelines, we allow community members to contribute and share their work quickly while making it easily accessible to the rest of the ecosystem.
What pipelines to contribute?
Feel free to contribute any pipeline that is exciting to you! If you want some inspiration, we've compiled ideas in different issues here, but you can do other pipelines as well.
How to contribute to a community pipeline?
Let's make an example! Say you want to define a pipeline that just does a single forward pass through a U-Net and then calls a scheduler only once (note: this doesn't make any sense from a scientific point of view, but it serves as an example of how things work under the hood).
Cool! So you open your favorite IDE and start creating your pipeline 💻. First, what model weights and configurations do we need? We have a U-Net and a scheduler, so our pipeline should take a U-Net and a scheduler as arguments. Also, as stated above, you'd like to be able to load the weights and the scheduler configuration from the Hub and share your code with others, so we'll inherit from `DiffusionPipeline`:
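(A minimal sketch; the class name is illustrative and not necessarily the exact code that was merged.)

```python
from diffusers import DiffusionPipeline


class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
    # The pipeline needs exactly two components: a U-Net and a scheduler.
    def __init__(self, unet, scheduler):
        super().__init__()
```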
Now, we must save the `unet` and `scheduler` in a config file so that you can save your pipeline with `save_pretrained`. Therefore, make sure you add every component that is save-able to the `register_modules` function:
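(Continuing the sketch above.)

```python
from diffusers import DiffusionPipeline


class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
    def __init__(self, unet, scheduler):
        super().__init__()
        # register_modules records unet and scheduler in the pipeline config,
        # so save_pretrained / from_pretrained can serialize and restore them.
        self.register_modules(unet=unet, scheduler=scheduler)
```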
Cool, the init is done! 🔥 Now, let's go into the forward pass, which we recommend defining as `__call__`. Here you're given all the creative freedom there is. For our amazing "one-step" pipeline, we simply create a random image and call the unet once and the scheduler once:
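(A sketch of such a one-step `__call__`, continuing the class above; tensor shapes follow the U-Net config, and this is illustrative rather than the exact merged code.)

```python
import torch

from diffusers import DiffusionPipeline


class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
    def __init__(self, unet, scheduler):
        super().__init__()
        self.register_modules(unet=unet, scheduler=scheduler)

    def __call__(self):
        # Start from pure noise with the spatial size the U-Net expects.
        image = torch.randn(
            (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size)
        )
        timestep = 1

        # One U-Net forward pass ...
        model_output = self.unet(image, timestep).sample
        # ... and one scheduler step.
        scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample

        return scheduler_output
```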
Cool, that's it! 🚀 You can now run this pipeline by passing a unet and a scheduler to the init:
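(For instance; the U-Net arguments here are illustrative.)

```python
from diffusers import DDPMScheduler, UNet2DModel

# Any compatible U-Net / scheduler pair will do.
scheduler = DDPMScheduler()
unet = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)

pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
output = pipeline()
```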
But what's even better is that you can load pre-existing weights into the pipeline if they exactly match your pipeline structure. This is, e.g., the case for https://huggingface.co/google/ddpm-cifar10-32, so that we can do the following:
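(A sketch, reusing the class defined above; that checkpoint ships a unet and a scheduler matching our signature.)

```python
# The components stored in the repo are loaded into our custom pipeline class.
pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32")
output = pipeline()
```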
We want to share this amazing pipeline with the community, so we would open a PR to add the following code under one_step_unet.py to https://github.com/huggingface/diffusers/tree/main/examples/community.

Our amazing pipeline got merged here: #840.
Now everybody that has `diffusers >= 0.4.0` installed can use our pipeline magically 🪄 as follows:
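(A sketch; `one_step_unet` refers to the community file name merged above.)

```python
from diffusers import DiffusionPipeline

# The weights come from the checkpoint, the code from examples/community.
pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="one_step_unet"
)
output = pipeline()
```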
Another way to upload your `custom_pipeline`, besides sending a PR, is uploading the code that contains it to the Hugging Face Hub, as exemplified here.

Try it out now - it works!
In general, you will want to create much more sophisticated pipelines, so we recommend looking at existing pipelines here: https://github.com/huggingface/diffusers/tree/main/examples/community
IMPORTANT: You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from `DiffusionPipeline`, as this will be automatically detected.

How do community pipelines work?
A community pipeline is a class that has to inherit from `DiffusionPipeline` (https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline) and that has been added to https://github.com/huggingface/diffusers/tree/main/examples/community.

The community can load the pipeline code via the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`. See the docs here: https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.custom_pipeline

This means:
The model weights and configurations are loaded from the `pretrained_model_name_or_path` argument (https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path), whereas the code that powers the community pipeline is defined in a file added to https://github.com/huggingface/diffusers/tree/main/examples/community.

Now, it might very well be that only some of your pipeline component weights can be downloaded from an official repo. The other components should then be passed directly to the init, as is the case for the CLIP guidance notebook here.
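A sketch of that pattern for the CLIP-guidance case (the CLIP checkpoint and extra component names are assumptions for illustration):

```python
from transformers import CLIPFeatureExtractor, CLIPModel

from diffusers import DiffusionPipeline

clip_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
clip_model = CLIPModel.from_pretrained(clip_id)
feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_id)

# The Stable Diffusion weights come from the official repo, while clip_model and
# feature_extractor are passed directly to the community pipeline's init.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
)
```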
The magic behind all of this is that we load the code directly from GitHub rather than from the official `diffusers` packages. You can check it out in more detail by following the functionality defined in diffusers/src/diffusers/pipeline_utils.py (line 405 at commit d3eb3b3).