Hello folks 👋
What❓
We’re excited to bring a contributor-focused program to you! In this program, we want to collaborate with serious contributors to Diffusers and reward them for the time and energy they spend with us. Keep reading this thread if that sounds interesting to you.
To ease the process, we have compiled a list of issues (more below) where we believe we require the most assistance. These issues also require contributors to consider solid and robust API design that is sustainable and maintainable in the long term.
To be considered for this program, we kindly request that you select only one issue at a time, engage with us by proposing a potential design RFC, and proceed from there. For now, we have a one-contributor-per-issue policy, but we can always make exceptions.
Issue list 🐞
- Qwen-Image-Edit Inferior Results Compared to ComfyUI #12216
- Fal Flashpack #12550
- Qwen Image prompt encoding is not padding to max seq len #12075
- Qwen Image: txt_seq_lens is redundant and not used #12344
- [Qwen-image] encoder_hidden_states_mask is not used #12294
- [Qwen-image-edit] Batch Inference Issue / Feature Request #12458
- Context parallel bug when using QwenImage #12568 (@Ratish1: fix(hooks): Add padding support to context parallel hooks #12595)
- [feature] implement TaylorSeer #12569
- [feature] implement TeaCache #12589
- [feature] help us implement unified attention #12570
- Add KV Cache for Autoregressive Inference #12600
- Broken group offloading using block_level #12319
- WanVACEPipeline - doesn't work with apply_group_offloading #12096
- How about forcing the first and last block on device when groupoffloading is used? #11966
- Using leaf_level together with delete_adapters can get some errors. #12396
We will mark the issues accordingly once contributors start working on them so that it’s easier to track.
Working together 🤝
After we have had a chance to interact with potential candidates through issues and PRs, as well as discussions on our GitHub, we will make our best judgment and invite them to a shared Slack channel to facilitate better collaboration.
To ensure everyone has an opportunity to contribute with a balanced workload, we kindly ask that contributors finish one issue before claiming another. If you have a PR open but really want to work on a second issue, please check in with us: we can either try to wrap up the existing PR or make an exception where it makes sense. Thanks for understanding!
What’s in it for you? 👀
This is quite flexible. All selected participants are entitled to the “Diffusers MVP” title, a certificate, HF compute credits, and recommendation letters. You will also be invited to a shared channel on our Slack workspace where we can discuss further mutually beneficial collaboration opportunities. We're starting this program to identify open-source contributors who would like to be more deeply involved in the strategy and direction of Diffusers. We'll be happy to provide tools that further enable your contributions and growth, such as compute grants; we're flexible.