Add color correction toggle for img2img #936

Merged: 1 commit into Sygil-Dev:dev on Sep 9, 2022

Conversation

xaedes (Contributor) commented Sep 9, 2022

Color correction is already used for loopback to prevent color drift, with the first image as the correction target (#698).
The new toggle allows color correction to be used even without loopback mode.
It helps keep the colors similar to the input image, which is useful when regenerating parts of an image.
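The correction itself is histogram matching of the generated image against the target. A minimal numpy-only sketch of the idea (the real implementation relies on skimage/cv2 and works in LAB color space; the function names here are illustrative):

```python
import numpy as np

def match_histogram_channel(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Remap one channel of `source` so its value distribution matches `target`."""
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(target.ravel(), return_counts=True)
    # Cumulative distributions of both channels, normalized to [0, 1].
    s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
    t_cdf = np.cumsum(t_counts).astype(np.float64) / target.size
    # For each source value, pick the target value at the closest CDF position.
    matched = np.interp(s_cdf, t_cdf, t_values)
    return matched[s_idx].reshape(source.shape)

def apply_color_correction(image: np.ndarray, correction_target: np.ndarray) -> np.ndarray:
    """Match each RGB channel of `image` (HxWx3 uint8) to `correction_target`."""
    out = np.stack([match_histogram_channel(image[..., c], correction_target[..., c])
                    for c in range(3)], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```

With loopback, `correction_target` stays fixed to the first image, which is what prevents the colors from drifting further on each iteration.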

codedealer (Collaborator) left a comment:

Seems to be working

@codedealer codedealer merged commit 0706294 into Sygil-Dev:dev Sep 9, 2022
hlky added a commit that referenced this pull request Sep 18, 2022
* resolve conflict with master

* - Added an option to select custom models instead of just using the default one. To use a custom model, place your .ckpt file in "models/custom"; the UI will detect it and let you switch between Stable Diffusion and your custom model. Give the file a name that is easy to distinguish from other models, because that name will be shown in the UI.
- Implemented a basic Text To Video tab; it is very basic right now and will continue to be improved.
- Improved model loading; you should now see errors about the model not being loaded correctly less frequently.

* fix: advanced editor (#827), close #811

refactor js_Call hook to take all gradio arguments

* Added num_inference_steps to the config file and fixed incorrect calls to the config file from the txt2vid tab, which was calling txt2img instead.

* update readme as per installation step & format

* proposed streamlit code organization changes

I want people of all skill levels to be able to contribute. This is one way the code could be split up, with the aim of making it easy to understand and contribute to, especially for people on the lower end of the skill spectrum. All I've done is split things; I think renaming and reorganizing are still needed.

* Fixed missing diffusers dependency  for Streamlit

* Streamlit: Allow user defaults to be specified in a userconfig_streamlit.yaml file.

* Changed Streamit yaml default configs

Changed `update_preview_frequency` from every 1 step to every 5 steps. This results in a massive gain in performance (roughly going from 2-3 times slower to only 10-15% slower) while still showing good image generation output.

Changed the default GFPGAN and realESRGAN settings to be off. That way, users can decide whether to turn them on and which images to apply them to.

* Made sure img2txt and img2img checkboxes respect YAML defaults

* Move location of user file to configs/webui folder

* Fixed the path in webui_streamlit.py

* Display Info and Stats when render is complete, similar to what Gradio shows.

* Add info and stats to img2img

* chore: update maintenance scripts and docs  (#891)

* automate conda_env_name as per name in yaml

* Embed installation links directly in README.md

Include links to Windows, Linux, and Google Colab installations.

* Fix conda update in webui.sh for pip bug

* Add info about new PRs

Co-authored-by: Hafiidz <3688500+Hafiidz@users.noreply.github.com>
Co-authored-by: Tom Pham <54967380+TomPham97@users.noreply.github.com>
Co-authored-by: GRMrGecko <grmrgecko@gmail.com>

* Improvements to the txt2vid tab.

* Urgent fix to PR #860

* Update attention.py

* Update FUNDING.yml

* when in outcrop mode, mask added regions and fill in with Voronoi noise for better outpainting

* frontend: display current device info (#889)

Displays the current device info at the bottom of the page.

For users who run multiple instances of `sd-webui` on the same system (for multiple GPUs), it helps to know which of the active `CUDA_VISIBLE_DEVICES` is being used.

* Fixed aspect ratio box not being updated on txt2img tab, for issue 219 from old repo (#812)

* Metadata cleanup - Maintain metadata within UI (#845)

* Metadata cleanup - Maintain metadata within UI

This commit, when combined with Gradio 3.2.1b1+, maintains image
metadata as an image is passed throughout the UI. For example,
if you generate an image, send it to Image Lab, upscale it, fix faces,
and then drag the resulting image back in to Image Lab, it will still
remember the image generation parameters.

When the image is saved, the metadata will be stripped from it if
save-metadata is not enabled. If the image is saved by *dragging*
out of the UI on to the filesystem it may maintain its metadata.

Note: I have run into UI responsiveness issues when upgrading Gradio. There may be some Gradio queue management issues. *Without* the
Gradio update this commit maintains current functionality, but
will not keep metadata when dragging an image between UI components.
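The mechanics of carrying generation parameters in PNG text chunks, and stripping them when save-metadata is off, can be sketched with Pillow; the `SD:` key prefix and the `params` layout here are illustrative, not the exact names the UI uses:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_metadata(img: Image.Image, path: str, params: dict, save_metadata: bool) -> None:
    """Write generation parameters into PNG tEXt chunks, or strip them."""
    if save_metadata:
        info = PngInfo()
        for key, value in params.items():
            info.add_text(f"SD:{key}", str(value))  # "SD:" prefix is illustrative
        img.save(path, pnginfo=info)
    else:
        img.save(path)  # no pnginfo argument -> no parameter chunks are written

def read_metadata(path: str) -> dict:
    """Read back any SD: text chunks from a saved PNG."""
    with Image.open(path) as im:
        return {k: v for k, v in im.text.items() if k.startswith("SD:")}
```

Dragging an image out of the browser saves the file bytes as-is, which is why metadata can survive that path even when in-UI saving strips it.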

* Move ImageMetadata into its own file

Cleans up webui, enables webui_streamlit et al to use it as well.

* Fix typo

* Add filename formatting argument (#908)

* Update webui.py

Filename formatting argument

* Update scripts/webui.py

Co-authored-by: Thomas Mello <work.mello@gmail.com>

* Tiling parameter (#911)

* tiling

* default to False

* fix: filename format parameter (#923)

* For issue #884, ensure webui.cmd before init src

* Remove embeddings file path

* Add mask_restore to restore images based on mask, fixing #665 (#898)

* Add mask_restore option to let users restore images based on the mask, fixing #665.

Before commit c73fdd7 (Implement masking during sampling to improve blending, #308),
the image mask was applied after sampling, so masked parts that were not regenerated
actually stayed the same.
Since c73fdd7, masked img2img changes the whole image, even in masked areas.
This gives better-looking results at first glance, but causes image degradation when
applied a few times. See issue #665.

In a workflow of repeated masked img2img, users may want to use this option to keep the parts
of the image they want unchanged, without image degradation. A final masked img2img, or a whole-image img2img with mask_restore disabled,
will give the better blending of "Implement masking during sampling".
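The restore step itself is plain alpha blending: copy original pixels back wherever the mask says to keep them. A minimal sketch, assuming HxWx3 uint8 images and a float mask where 1.0 means "regenerate this pixel":

```python
import numpy as np

def mask_restore(original: np.ndarray, generated: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend generated pixels into the original only where the mask allows.

    original, generated: HxWx3 uint8 images; mask: HxW float in [0, 1],
    where 1.0 means "regenerate this pixel" and 0.0 means "keep the original".
    """
    m = mask[..., None].astype(np.float32)  # add channel axis to broadcast over RGB
    blended = generated.astype(np.float32) * m + original.astype(np.float32) * (1.0 - m)
    return np.clip(blended, 0, 255).astype(np.uint8)
```

A soft (feathered) mask gives a gradual transition at the seam instead of a hard cut, which is why the mask is kept as a float rather than a boolean.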

* revert changes of a7be43b in change_image_editor_mode

* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns

* revert inserted newlines and whitespaces to match format of previous code

* improve caption of new option mask_restore

"Only modify regenerated parts of image"

* fix ui_functions.change_image_editor_mode by adding gr.update to the end of the list it returns

an old copy of the function exists in webui.py; this superfluous function was mistakenly changed by the earlier commit b6a9e16

* remove unused functions that are near duplicates of functions in ui_functions.py

* Added CSS to center the image in the txt2img interface

* add img2img option for color correction. (#936)

color correction is already used for loopback to prevent color drift with the first image as correction target.
the option allows to use the color correction even without loopback mode.
it helps keeping the colors similar to the input image.

* Image transparency is used as mask for inpainting
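Deriving the inpainting mask from transparency can be sketched as follows with Pillow; the polarity (transparent pixels are the ones to regenerate) is an assumption:

```python
from PIL import Image

def alpha_to_mask(img: Image.Image) -> Image.Image:
    """Derive an inpainting mask from transparency: transparent -> regenerate (white)."""
    alpha = img.convert("RGBA").getchannel("A")
    # Invert so fully transparent (alpha 0) becomes white (255) in the mask.
    return alpha.point(lambda a: 255 - a)
```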

* fix: lost imports from #921

* Changed StreamIt to `k_euler` 30 steps  as default

* Fixed an issue with the txt2vid model.

* Removed old files from a split test we did that are not needed anymore; we plan to do the split differently.

* Changed the scheduler for the txt2vid tab back to LMS, for now we can only use that.

* Better support for large batches in optimized mode

* Removed some unused lines from the css file for the streamlit version.

* Changed the diffusers version to be 0.2.4 or lower as a new version breaks the txt2vid generation.

* Added the models/custom folder to gitignore to ignore custom models.

* Added two new scripts that will be used for the new implementation of the txt2vid tab which uses the latest version of the diffusers library.

* - Improved the progress bar for the txt2vid tab, it now shows more information during generation.
- Changed the guidance_scale variable to be cfg_scale.

* Perform masked image restoration for GFPGAN, RealESRGAN, fixing #947

* Perform masked image restoration when using GFPGAN or RealESRGAN, fixing #947.
Also fixes bug in image display when using masked image restoration with RealESRGAN.

When the image is upscaled using RealESRGAN, the image restoration cannot use the
original image because it has the wrong resolution. In this case the image restoration
restores the non-regenerated parts of the image with a RealESRGAN-upscaled
version of the original input image.

Modifications from GFPGAN or color correction in (un)masked parts are also restored
to the original image by mask blending.

* Update scripts/webui.py

Co-authored-by: Thomas Mello <work.mello@gmail.com>

* fix: sampler name in GoBig #988

* add sampler_name defaults to img2img

* add metadata to other file output file types

* remove deprecated kwargs/parameter

* refactor: sort out dependencies

Co-Authored-By: oc013 <101832295+oc013@users.noreply.github.com>
Co-Authored-By: Aarni Koskela <akx@iki.fi>
Co-Authored-By: oc013 <101832295+oc013@users.noreply.github.com>
Co-Authored-By: Aarni Koskela <akx@iki.fi>

* webui: detect scoped-down GPU environment (#993)

* webui: detect scoped-down GPU environment

check if we're using a scoped-down GPU environment (pynvml does not listen to CUDA_VISIBLE_DEVICES) so that we can measure memory on the correct GPU
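Translating a logical CUDA index into the physical index pynvml expects can be sketched like this; it assumes CUDA_VISIBLE_DEVICES contains numeric IDs (it may also hold GPU UUIDs, which this sketch does not handle):

```python
import os

def physical_gpu_index(torch_device_index: int = 0) -> int:
    """Map a logical CUDA device index to the physical index NVML expects.

    pynvml enumerates every GPU in the system and ignores CUDA_VISIBLE_DEVICES,
    so memory must be queried on the translated physical index.
    """
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    if not visible.strip():
        return torch_device_index  # no scoping: logical == physical
    ids = [int(x) for x in visible.split(",") if x.strip() != ""]
    return ids[torch_device_index]
```

For example, with `CUDA_VISIBLE_DEVICES=2,0`, logical device 0 is physical GPU 2, so NVML memory queries must target index 2.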

* remove unnecessary import

* Added piexif dependency.

* Changed the minimum value for the Sampling Steps and Inference Steps sliders to 10, and added a step size of 10 to make the sliders easier to move, since the txt2vid tab needs a higher maximum value than the other tabs for good results.

* Commented an import that is not used for now but will be used soon.

* write same metadata to file and yaml

* include piexif in environment needed for exif labelling of non-png files

* fix individual image file format saves

* introduces a general config setting save_format similar to grid_format for individual file saves

* Add NSFW filter to avoid unexpected NSFW output (#955)

* Add NSFW filter to avoid unexpected NSFW output

* Fix img2img configuration numbering

* Added some basic layout for the Model Manager tab, listing the models most people use so they are easy to download instead of having to go to the wiki or search Discord for links; it also shows the path where those models should be placed for them to work.

* webui: display the GPU in use during startup (#994)

* webui: display the GPU in use during startup

tell the user which GPU the code is actually going to use before spending lots of time loading everything onto the GPU

* typo

* add some info messages

* evaluate current GPU properly

* add debug flag gating

not everyone wants or needs to see debug messages :)

* add in stray debug msg

* Docker updates - Add LDSR, streamlit, other updates for new repository

* Update util.py

* Docker - Set PYTHONPATH to parent directory to avoid `No module named frontend` error

* Add missing comma for nsfw toggle in img2img (#1028)

* Multiple improvements to the txt2vid tab.
- Improved txt2vid speed by 2 times.
- Added DDIM scheduler.
- Added sliders for beta_start and beta_end to have more control over these parameters on the scheduler.
- Added option to select the scheduler type from scaled_linear or linear.
- Added option to save info files for the txt2vid tab and improved the information saved to include most of the parameters used to run the generation.
- You can now download any model from the huggingface website to use on the txt2vid tab, just add the name to the custom_models_list on the config file.

* webui: add prompt output to console (#1031)

* webui: add prompt output to console

show the user what prompt is currently being rendered

* fix prompt print location

* support negative prompts separated by ###

e.g. "shopping mall ### people" will try to generate an image of a mall
without people in it.
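Parsing such prompts reduces to splitting on the first `###`; a minimal sketch (the function name is illustrative):

```python
def split_negative_prompt(prompt: str) -> tuple[str, str]:
    """Split "positive ### negative" into (positive, negative) prompt strings."""
    if "###" in prompt:
        positive, negative = prompt.split("###", 1)
        return positive.strip(), negative.strip()
    return prompt.strip(), ""
```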

* Docker validate model files if not a symlink in case user has VALIDATE_MODELS=false set (#1038)

* - Added changes made by @hafiidz on the ui-improvements branch to the CSS for the streamlit-on-hover-tabs component.

* Added streamlit-on-Hover-tabs and streamlit-option-menu dependencies to the environment.yaml file.

* Changed some values to be dynamic instead of a fixed value so they are more responsive.

* Changed the cmd script to use the dark theme by default when launching the streamlit UI.

* Removed the padding at the top of the sidebar so we can have more free space.

* - Added code for @hafiidz's changes made on the css.

* Fixed an error where the metadata could not be saved because the seed was not converted to a string first, so it had no encode attribute.

* add masking to streamlit img2img, find_noise_for_image, matched_noise

* Use the webui script directories as PWD (#946)

* add Gradio API endpoint settings (#1055)

* add Gradio API endpoint settings

* Add comments crediting code authors. (probably not enough, but better than none)

* Renamed the save_grid option for txt2vid in the config file to save_video; it determines whether the user wants to save a video at the end of generation, similar to the save_grid used on txt2img and img2img but for video.

* - Added the Update Image Preview option to the current tab options under Preview Settings.
- Added a Dynamic Preview Frequency option for the txt2vid tab, which tries to find the lowest value of update_preview_frequency at which the preview image can be updated during generation while minimizing the performance impact.
- Added an option to save a video file in the outputs/txt2vid-samples folder after generation completes, similar to how the save_grid option works on other tabs.
- Added a video preview, which shows a video on the txt2vid tab when generation is completed.
- Formatted some lines of code to use less space and fit on a single screen.
- Added a script called Settings.py to the scripts folder, in which settings for the Settings page will be placed. Empty for now.

* Commented out some print statements that were used for debugging and had previously been forgotten.

* fix: disable live prompt parsing

* Fix issue where loopback was using batch mode

* Fix indentation error that prevents mask_restore from working unless ESRGAN is turned on

* Fixed Sidebar CSS for 4K displays

* img2img mask fixes and fix image2noise normalization

* Made it so sampling_steps is added to num_inference_steps; otherwise it would not match the value set on the slider.

* Changed the loading of the model on the txt2vid tab so the half models are only loaded if the no_half option on the config file is set to False.

* fix: launcher batch file fix #920, fix #605

- Allow reading environment.yaml file in either LF or CRLF
- Only update environment if environment.yaml changes
- Remove custom_conda_path to discourage changing source file
- Fix unable to launch webui due to frontend module missing (#605)

* Update README.md (#1075)

fix typo

* half precision streamlit txt2vid

`RuntimeError: expected scalar type Half but found Float` with both `torch_dtype=torch.float16` and `revision="fp16"`

* Add mask restore feature to streamlit, prevent color correction from modifying initial image when mask_restore is turned on

* Add mask_restore to streamlit config

* JobManager: Fix typo breaking jobs close #858 close #1041

* JobManager: Buttons skip queue (#1092)

Have JobManager buttons skip Gradio's queue, since otherwise
they aren't sending JobManager button presses.

* The webui_streamlit.py file has been split into multiple modules, each containing its own code, making it easier to work with than a single big file.
The list of modules is as follows:
- webui_streamlit.py: contains the main layout as well as the functions that load the CSS needed by the layout.
- webui_streamlit_old.py: contains the code for the previous version of the WebUI. Will be removed once the new UI code is in use and everything works as it should.
- txt2img.py: contains the code for the txt2img tab.
- img2img.py: contains the code for the img2img tab.
- txt2vid.py: contains the code for the txt2vid tab.
- sd_utils.py: contains utility functions used by more than one module, any function that meets such condition should be placed here.
- ModelManager.py: contains the code for the Model Manager page on the sidebar menu.
- Settings.py: contains the code for the Settings page on the sidebar menu.
- home.py: contains the code for the Home tab, history and gallery implemented by @devilismyfriend.
- imglab.py: contains the code for the Image Lab tab implemented by @devilismyfriend

* fix: patch docker conda install pip requirements (#1094)

(cherry picked from commit fab5765)

Co-authored-by: Sérgio <smaisidoro@gmail.com>

* Using the Optimization from Dogettx  (#974)

* Update attention.py

* change to dogettx

Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>

* Update Dockerfile (#1101)

add expose for streamlit port

* Publish Streamlit ports (#1102)

(cherry picked from commit 833a910)

Co-authored-by: Charlie <outlookhazy@users.noreply.github.com>

* Forgot to call layout() for the Model Manager tab after the import, so it was not being used and the tab showed as empty.

* Removed the "find_noise_for_image.py" and "matched_noise.py" scripts as their content is now part of "sd_utils.py"

* - Added the functions to load the optimized models, this "should" make it so optimized and turbo mode work now but needs to be tested more.
- Added function to load LDSR.

* Fixed some imports.

* Fixed the info message on the txt2img tab not showing the info but just showing the text "Done"

* Made the default settings from the config file be stored inside st.session_state, to avoid loading them multiple times when calling "sd_utils.py" from other modules.

* Moved defaults to the webui_streamlit.py file and fixed some imports.

* Removed condition to check if the defaults are in the st.session_state dictionary, this is not needed and would cause issues with it not being reloaded when the user changes something on it.

* Modified the way the default settings are loaded from the config file: they are only loaded in the webui_streamlit.py file, and st.session_state is used to access them from anywhere else. This means the config can be modified externally, like before the code split, and the changes will be picked up on the next rerun of the UI.

* fix: [streamlit] optimization mode

* temp disable nvml support for multiple gpus

* Fixed defaults not being loaded correctly or missing in some places.

* Add a separate update script instead of git pull on startup (#1106)

* - Fixed max_frame not being properly used; sampling_steps was the variable being used instead.
- Fixed several issues with the wrong variable being used in multiple places.
- Added an option to toggle some extra options from the config file for when the model is loading on the txt2vid tab.

* Re-merge #611 - View/Cancel in-progress diffusions (#796)

* JobManager: Re-merge #611

PR #611 seems to have got lost in the shuffle after
the transition to 'dev'.

This commit re-merges the feature branch. This adds
support for viewing preview images as the image
generates, as well as cancelling in-progress images
and a couple fixes and clean-ups.

* JobManager: Clear jobs that fail to start

Sometimes if a job fails to start it will get stuck in the active job
list. This commit ensures that jobs that raise exceptions are cleared,
and also adds a start timer to clear out jobs that fail to start
within a reasonable amount of time.

* chore: add breaks to cmds for readability (#1134)

* Added custom models list to the txt2img tab.

* Small fix to the custom model list.

* Corrected breaking issues introduced in #1136 to txt2img and
made state variables consistent with img2img.

Fixed a bug where switching models after running would not reload
the used model.

* Formatted tabs as spaces

* Fixed update_preview_frequency and update_preview using defaults from
webui_streamlit.yaml instead of state variables from UI.

* Prompt user if they want to restore changes (#1137)

- After stashing any changes and pulling updates, ask user if they wish to pop changes
- If user declines the restore, drop the stash to prevent the case of an ever growing stash pile

* Added the streamlit_nested_layout component as a dependency and imported it in the webui_streamlit.py file to allow us to use nested columns and expanders.

* - Added the Home tab made by @devilismyfriend
- Added gallery tab on txt2img.

* Added case insensitivity to restore prompt (#1152)

* Calculate aspect ratio and pixel count on start (#1157)

* Fix errors rendering galleries when there are not enough images to render

* Fix the gallery back/next buttons and add a refresh button

* Fix invalid invocation of find_noise_for_image

* Removed the Home tab until the gallery is fixed.

* Fixed a missing import on the ModelManager script.

* Added discord server link to the Readme.md

* - Increased the max value for the width and height sliders on the txt2img tab.
- Fixed a leftover line from removing the home tab.

* Update conda environment on startup always (#1171)

* Update environment on startup always

* Message to explicitly state no environment.yaml update required

Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>

* environment update from .cmd

* Update .gitignore

* Enable negative prompts on streamlit

* - Bumped the version of diffusers used on the txt2vid tab to be now v0.3.0.
- Added initial file for the textual inversion tab.

* add missing argument to GoBig sample function, fixes #1183 (#1184)

* cherry-pick @Any-Winter-4079's invoke-ai/InvokeAI#540. This is a collaboration incorporating a lot of people's contributions, including for example @Doggettx and the original code from @neonsecret on which the Doggettx optimizations were based (see invoke-ai/InvokeAI#431, https://github.com/sd-webui/stable-diffusion-webui/pull/771#issuecomment-1239716055). Takes exactly the same amount of time to run 8 steps as the original CompVis code (10.4 secs, ~1.25 s/it). (#1177)

Co-authored-by: Alex Birch <birch-san@users.noreply.github.com>

* allow webp uploads to img2img tab #991

* Don't attempt mask restoration when there is no mask given (#1186)

* When running a batch with preview turned on, produce a grid of preview images

* When early terminating, generation_callback gets invoked but st.session_state is empty. When this happens, just bail.

* Collect images for final display

This is a collection of several changes to enhance image display:

* When using GFPGAN or RealESRGAN, only the final output will be
  displayed.
* In batch>1 mode, each final image will be collected into an image grid
  for display
* The image is constrained to a reasonable size to ensure that batch
  grids of RealESRGAN'd images don't end up spitting out a massive image
  that the browser then has to handle.
* Additionally, the progress bar indicator is updated as each image is
  post-processed.
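Collecting the batch into a size-constrained grid can be sketched with Pillow (the 2048 px cap is an illustrative choice, not the UI's actual limit):

```python
import math
from PIL import Image

def image_grid(images: list, max_side: int = 2048) -> Image.Image:
    """Paste equally sized images into a roughly square grid, capped at max_side px."""
    cols = math.ceil(math.sqrt(len(images)))
    rows = math.ceil(len(images) / cols)
    w, h = images[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    # Constrain the final grid so batches of upscaled images don't hand the
    # browser a massive image to render.
    if max(grid.size) > max_side:
        scale = max_side / max(grid.size)
        grid = grid.resize((int(grid.width * scale), int(grid.height * scale)))
    return grid
```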

* Display the final image before running postprocessing, and don't preview when i=0

* Added a config option to use embeddings from the huggingface stable diffusion concept library.

* Added options to enable enable_attention_slicing and enable_minimal_memory_usage; for now these only work on txt2vid, which uses diffusers.

* Basic implementation for the Concept Library tab made by cloning the Home tab.

* Temporarily hide sd_concept_library due to missing layout()

* st.session_state["defaults"] fix

* Used loaded_model state variable in .yaml generation (#1196)

Used loaded_model state variable in .yaml generation

* Streamlit txt2img page settings now follow defaults (#1195)

* Some options on the Streamlit txt2img page now follow the defaults from the relevant config files.

* Fixed a copy-paste gone wrong in my previous commit.

* st.session_state["defaults"] fix

Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>

* default img2img denoising strength increased

slider_steps and slider_bounds in defaults config

* fix: copy to clipboard button

Co-authored-by: ZeroCool940711 <alejandrogilelias940711@gmail.com>
Co-authored-by: ZeroCool <ZeroCool940711@users.noreply.github.com>
Co-authored-by: Hafiidz <3688500+Hafiidz@users.noreply.github.com>
Co-authored-by: hlky <106811348+hlky@users.noreply.github.com>
Co-authored-by: Joshua Kimsey <jkimsey95@gmail.com>
Co-authored-by: Tony Beeman <beeman@gmail.com>
Co-authored-by: Tom Pham <54967380+TomPham97@users.noreply.github.com>
Co-authored-by: GRMrGecko <grmrgecko@gmail.com>
Co-authored-by: TingTingin <36141041+TingTingin@users.noreply.github.com>
Co-authored-by: Logan zoellner <nagolinc@gmail.com>
Co-authored-by: M <mchaker@users.noreply.github.com>
Co-authored-by: James Pound <jamespoundiv@gmail.com>
Co-authored-by: cobryan05 <13701027+cobryan05@users.noreply.github.com>
Co-authored-by: Michoko <michoko@hotmail.com>
Co-authored-by: VulumeCode <2590984+VulumeCode@users.noreply.github.com>
Co-authored-by: xaedes <xaedes@googlemail.com>
Co-authored-by: Michael Hearn <git@mikehearn.net>
Co-authored-by: Soul-Burn <sugoibaka@gmail.com>
Co-authored-by: JJ <jjisnow@gmail.com>
Co-authored-by: oc013 <101832295+oc013@users.noreply.github.com>
Co-authored-by: Aarni Koskela <akx@iki.fi>
Co-authored-by: osi1880vr <87379616+osi1880vr@users.noreply.github.com>
Co-authored-by: Rae Fu <rraefu@gmail.com>
Co-authored-by: Brian Semrau <brian.semrau@gmail.com>
Co-authored-by: Matt Soucy <git@msoucy.me>
Co-authored-by: endomorphosis <endomorphosis@users.noreply.github.com>
Co-authored-by: unnamedplugins <79282950+unnamedplugins@users.noreply.github.com>
Co-authored-by: Syahmi Azhar <prsyahmi@gmail.com>
Co-authored-by: Ahmad Abdullah <83442967+ahmad1284@users.noreply.github.com>
Co-authored-by: Sérgio <smaisidoro@gmail.com>
Co-authored-by: Charlie <outlookhazy@users.noreply.github.com>
Co-authored-by: protoplm <protoplmz@gmail.com>
Co-authored-by: Ascended <dspradau@gmail.com>
Co-authored-by: JuanLagu <32816584+JuanLagu@users.noreply.github.com>
Co-authored-by: Chris Heald <cheald@gmail.com>
Co-authored-by: Charles Galant <cgalant@gmail.com>
Co-authored-by: Alex Birch <birch-san@users.noreply.github.com>
Co-authored-by: protoplm <57930981+protoplm@users.noreply.github.com>
Co-authored-by: Dekker3D <dekker3d@gmail.com>