
Clarification for Forgedit: Text Guided Image Editing via Learning and Forgetting #12

Open
witcherofresearch opened this issue Apr 6, 2024 · 1 comment


witcherofresearch commented Apr 6, 2024

First of all, congratulations on such an awesome and complete image editing survey, and special thanks for including our paper Forgedit: Text Guided Image Editing via Learning and Forgetting. I am the first author of this paper, and I think there might be some misunderstandings of our method in Table 1 of the survey, as well as mistakes in the editing results presented in Figure 13.

First, our Forgedit was designed to tackle general text-guided image editing, so Forgedit is in fact capable of handling most of the tasks in Table 1, as I will show in the following examples.

Second, the editing results with Forgedit in Figure 13 are incorrect. I have recently refined the Forgedit code to make it easier to reproduce our results. Next, I will show results for all of the editing examples from editeval-v1 shown in Figure 13 of your paper and list the hyperparameters needed to reproduce them. For a fair comparison, all results are obtained with Stable Diffusion 1.4; the success rate and editing quality could be further improved if the base model is switched to the Realistic Vision series. Only the target prompt and the input image from editeval-v1 are used, since Forgedit can use BLIP to generate the source prompt.
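As a rough illustration of that captioning step, a source prompt can be obtained from an off-the-shelf BLIP captioner; the snippet below uses the Hugging Face transformers checkpoint purely as an example and is not necessarily the exact captioning setup wired into Forgedit (the image path is hypothetical):

```python
# Illustrative only: generate a source prompt for an input image with BLIP.
# Uses the Hugging Face `transformers` BLIP captioning checkpoint; the captioner
# actually used inside Forgedit may be configured differently.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("2.jpg").convert("RGB")  # hypothetical path to an editeval-v1 input image
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
source_prompt = processor.decode(caption_ids[0], skip_special_tokens=True)
print(source_prompt)  # used as the source prompt for fine-tuning
```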
Editing type: action change
Input image: 2, target prompt='A polar bear raising its hand'

Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='encoderattn' --interpolation=vs --gammastart=11 --gammaend=15 --numtest=7
[Result images: forget='encoderattn', guidance_scale=7.5, textalpha=0.0; samples 3, 7, 8 with alpha=1.2, and sample 6 with alpha=1.4 (target prompt 'A polar bear is raising its hand')]
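
For readers who want to understand the --forget flag: roughly speaking, it selects which fine-tuned modules are reset back to their pretrained weights before sampling ('donotforget' keeps all the learned parameters). The snippet below is only an illustrative sketch of such a parameter reset with placeholder module keywords, not the actual Forgedit implementation; the real mapping from flag values such as 'encoderattn' to concrete layers lives in the repository.

```python
# Illustrative sketch of a "forgetting" step: after fine-tuning, reset the
# parameters of selected modules back to pretrained weights to limit overfitting.
# Keyword-based module selection here is a placeholder, not Forgedit's mapping.
import copy

def forget_modules(finetuned, pretrained, keywords):
    """Return a copy of `finetuned` whose parameters matching any keyword
    are reset to the corresponding `pretrained` values."""
    edited = copy.deepcopy(finetuned)
    pretrained_state = pretrained.state_dict()
    edited_state = edited.state_dict()
    for name in edited_state:
        if any(k in name for k in keywords):
            edited_state[name] = pretrained_state[name].clone()
    edited.load_state_dict(edited_state)
    return edited

# e.g. edited_unet = forget_modules(finetuned_unet, pretrained_unet, keywords=("attn",))
```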

Editing type: object addition
Input image: 7, target prompt='A glass of milk next to a stack of cookies on a wooden board with a gray background'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='donotforget' --interpolation=vs --gammastart=13 --gammaend=15 --numtest=7
[Result image: sample 3, guidance_scale=7.5, textalpha=0.0, alpha=1.3]
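
Since each case only changes the --forget strategy and the gamma range, the commands can also be batched with a small driver script. The sketch below simply replays a few of the commands listed in this issue and assumes the repository layout (src/sample_forgedit_batch_textencoder.py) is in place:

```python
# Hypothetical driver that replays the per-example commands from this issue by
# shelling out to `accelerate launch`; only flags shown in the issue are used.
import subprocess

CASES = [
    # (forget strategy, gamma start, gamma end, num tests)
    ("encoderattn", 11, 15, 7),   # action change
    ("donotforget", 13, 15, 7),   # object addition
    ("donotforget", 15, 18, 4),   # object removal
]

for forget, gamma_start, gamma_end, numtest in CASES:
    cmd = [
        "accelerate", "launch", "src/sample_forgedit_batch_textencoder.py",
        "--train=True", "--edit=True", "--save=True",
        f"--forget={forget}", "--interpolation=vs",
        f"--gammastart={gamma_start}", f"--gammaend={gamma_end}",
        f"--numtest={numtest}",
    ]
    subprocess.run(cmd, check=True)
```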

Editing type: object removal
Input image: rm7, target prompt='A mountain lake'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='donotforget' --interpolation=vs --gammastart=15 --gammaend=18 --numtest=4
[Result image]

Editing type: object replacement
Input image: 1, target prompt='A floor lamp standing next to a potted plant in a cozy room'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='donotforget' --interpolation=vs --gammastart=11 --gammaend=15 --numtest=7
[Result image: sample 5, guidance_scale=7.5, textalpha=0.0, alpha=1.1]

Editing type: background change
Input image: 8, target prompt='A silver car parked at a dense jungle'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='encoderattn+encoder1' --interpolation=vs --gammastart=12 --gammaend=15 --numtest=4
[Result image: sample 0, guidance_scale=7.5, textalpha=0.0, alpha=1.4]

Editing type: style change
Input image: style2, target prompt='A Van Gogh style painting of a light house sitting on a cliff next to the ocean'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='donotforget' --interpolation=vs --gammastart=13 --gammaend=15 --numtest=7
[Result image: sample 2, guidance_scale=7.5, textalpha=0.0, alpha=1.3]

Editing type: texture change
Input image: texture2, target prompt='A statue of a horse running in a field'
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='decoderattn+decoder2' --interpolation=vs --gammastart=13 --gammaend=17 --numtest=4
[Result image: sample 2, guidance_scale=7.5, textalpha=0.0, alpha=1.5]
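
The alpha and textalpha values recorded in the result placeholders above are the interpolation strengths swept over the gamma range. As a purely generic illustration of the role such a coefficient plays (this is not the exact 'vs' vector-subtraction formula used by Forgedit; please refer to the paper and the code for that), one could write:

```python
# Generic illustration of blending two conditioning embeddings with a scalar
# coefficient. NOT Forgedit's 'vs' interpolation; it only shows what sweeping
# a coefficient like `alpha` means in practice.
import torch

def blend(source_emb, target_emb, alpha):
    """Move from the source embedding toward the target; alpha > 1 extrapolates past it."""
    return source_emb + alpha * (target_emb - source_emb)

source = torch.randn(1, 77, 768)  # CLIP-style text-embedding shape, purely for illustration
target = torch.randn(1, 77, 768)
edits = [blend(source, target, a) for a in (1.1, 1.2, 1.3, 1.4, 1.5)]
```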

For emotion expression editing, Forgedit can tackle it too.
Input image: test, target prompt='a smiling man and a smiling woman'
Here I switch to Realistic Vision for human editing, though I think Stable Diffusion 1.4 should work too.
Forgedit command:
accelerate launch src/sample_forgedit_batch_textencoder.py --train=True --edit=True --save=True --forget='donotforget' --interpolation=vs --targeth=768 --targetw=768 --gammastart=8 --gammaend=11
[Result images]

For object movement and object size change, there are multiple cases in TEdBench, another text-guided image editing benchmark from Google. Our Forgedit can tackle these cases too; the results can be found in Forgedit TEdBench.

Finally, if you have any difficulties reproducing Forgedit's results on editeval, feel free to leave a comment or contact me via email. It would be great if the editing results of Forgedit could be corrected in the next version of this survey. Thanks again.

@MingfuYAN
Collaborator

Thanks for the information! We will update Table 1 and the experimental results in the next version of the paper.
