I am getting a difference in quality compared to A1111. Is there a way to correct this? #334
-
Can you upload the actual images generated in A1111 and with this repo so I can take a look at their metadata?
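If it helps, the embedded settings can be read without any UI at all. Here's a minimal stdlib-only sketch, assuming the images are PNGs carrying the usual A1111-style `parameters` tEXt chunk (the filenames in the usage comment are placeholders):

```python
import struct

def read_generation_params(path):
    """Return the 'parameters' tEXt chunk that A1111-style UIs embed
    in generated PNGs, or None if no such chunk is present."""
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:  # truncated file / no IEND chunk
                return None
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the per-chunk CRC
            if ctype == b"tEXt":
                # tEXt layout: keyword, NUL separator, latin-1 text
                key, _, value = data.partition(b"\x00")
                if key == b"parameters":
                    return value.decode("latin-1")
            if ctype == b"IEND":
                return None

# Usage (paths are hypothetical):
# print(read_generation_params("a1111.png"))
# print(read_generation_params("vlad.png"))
```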
-
I saw the Reddit thread on this and ran the experiment locally. My result was an image indistinguishable from OP's A1111 image (to my eyes).
The images look so similar that I thought I should generate a related image to show I didn't just copy OP's, so here's one with one extra generation step, which has morphed the background telephone pole into a tree, for example.
These were generated with OP's model / prompt / params from the Reddit thread linked previously. Metadata intact at time of uploading.
(Edit to add: in case it makes any difference, my install is a couple of days out of date; git log shows the last commit as "Tue Apr 18 08:06:16 2023 -0400 / disable gradio queues on demand".)
-
There are a lot of reasons things can vary. I noticed that Vlad's has some of the k-sampler options enabled by default that auto1111's does not (I think, IIRC). Check clip skip and noise delta too. Attached are some screenshots of settings that could differ and would produce a different generation even with otherwise identical settings.
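One way to rule out "a setting I forgot to check" is to diff the two installs' `config.json` files directly rather than clicking through the UIs. A rough sketch, assuming both UIs keep their options in a flat JSON file; the option names in the usage comment (clip skip, noise delta, the sigma toggle) are examples, not a complete list:

```python
import json

def diff_settings(path_a, path_b):
    """Load two webui-style config.json files and return a dict of
    keys whose values differ, including keys present in only one file."""
    with open(path_a) as f:
        a = json.load(f)
    with open(path_b) as f:
        b = json.load(f)
    diffs = {}
    for key in sorted(set(a) | set(b)):
        va = a.get(key, "<missing>")
        vb = b.get(key, "<missing>")
        if va != vb:
            diffs[key] = (va, vb)
    return diffs

# Usage (paths are hypothetical):
# for key, (va, vb) in diff_settings("a1111/config.json",
#                                    "vlad/config.json").items():
#     print(f"{key}: {va!r} vs {vb!r}")
```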
-
I don't remember which ones, but the compatibility-tab boxes were merged into another section.
Do you discard the next-to-last sigma, or use a different noise delta? Is the VAE on auto, or are you overriding built-in VAEs, anything like that?
On Sun, Apr 23, 2023 at 12:38 PM, Nrgte wrote:
I'm also getting vastly different images with vladmandic than in a1111.
I've checked all the settings in both UIs and they seem identical. I have
--no-half in a1111, so I enabled that in the CUDA settings for Vlad's.
I've checked the settings in Stable Diffusion, CUDA Settings, Sampler
Parameters and I use the same VAE, model, prompts, seed.
I didn't find a Compatibility tab equivalent in Vlad's.
Does anyone know how to debug this further?
-
The compatibility settings are mixed in with other sections in Vlad's. I don't remember which ones they were, or I could help you find them.
On Sun, Apr 23, 2023, 13:02, Nrgte wrote:
No, I don't discard the next-to-last sigma. But that was also the case in A1111:
[image: grafik] <https://user-images.githubusercontent.com/131605234/233856751-14b35330-9ecb-426a-94ae-52bae3bc4b7b.png>
Noise delta is on 0 on both as well.
[image: grafik] <https://user-images.githubusercontent.com/131605234/233856836-c313d072-d77d-4840-8f56-88d750247749.png>
I'm unsure about the CUDA settings, since those didn't exist in A1111. And I can't find the Compatibility settings. Otherwise everything seems the same.
-
I love the fork; however, the quality for realistic models is significantly different from A1111's. Is there a way to fix this? I tried different cross-attention optimization methods as well, and none seemed to help.
Example
Prompts: Pretty swedish girl, detailed face, best quality, high quality, skin indentation, skin pores, textured skin, analog, film grain, detailed eyes, perfect mouth, 8k, uhd, 8k uhd, closed mouth, casual clothes,
Negative prompt: easynegative, worst quality, bad quality, nsfw, naked, nude,
Size: 512x728, Seed: 1274235326, Model: ProgenUberAnalogMix, Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 7, Model hash: de3a92d7da
torch: 2.0.0+cu118 autocast half
xformers: 0.0.17
accelerate: 0.18.0
transformers: 4.26.1
device: NVIDIA GeForce RTX 3070 (1) (compute_37) (8, 6)
cuda: 11.8
cudnn: 8700
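Since both UIs embed the settings in the image, another option is to diff the metadata field by field instead of eyeballing it. A small sketch; the regex assumes the usual `Key: value, Key: value` settings line, like the parameters listed above:

```python
import re

def parse_params(settings_line):
    """Parse the settings line of an A1111-style 'parameters' string
    (e.g. 'Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 7, ...')
    into a dict so two generations can be compared field by field."""
    return {
        key.strip(): value.strip()
        for key, value in re.findall(r"([\w ]+): ([^,]+)", settings_line)
    }

# Usage: parse the settings line from each image's metadata, then
# compare dicts key by key to spot any field that differs.
```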