Replies: 4 comments 2 replies
-
All the images in a batch are generated at the same time, using the prepared model. This model includes the LoRAs taken from the first image's prompt, so every image in the batch is generated with the LoRAs set for the first image, and only those. Like you, I use Dynamic Prompts and have to use batch size 1 because of this inherent limitation; there's nothing we can do about it. As for the differences when LoRAs are not part of the equation, I don't know what is happening there, but I'm guessing that either the sampler or the optimizations used introduce some small randomization effect.
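Roughly, in Python terms, the pipeline does something like the sketch below. This is not the actual webui code and all the names are made up; it just illustrates why per-image LoRA tags get ignored:

```python
import re

# Hypothetical sketch (NOT the actual webui code) of why a whole batch
# shares the LoRAs from the first prompt: the model is patched once,
# before the batch is denoised, using only the tags found in prompt #1.
LORA_TAG = re.compile(r"<lora:([^:>]+):([0-9.]+)>")

def prepare_batch(prompts):
    # LoRA tags are parsed from the FIRST prompt only...
    loras = LORA_TAG.findall(prompts[0])
    # ...then every prompt is stripped of its tags and denoised with
    # that single patched model, so per-image tags have no effect.
    cleaned = [LORA_TAG.sub("", p).strip() for p in prompts]
    return loras, cleaned

loras, cleaned = prepare_batch([
    "forest at dusk <lora:epiNoiseoffset_v2:1.0>",
    "forest at noon",
])
# loras from prompt #1 end up applied to BOTH images
```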
-
The notes from @acorderob are correct.
-
Thanks, both of you!
-
I stumbled upon this topic after falling into the same trap myself with Dynamic Prompts + LoRAs and batches. I was really hoping there would be some solution other than forcing batch size to 1. I'm doing that now, but it introduces noticeable overhead on my lower-end machine while I search for upscale-worthy images. Just to clarify:
Is my understanding correct? And is there really no way to mitigate the behaviour?
-
I've talked about this issue before, but I've recently revisited it with a few image tests.
Here's the basic problem.
Set up image generation with a prompt; for this experiment I used one I found online.
I ran this prompt three times with different batch configurations, generating 4 images each time.
Batch: 4x1 (batch count 4, batch size 1)
Batch: 2x2 (batch count 2, batch size 2)
Batch: 1x4 (batch count 1, batch size 4)
You can already see the problem. Even though the same parameters are used to generate each image, the results change slightly depending on the batch configuration. They don't change if you re-run the same batch configuration repeatedly, so each configuration is at least deterministic on its own.
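One hedged guess for those tiny no-LoRA differences: batching changes the order of floating-point reductions inside the model, and float addition is not associative, so batched and single-image kernels can produce slightly different numbers that the sampler then amplifies step by step. A tiny self-contained demo of the underlying effect:

```python
# Float addition is not associative: summing the same numbers in a
# different order (as batched vs. single-image kernels may do) can
# give slightly different results.
left = (0.1 + 0.2) + 0.3   # one reduction order
right = 0.1 + (0.2 + 0.3)  # another reduction order
print(left == right)       # False
print(left - right)        # a tiny nonzero difference
```

This doesn't prove that's what the webui is doing, but it shows how identical parameters can still diverge once the arithmetic is grouped differently.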
To make matters worse, adding a LoRA makes the differences between batch configurations even more pronounced. I used the same prompt as above but replaced the phrase
in a mystical forest at noon
with
in a mystical forest at dusk <lora:epiNoiseoffset_v2:1.0>
Batch: 4x1
Batch: 2x2
Batch: 1x4
And finally, here's the issue I keep running into. I use the Dynamic Prompts extension to create templates. Some of these templates plug in elements that use LoRAs, so a single batch can contain some images that use LoRAs and some that don't. We can reproduce the same effect here with the Dynamic Prompts extension by changing the text once again to:
in a mystical forest at {noon|dusk <lora:epiNoiseoffset_v2:1.0>}
The effect can produce completely different results on each run.
Batch: 4x1
Batch: 2x2
Batch: 1x4
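To make the mixed-batch situation concrete, here is a minimal sketch of the `{a|b}` variant syntax (not the Dynamic Prompts source; the `expand` helper is made up). Each image in the batch gets its own random pick, so one batch can mix prompts with and without LoRA tags:

```python
import random
import re

# Minimal sketch of {a|b} variant expansion: one random choice per image.
VARIANT = re.compile(r"\{([^{}]*)\}")

def expand(template, rng):
    # Replace each {a|b|...} group with one randomly chosen alternative.
    return VARIANT.sub(lambda m: rng.choice(m.group(1).split("|")), template)

rng = random.Random(0)
template = "in a mystical forest at {noon|dusk <lora:epiNoiseoffset_v2:1.0>}"
batch = [expand(template, rng) for _ in range(4)]
# Some prompts in the batch carry the LoRA tag and some don't, yet only
# the first prompt's tags end up applied to the whole batch.
```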
What's happening here (after a lot of testing) is a bit sneaky.
I will typically generate a 1x4 batch that can come out like this: ... image1 (LoRA), image2, image3 (LoRA), image4
Then I want to recreate image2. But just running its saved metadata through txt2img again produces a totally different image. Instead, you must prepend the LoRA from image1 to image2's prompt to replicate it. That LoRA is NOT in image2's saved metadata.
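The workaround can be sketched as a small helper; `reproduction_prompt` is a name I made up, and this assumes the first prompt's tags are the only ones that were actually applied to the batch:

```python
import re

# Hypothetical workaround sketch: to reproduce image N from a batch,
# copy the LoRA tags from the batch's FIRST prompt (those are what was
# actually applied) onto image N's saved prompt, then re-run at batch
# size 1.
LORA_TAG = re.compile(r"<lora:[^>]+>")

def reproduction_prompt(first_prompt, target_prompt):
    tags = LORA_TAG.findall(first_prompt)
    # Strip any tags already in the target so nothing is double-applied,
    # then prepend the first image's tags.
    stripped = LORA_TAG.sub("", target_prompt).strip()
    return " ".join(tags + [stripped]) if tags else stripped

p = reproduction_prompt(
    "castle at dusk <lora:epiNoiseoffset_v2:1.0>",  # image1's prompt
    "forest at noon",                               # image2's saved prompt
)
# p == "<lora:epiNoiseoffset_v2:1.0> forest at noon"
```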
Conclusion:
Currently, the only way to guarantee that an image can be reproduced exactly is to always set your Batch Size to 1.
Does anyone know why this happens? Is it a Stable Diffusion thing based on how batches are processed? Or is this a bug in how the web UI is handling image generation?
Long post, I know. Thanks for reading.