Allow seeding initial noise latent #113

Open
wants to merge 2 commits into main

Conversation

pomofomo

This is needed for reproducible generations: if I want to generate the same image again, the prompt and the noise are sufficient. Currently, the noise tensor is generated randomly (without a seed) inside StreamDiffusion (pipeline.py).

I added a noise argument (default=None) to StreamDiffusionWrapper (wrapper.py) and StreamDiffusion (pipeline.py) for the txt2img variants and the __call__ method, to allow the caller to specify it. It works for batch sizes larger than 1 as well.

The user can do one of the following (a usage sketch follows the list):

  • Pass in None (or omit the argument) to keep the current unseeded latent behavior
  • Pass in a List[int] (of length batch_size) to use each element as a deterministic seed for one batch element
  • Pass in an int, which is shorthand for a single-element List[int] and is only valid when batch_size == 1
  • Pass in a torch.Tensor of the proper size, which gives the user full control over the noise tensor
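
A minimal usage sketch (assumptions: the wrapper construction and prepare/txt2img call pattern mirror the examples directory, and the exact signature of the new noise argument plus the shape returned by noise_size are illustrative, not copied from this PR's diff):

```python
import torch

from utils.wrapper import StreamDiffusionWrapper  # import path as used in the examples

# Illustrative setup, roughly as in the examples directory
stream = StreamDiffusionWrapper(
    model_id_or_path="KBlueLeaf/kohaku-v2.1",
    t_index_list=[0, 16, 32, 45],
    frame_buffer_size=1,  # batch_size == 1 in this sketch
    mode="txt2img",
)
stream.prepare(prompt="1girl with brown hair, cherry blossoms, best quality")

# 1) Omit noise (or pass None): current unseeded behavior
image_a = stream.txt2img()

# 2) Single int (shorthand, batch_size == 1 only): deterministic generation
image_b = stream.txt2img(noise=42)
image_c = stream.txt2img(noise=42)  # same prompt + seed -> same image as image_b

# 3) List[int] of length batch_size: one seed per batch element
# images = stream.txt2img(noise=[1, 2, 3, 4])  # e.g. with frame_buffer_size=4

# 4) torch.Tensor of the required shape: full control over the noise
#    (assumes noise_size() returns a shape usable by torch.randn; device and
#    dtype may need to match the pipeline)
latent = torch.randn(stream.stream.noise_size())
image_d = stream.txt2img(noise=latent)
```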

I also exposed two more methods from StreamDiffusion that advanced users can use to create and manipulate the noise vectors further (e.g. to create "similar" variants that slowly deviate from the noise vector that produced a given image; see the sketch below). These are noise_size (which reports the required input size) and noise_from_seeds (the internal generation method for the List[int] case), in case further processing is desired.
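
As an example of that kind of further processing, here is a hypothetical sketch of drifting away from the noise that produced a known image. The helper names noise_from_seeds and noise_size come from this PR; their exact return types, and feeding the result back through the noise argument, are assumed for illustration:

```python
import torch

# Noise that (together with the prompt) produced the original image
base = stream.stream.noise_from_seeds([42])

# A fixed random direction to drift toward
drift = torch.randn_like(base)

variants = []
for step in range(1, 6):
    alpha = 0.05 * step  # increase the deviation gradually
    noise = (1 - alpha) * base + alpha * drift
    noise = noise * (base.std() / noise.std())  # keep a comparable overall scale
    variants.append(stream.txt2img(noise=noise))  # same prompt, slightly different noise
```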

Note: I didn't know how to write automated tests for this codebase, but I modified the code in examples/optimal_performance to use this in both single-image and batch generations, with different configurations, and manually confirmed that it works. I will push that code, commented out, in a second commit so you can quickly try it.

@teftef6220
Collaborator

Thanks for the PR.
Our team is currently working on this.

@worthy7

worthy7 commented Jan 5, 2025

Hi there, bump on this!
