
Make enable/disable idempotent #52

Open

rfan-debug wants to merge 8 commits into master
Conversation

rfan-debug

Changes

  • Make enable/disable idempotent through an _enabled flag.

Existing issues

In the current code, if you call helper.enable() twice, the next pipeline call fails with infinite recursion.
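A minimal reproduction, following the helper usage from this repo's README (the checkpoint and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)

prompt = "a photo of an astronaut riding a horse"

helper.enable()
helper.enable()  # second call: nothing in the current code guards against this
# Generate Image
deepcache_image = pipe(
    prompt,
    output_type='pt'
).images[0]
```

The generation step then fails as follows: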

```
---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
Cell In[23], line 5
      3 helper.enable()
      4 # Generate Image
----> 5 deepcache_image = pipe(
      6     prompt,
      7     output_type='pt'
      8 ).images[0]

File ~/miniconda3/envs/ml-train/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    113 @functools.wraps(func)
    114 def decorate_context(*args, **kwargs):
    115     with ctx_factory():
--> 116         return func(*args, **kwargs)

File ~/miniconda3/envs/ml-train/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:1000, in StableDiffusionPipeline.__call__(self, prompt, height, width, num_inference_steps, timesteps, sigmas, guidance_scale, negative_prompt, num_images_per_prompt, eta, generator, latents, prompt_embeds, negative_prompt_embeds, ip_adapter_image, ip_adapter_image_embeds, output_type, return_dict, cross_attention_kwargs, guidance_rescale, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, **kwargs)
    997 latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
    999 # predict the noise residual
-> 1000 noise_pred = self.unet(
   1001     latent_model_input,
   1002     t,
   1003     encoder_hidden_states=prompt_embeds,
   1004     timestep_cond=timestep_cond,
   1005     cross_attention_kwargs=self.cross_attention_kwargs,
   1006     added_cond_kwargs=added_cond_kwargs,
   1007     return_dict=False,
   1008 )[0]
```
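
With an _enabled flag, repeated calls become no-ops. A minimal sketch of that guard (only enable, disable, and _enabled come from this PR; the wrapping mechanics and the other names are illustrative, not the helper's actual internals):

```python
class DeepCacheSDHelper:
    def __init__(self, pipe):
        self.pipe = pipe
        self._enabled = False  # the flag this PR introduces

    def enable(self):
        # Idempotent: without this guard, a second enable() would save the
        # already-wrapped forward as the "original", so the wrapper would
        # call itself forever (the RecursionError in the traceback above).
        if self._enabled:
            return
        self._original_forward = self.pipe.unet.forward  # illustrative name

        def cached_forward(*args, **kwargs):
            # ...DeepCache feature caching would happen here...
            return self._original_forward(*args, **kwargs)

        self.pipe.unet.forward = cached_forward
        self._enabled = True

    def disable(self):
        # Idempotent: disabling twice, or before enable(), is a no-op.
        if not self._enabled:
            return
        self.pipe.unet.forward = self._original_forward
        self._enabled = False
```

With the guard in place, disable() always restores the forward saved by the first effective enable(), so enable()/disable() can be called repeatedly and in any order without corrupting the pipeline.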
