
Commit

update requirements and changelog
Signed-off-by: Vladimir Mandic <[email protected]>
vladmandic committed Dec 24, 2024
1 parent 6c2654d commit a71bae4
Showing 6 changed files with 11 additions and 13 deletions.
8 changes: 4 additions & 4 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -1,8 +1,8 @@
# Change Log for SD.Next

-## Update for 2024-12-23
+## Update for 2024-12-24

-### Highlights for 2024-12-23
+### Highlights for 2024-12-24

### SD.Next Xmass edition: *What's new?*

@@ -28,11 +28,11 @@ And a lot of **Control** and **IPAdapter** goodies
Plus couple of new integrated workflows such as [FreeScale](https://github.com/ali-vilab/FreeScale) and [Style Aligned Image Generation](https://style-aligned-gen.github.io/)

And it wouldn't be a *Xmass edition* without couple of custom themes: *Snowflake* and *Elf-Green*!
-All-in-all, we're around ~160 commits worth of updates, check changelog for full list
+All-in-all, we're around ~180 commits worth of updates, check the changelog for full list

[ReadMe](https://github.com/vladmandic/automatic/blob/master/README.md) | [ChangeLog](https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md) | [Docs](https://vladmandic.github.io/sdnext-docs/) | [WiKi](https://github.com/vladmandic/automatic/wiki) | [Discord](https://discord.com/invite/sd-next-federal-batch-inspectors-1101998836328697867)

-## Details for 2024-12-23
+## Details for 2024-12-24

### New models and integrations

5 changes: 2 additions & 3 deletions README.md
@@ -26,7 +26,7 @@
All individual features are not listed here, instead check [ChangeLog](CHANGELOG.md) for full list of changes
- Multiple UIs!
**Standard | Modern**
-- Multiple diffusion models!
+- Multiple [diffusion models](https://vladmandic.github.io/sdnext-docs/Model-Support/)!
- Built-in Control for Text, Image, Batch and video processing!
- Multiplatform!
**Windows | Linux | MacOS | nVidia | AMD | IntelArc/IPEX | DirectML | OpenVINO | ONNX+Olive | ZLUDA**
@@ -36,7 +36,6 @@ All individual features are not listed here, instead check [ChangeLog](CHANGELOG
- Optimized processing with latest `torch` developments with built-in support for `torch.compile`
and multiple compile backends: *Triton, ZLUDA, StableFast, DeepCache, OpenVINO, NNCF, IPEX, OneDiff*
- Built-in queue management
-- Enterprise level logging and hardened API
- Built in installer with automatic updates and dependency management
- Mobile compatible

@@ -68,7 +67,7 @@ SD.Next supports broad range of models: [supported models](https://vladmandic.gi
- Any GPU or device compatible with **OpenVINO** libraries on both *Windows and Linux*
- *Apple M1/M2* on *OSX* using built-in support in Torch with **MPS** optimizations
- *ONNX/Olive*
-- *AMD* GPUs on Windows using **ZLUDA** libraries
+- *AMD* GPUs on Windows using **ZLUDA** libraries

## Getting started

2 changes: 1 addition & 1 deletion extensions-builtin/stable-diffusion-webui-rembg
2 changes: 1 addition & 1 deletion installer.py
@@ -459,7 +459,7 @@ def check_python(supported_minors=[9, 10, 11, 12], reason=None):
def check_diffusers():
    if args.skip_all or args.skip_git:
        return
-    sha = '4b557132ce955d58fd84572c03e79f43bdc91450' # diffusers commit hash
+    sha = '6dfaec348780c6153a4cfd03a01972a291d67f82' # diffusers commit hash
    pkg = pkg_resources.working_set.by_key.get('diffusers', None)
    minor = int(pkg.version.split('.')[1] if pkg is not None else 0)
    cur = opts.get('diffusers_version', '') if minor > 0 else ''
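The pin update above relies on comparing the commit recorded for the installed `diffusers` package against the hardcoded hash, and reinstalling when they differ. A minimal sketch of that comparison logic (the helper names and the git URL form are illustrative, not SD.Next's actual functions):

```python
from typing import Optional

def diffusers_pip_spec(sha: str) -> str:
    # Build a pip requirement that installs diffusers from an exact upstream commit
    return f"git+https://github.com/huggingface/diffusers@{sha}"

def needs_update(pinned_sha: str, installed_sha: Optional[str]) -> bool:
    # Reinstall when no commit is recorded, or the recorded commit differs from the pin
    return installed_sha is None or installed_sha != pinned_sha

pinned = '6dfaec348780c6153a4cfd03a01972a291d67f82'
previous = '4b557132ce955d58fd84572c03e79f43bdc91450'
print(needs_update(pinned, previous))   # True: the recorded commit is stale
print(needs_update(pinned, pinned))     # False: already on the pinned commit
print(diffusers_pip_spec(pinned))
```

Pinning to a commit rather than a released version lets the installer track unreleased diffusers fixes while keeping every install reproducible.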
5 changes: 2 additions & 3 deletions modules/processing_vae.py
@@ -141,7 +141,8 @@ def full_vae_decode(latents, model):

    log_debug(f'VAE config: {model.vae.config}')
    try:
-        decoded = model.vae.decode(latents, return_dict=False)[0]
+        with devices.inference_context():
+            decoded = model.vae.decode(latents, return_dict=False)[0]
    except Exception as e:
        shared.log.error(f'VAE decode: {stats} {e}')
        if 'out of memory' not in str(e):
@@ -159,8 +160,6 @@ def full_vae_decode(latents, model):
            model.vae.apply(sd_models.convert_to_faketensors)
            devices.torch_gc(force=True)

-    # if shared.opts.diffusers_offload_mode == "balanced":
-    #     shared.sd_model = sd_models.apply_balanced_offload(shared.sd_model)
    elif shared.opts.diffusers_move_unet and not getattr(model, 'has_accelerate', False) and base_device is not None:
        sd_models.move_base(model, base_device)
    t1 = time.time()
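The fix wraps the VAE decode in `devices.inference_context()`, SD.Next's wrapper around an autograd-free context (in the spirit of `torch.no_grad()` / `torch.inference_mode()`), so the decode does not build a gradient graph. A minimal stand-in sketch of the pattern (the context manager and toy decoder here are hypothetical placeholders, not the real torch objects):

```python
import contextlib

@contextlib.contextmanager
def inference_context():
    # Stand-in for devices.inference_context(); in SD.Next this typically
    # disables autograd around inference so no gradient state is allocated
    yield

def decode_latents(latents, decode_fn):
    # Mirror the patched code path: run the decode inside the inference context
    with inference_context():
        return decode_fn(latents)

# Toy decoder standing in for model.vae.decode
print(decode_latents([0.5, 1.0], lambda lat: [2 * x for x in lat]))  # [1.0, 2.0]
```

Keeping the context entry inside the `try` block means an out-of-memory error raised during decode is still caught by the existing handler.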
2 changes: 1 addition & 1 deletion requirements.txt
@@ -45,7 +45,7 @@ accelerate==1.2.1
opencv-contrib-python-headless==4.9.0.80
einops==0.4.1
gradio==3.43.2
-huggingface_hub==0.26.5
+huggingface_hub==0.27.0
numexpr==2.8.8
numpy==1.26.4
numba==0.59.1
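Exact `==` pins like the `huggingface_hub` bump above keep installs reproducible: pip resolves each line to one specific version. A small sketch of splitting such a pin into its parts (the helper name is illustrative):

```python
def parse_pin(requirement: str) -> tuple:
    # Split an exact "name==version" requirement into its two components
    name, _, version = requirement.partition('==')
    return name.strip(), version.strip()

print(parse_pin('huggingface_hub==0.27.0'))  # ('huggingface_hub', '0.27.0')
```

Bumping a pin is therefore a one-line diff, which is why dependency updates like this one show up as a single `-`/`+` pair per package.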
