Merge pull request #3586 from vladmandic/dev
Release refresh
vladmandic authored Nov 22, 2024
2 parents a141e8c + 2b14727 commit 6846f4e
Showing 17 changed files with 188 additions and 152 deletions.
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,22 @@
# Change Log for SD.Next

## Update for 2024-11-22

- Model loader improvements:
  - detect model components on model load fail
  - Flux, SD35: force unload model
  - Flux: apply `bnb` quant when loading *unet/transformer* (see the sketch after this list)
  - Flux: all-in-one safetensors
    example: <https://civitai.com/models/646328?modelVersionId=1040235>
  - Flux: do not recast quants
- Sampler improvements:
  - update DPM FlowMatch samplers
- Fixes:
  - update `diffusers`
  - fix README links
  - fix sdxl controlnet single-file loader
  - relax settings validator
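
A minimal sketch of the quant-on-load path, assuming the pinned `diffusers` accepts `quantization_config` in `from_single_file`; the checkpoint path and option values are illustrative, not taken from this PR:

```python
import torch
import diffusers

# illustrative NF4 config; SD.Next builds the equivalent from its own settings
quant_config = diffusers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = diffusers.FluxTransformer2DModel.from_single_file(
    'flux1-dev.safetensors',            # hypothetical local checkpoint
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```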

## Update for 2024-11-21

### Highlights for 2024-11-21
12 changes: 6 additions & 6 deletions README.md
@@ -56,7 +56,7 @@ For screenshots and information on other available themes, see [Themes Wiki](ht
## Model support

Additional models will be added as they become available and there is public interest in them
-See [models overview](wiki/Models) for details on each model, including their architecture, complexity and other info
+See [models overview](https://github.com/vladmandic/automatic/wiki/Models) for details on each model, including their architecture, complexity and other info

- [RunwayML Stable Diffusion](https://github.com/Stability-AI/stablediffusion/) 1.x and 2.x *(all variants)*
- [StabilityAI Stable Diffusion XL](https://github.com/Stability-AI/generative-models), [StabilityAI Stable Diffusion 3.0](https://stability.ai/news/stable-diffusion-3-medium) Medium, [StabilityAI Stable Diffusion 3.5](https://huggingface.co/stabilityai/stable-diffusion-3.5-large) Medium, Large, Large Turbo
@@ -101,17 +101,17 @@ See [models overview](wiki/Models) for details on each model, including their ar

## Getting started

-- Get started with **SD.Next** by following the [installation instructions](wiki/Installation)
-- For more details, check out [advanced installation](wiki/Advanced-Install) guide
-- List and explanation of [command line arguments](wiki/CLI-Arguments)
+- Get started with **SD.Next** by following the [installation instructions](https://github.com/vladmandic/automatic/wiki/Installation)
+- For more details, check out [advanced installation](https://github.com/vladmandic/automatic/wiki/Advanced-Install) guide
+- List and explanation of [command line arguments](https://github.com/vladmandic/automatic/wiki/CLI-Arguments)
- Install walkthrough [video](https://www.youtube.com/watch?v=nWTnTyFTuAs)

> [!TIP]
> And for platform specific information, check out
-> [WSL](wiki/WSL) | [Intel Arc](wiki/Intel-ARC) | [DirectML](wiki/DirectML) | [OpenVINO](wiki/OpenVINO) | [ONNX & Olive](wiki/ONNX-Runtime) | [ZLUDA](wiki/ZLUDA) | [AMD ROCm](wiki/AMD-ROCm) | [MacOS](wiki/MacOS-Python.md) | [nVidia](wiki/nVidia)
+> [WSL](https://github.com/vladmandic/automatic/wiki/WSL) | [Intel Arc](https://github.com/vladmandic/automatic/wiki/Intel-ARC) | [DirectML](https://github.com/vladmandic/automatic/wiki/DirectML) | [OpenVINO](https://github.com/vladmandic/automatic/wiki/OpenVINO) | [ONNX & Olive](https://github.com/vladmandic/automatic/wiki/ONNX-Runtime) | [ZLUDA](https://github.com/vladmandic/automatic/wiki/ZLUDA) | [AMD ROCm](https://github.com/vladmandic/automatic/wiki/AMD-ROCm) | [MacOS](https://github.com/vladmandic/automatic/wiki/MacOS-Python.md) | [nVidia](https://github.com/vladmandic/automatic/wiki/nVidia)
> [!WARNING]
-> If you run into issues, check out [troubleshooting](wiki/Troubleshooting) and [debugging](wiki/Debug) guides
+> If you run into issues, check out [troubleshooting](https://github.com/vladmandic/automatic/wiki/Troubleshooting) and [debugging](https://github.com/vladmandic/automatic/wiki/Debug) guides
> [!TIP]
> All command line options can also be set via env variable
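
For example, assuming the `SD_` prefix convention used elsewhere in the SD.Next docs, `--debug` can also be enabled with the environment variable `SD_DEBUG=true`.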
1 change: 1 addition & 0 deletions TODO.md
@@ -8,6 +8,7 @@ Main ToDo list can be found at [GitHub projects](https://github.com/users/vladma
- SD35 LoRA: <https://github.com/huggingface/diffusers/issues/9950>
- Flux IPAdapter: <https://github.com/huggingface/diffusers/issues/9825>
- Flux Fill/ControlNet/Redux: <https://github.com/huggingface/diffusers/pull/9985>
- Flux NF4: <https://github.com/huggingface/diffusers/issues/9996>
- SANA: <https://github.com/huggingface/diffusers/pull/9982>

## Other
12 changes: 12 additions & 0 deletions cli/model-keys.py
@@ -38,6 +38,16 @@ def list_to_dict(flat_list):
    return result_dict


def list_compact(flat_list):
    # collapse each dotted key to its first two segments, keeping unique values in order
    result_list = []
    for item in flat_list:
        keys = item.split('.')
        keys = '.'.join(keys[:2])
        if keys not in result_list:
            result_list.append(keys)
    return result_list


def guess_dct(dct: dict):
    # if has(dct, 'model.diffusion_model.input_blocks') and has(dct, 'model.diffusion_model.label_emb'):
    #     return 'sdxl'
@@ -65,7 +75,9 @@ def read_keys(fn):
    except Exception as e:
        pprint(e)
    dct = list_to_dict(keys)
    lst = list_compact(keys)
    pprint(f'file: {fn}')
    pprint(lst)
    pprint(remove_entries_after_depth(dct, 3))
    pprint(remove_entries_after_depth(dct, 6))
    guess = guess_dct(dct)
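
For illustration, `list_compact` (the helper above) collapses full state-dict key names into their first two dot-separated segments; the key names below are made up:

```python
keys = [
    'model.diffusion_model.input_blocks.0.weight',
    'model.diffusion_model.output_blocks.1.bias',
    'first_stage_model.encoder.conv_in.weight',
]
print(list_compact(keys))
# ['model.diffusion_model', 'first_stage_model.encoder']
```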
2 changes: 1 addition & 1 deletion installer.py
@@ -459,7 +459,7 @@ def check_python(supported_minors=[9, 10, 11, 12], reason=None):
def check_diffusers():
    if args.skip_all or args.skip_requirements:
        return
-    sha = 'cd6ca9df2987c000b28e13b19bd4eec3ef3c914b'
+    sha = 'b5fd6f13f5434d69d919cc8cedf0b11db664cf06'
    pkg = pkg_resources.working_set.by_key.get('diffusers', None)
    minor = int(pkg.version.split('.')[1] if pkg is not None else 0)
    cur = opts.get('diffusers_version', '') if minor > 0 else ''
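
Bumping `sha` pins `diffusers` to an exact upstream commit; a hedged sketch of the equivalent manual install, assuming pip's git support (the command is not taken from the installer itself):

```python
import subprocess
import sys

sha = 'b5fd6f13f5434d69d919cc8cedf0b11db664cf06'
# install diffusers at the exact pinned commit
subprocess.run([
    sys.executable, '-m', 'pip', 'install',
    f'git+https://github.com/huggingface/diffusers@{sha}',
], check=True)
```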
25 changes: 22 additions & 3 deletions modules/model_flux.py
@@ -194,6 +194,7 @@ def load_transformer(file_path): # triggered by opts.sd_unet change
    if _transformer is not None:
        transformer = _transformer
    else:
        diffusers_load_config = model_quant.create_bnb_config(diffusers_load_config) # apply bnb quantization options to the load kwargs when enabled
        transformer = diffusers.FluxTransformer2DModel.from_single_file(file_path, **diffusers_load_config)
    if transformer is None:
        shared.log.error('Failed to load UNet model')
@@ -213,6 +214,11 @@ def load_flux(checkpoint_info, diffusers_load_config): # triggered by opts.sd_ch
    text_encoder_2 = None
    vae = None

    # unload current model
    sd_models.unload_model_weights()
    shared.sd_model = None
    devices.torch_gc(force=True)

    # load overrides if any
    if shared.opts.sd_unet != 'None':
        try:
@@ -305,8 +311,21 @@ def load_flux(checkpoint_info, diffusers_load_config): # triggered by opts.sd_ch
        repo_id = 'black-forest-labs/FLUX.1-dev' # workaround since sayakpaul model is missing model_index.json
    for c in kwargs:
        if kwargs[c].dtype == torch.float32 and devices.dtype != torch.float32:
-            shared.log.warning(f'Load model: type=FLUX component={c} dtype={kwargs[c].dtype} cast dtype={devices.dtype}')
+            shared.log.warning(f'Load model: type=FLUX component={c} dtype={kwargs[c].dtype} cast dtype={devices.dtype} recast')
            kwargs[c] = kwargs[c].to(dtype=devices.dtype)
-    kwargs = model_quant.create_bnb_config(kwargs)
-    pipe = diffusers.FluxPipeline.from_pretrained(repo_id, cache_dir=shared.opts.diffusers_dir, **kwargs, **diffusers_load_config)

+    allow_bnb = 'gguf' not in (sd_unet.loaded_unet or '') # skip bnb quant when a gguf-quantized unet is loaded
+    kwargs = model_quant.create_bnb_config(kwargs, allow_bnb)
+    if checkpoint_info.path.endswith('.safetensors') and os.path.isfile(checkpoint_info.path):
+        pipe = diffusers.FluxPipeline.from_single_file(checkpoint_info.path, cache_dir=shared.opts.diffusers_dir, **kwargs, **diffusers_load_config)
+    else:
+        pipe = diffusers.FluxPipeline.from_pretrained(repo_id, cache_dir=shared.opts.diffusers_dir, **kwargs, **diffusers_load_config)

    # release memory
    transformer = None
    text_encoder_1 = None
    text_encoder_2 = None
    vae = None
    devices.torch_gc()

    return pipe
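
To illustrate the new all-in-one branch, a minimal sketch of loading a single-file Flux checkpoint directly with `diffusers`; the file name, dtype, and prompt are assumptions:

```python
import torch
import diffusers

# hypothetical all-in-one checkpoint bundling transformer, text encoders, and vae
pipe = diffusers.FluxPipeline.from_single_file(
    'flux1-dev-all-in-one.safetensors',
    torch_dtype=torch.bfloat16,
)
pipe.to('cuda')
image = pipe('a watercolor fox', num_inference_steps=20).images[0]
image.save('fox.png')
```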
6 changes: 3 additions & 3 deletions modules/model_quant.py
@@ -7,10 +7,10 @@
quanto = None


-def create_bnb_config(kwargs = None):
+def create_bnb_config(kwargs = None, allow_bnb: bool = True):
    from modules import shared, devices
-    if len(shared.opts.bnb_quantization) > 0:
-        if 'Model' in shared.opts.bnb_quantization and 'transformer' not in (kwargs or {}):
+    if len(shared.opts.bnb_quantization) > 0 and allow_bnb:
+        if 'Model' in shared.opts.bnb_quantization:
            load_bnb()
            bnb_config = diffusers.BitsAndBytesConfig(
                load_in_8bit=shared.opts.bnb_quantization_type in ['fp8'],
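The construction is truncated here; a hedged sketch of a complete standalone config for the 4-bit path, using the standard `BitsAndBytesConfig` options (the values are assumptions, not SD.Next defaults):

```python
import torch
import diffusers

bnb_config = diffusers.BitsAndBytesConfig(
    load_in_8bit=False,                    # the fp8 path shown above
    load_in_4bit=True,                     # e.g. nf4/fp4 quantization types
    bnb_4bit_quant_type='nf4',             # bitsandbytes 4-bit variant
    bnb_4bit_compute_dtype=torch.bfloat16, # dtype used for runtime matmuls
)
# assumed shape of the kwargs that create_bnb_config returns
kwargs = {'quantization_config': bnb_config}
```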
7 changes: 6 additions & 1 deletion modules/model_sd3.py
@@ -120,6 +120,11 @@ def load_sd3(checkpoint_info, cache_dir=None, config=None):
    repo_id = sd_models.path_to_repo(checkpoint_info.name)
    fn = checkpoint_info.path

    # unload current model
    sd_models.unload_model_weights()
    shared.sd_model = None
    devices.torch_gc(force=True)

    kwargs = {}
    kwargs = load_overrides(kwargs, cache_dir)
    if fn is None or not os.path.exists(fn):
@@ -152,5 +157,5 @@
        config=config,
        **kwargs,
    )
-    devices.torch_gc(force=True)
+    devices.torch_gc()
    return pipe
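
The Flux and SD3 loaders now share the same unload-before-load pattern; as a standalone sketch, with module paths as in this repo:

```python
from modules import sd_models, shared, devices

# drop the current pipeline first so peak memory holds one model, not two
sd_models.unload_model_weights()
shared.sd_model = None
devices.torch_gc(force=True)  # aggressive collection before the large allocation

# ... load the new pipeline here ...

devices.torch_gc()  # lighter pass once the load has settled
```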
10 changes: 10 additions & 0 deletions modules/model_tools.py
@@ -13,6 +13,16 @@ def remove_entries_after_depth(d, depth, current_depth=0):
    return d


def list_compact(flat_list):
    # same helper as in cli/model-keys.py: collapse dotted keys to their first two segments
    result_list = []
    for item in flat_list:
        keys = item.split('.')
        keys = '.'.join(keys[:2])
        if keys not in result_list:
            result_list.append(keys)
    return result_list


def list_to_dict(flat_list):
    result_dict = {}
    try: