Mac m1 auto #677

Open · wants to merge 4 commits into master

Conversation

@cmutnik commented Mar 23, 2024

Closes issue #35

This code is not complete; I am opening a PR so we can discuss structure and things like the compose context.
You seem to know your way around compose far better than I do, so should we add a new dir for auto-m1, or just store two Dockerfiles in ./services/AUTOMATIC1111 and provide the context in the compose file, like so:

    build: 
      context: ./services/AUTOMATIC1111
      dockerfile: Dockerfile.x86
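
For reference, here's a rough sketch of the two-Dockerfiles-in-one-dir option (the service and profile names are my own placeholders, not necessarily what we'd use):

    services:
      auto:
        profiles: ["auto"]
        build:
          context: ./services/AUTOMATIC1111
          dockerfile: Dockerfile.x86   # existing x86 build
      auto-m1:
        profiles: ["auto-m1"]
        build:
          context: ./services/AUTOMATIC1111
          dockerfile: Dockerfile       # m1 build from this PR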

Right now, ./services/AUTOMATIC1111/Dockerfile works, and if you spin it up with the following command, it will run on a Mac M1:

docker compose --profile auto-m1 up --build

That said, the Dockerfile still needs to be cleaned up (something I'm in the process of doing in the file Dockerfilemerged). Also, what are your thoughts on the added *.py and *.sh files? Should we keep them, since they contain the edits needed for running on a Mac M1, or should we try to translate the changes into sed commands and modify the existing files in place? See the sketch below.
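
If we went the sed route, it would look something like this in the Dockerfile (the file name and flags below are placeholders for illustration, not the actual edits):

    # hypothetical example: rewrite a line in an upstream script
    # in place instead of shipping an edited copy of the file
    RUN sed -i 's/--old-flag/--new-flag/g' some_script.sh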

Note: the non-M1 Dockerfile has been preserved on this branch; it is currently named ./services/AUTOMATIC1111/Dockerfile.x86.

(Then we'll definitely need to squash the commit history, haha.)

@cmutnik mentioned this pull request Mar 23, 2024
@cmutnik (Author) commented Mar 23, 2024

The main setup wiki will also need to be updated to reflect the new option:

# where [ui] is one of: invoke | auto | auto-cpu | auto-m1 | comfy | comfy-cpu

@gianiaz commented Apr 14, 2024

I tried your PR on an M1 Max processor; this is the output:

 ✔ Container webui-docker-auto-m1-1  Created                                                                                                                                             0.1s
Attaching to auto-m1-1
auto-m1-1  | no module 'xformers'. Processing without...
auto-m1-1  | no module 'xformers'. Processing without...
auto-m1-1  | /usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.2
auto-m1-1  |   warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
auto-m1-1  | No module 'xformers'. Proceeding without it.
auto-m1-1  | Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
auto-m1-1  | Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/Stable-diffusion/sd-v1-5-inpainting.ckpt: Running on local URL:  http://0.0.0.0:7860
auto-m1-1  |
auto-m1-1  | To create a public link, set `share=True` in `launch()`.
auto-m1-1  | Startup time: 7.7s (import torch: 4.0s, import gradio: 0.9s, setup paths: 0.8s, initialize shared: 0.1s, other imports: 0.5s, list SD models: 0.3s, load scripts: 0.4s, create ui: 0.6s).
auto-m1-1  | c6bbc15e3224e6973459ba78de4998b80b50112b0ae5b5c67113d56b4e366b19
auto-m1-1  | Loading weights [c6bbc15e32] from /stable-diffusion-webui/models/Stable-diffusion/Stable-diffusion/sd-v1-5-inpainting.ckpt
auto-m1-1  | Creating model from config: /stable-diffusion-webui/configs/v1-inpainting-inference.yaml
vocab.json: 100% 961k/961k [00:00<00:00, 2.29MB/s]
merges.txt: 100% 525k/525k [00:00<00:00, 1.67MB/s]
special_tokens_map.json: 100% 389/389 [00:00<00:00, 517kB/s]
tokenizer_config.json: 100% 905/905 [00:00<00:00, 1.15MB/s]
config.json: 100% 4.52k/4.52k [00:00<00:00, 3.56MB/s]
auto-m1-1  | Applying attention optimization: sdp... done.
auto-m1-1  | Model loaded in 22.9s (calculate hash: 14.0s, load weights from disk: 4.2s, create model: 3.6s, apply weights to model: 0.5s, calculate empty prompt: 0.4s).
auto-m1-1  | [W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.

Thank you for your effort :-)

@cmutnik (Author) commented Apr 26, 2024

That's too bad; it works fully on the M3 Max I tested on. It looks like you may need to set USE_NNPACK=0 and/or pass --no-deps when installing torch/torchvision.

I wish I had an M1 Max to test it on.
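
Something like this is what I have in mind (an untested sketch, and I'm assuming USE_NNPACK is actually honored at install time):

    # untested sketch of the workaround described above
    export USE_NNPACK=0
    pip install --no-deps torch torchvision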

@cmutnik (Author) commented Apr 26, 2024

Want me to close the PR, or leave it open for someone else to try their hand at?

@tasmith039 commented

@cmutnik Good work putting this together. Just wanted to chime in and say I tested this out on my M2 Air and it worked perfectly fine. I am not skilled enough to give helpful feedback on the PR, but at least I can confirm it works.

@cmutnik (Author) commented Sep 7, 2024

@gianiaz that looks like a CUDA/torch issue. Maybe try it with export USE_NNPACK=0; export CUDA_VISIBLE_DEVICES="", reinstalling cmake, and then reinstalling torch...? Not sure, but I'm down to pair-program on a call one day, if you're up for it, and mess with the error in real time.
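
Roughly this sequence is what I'd try (untested; whether pip-reinstalling cmake helps here is a guess on my part):

    # untested sketch of the debugging steps above
    export USE_NNPACK=0
    export CUDA_VISIBLE_DEVICES=""
    pip install --force-reinstall cmake
    pip install --force-reinstall torch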

The only other idea I have would be to specify the platform with docker buildx, because even though it's a hardware issue, some base linux/amd64 images can run on Apple silicon (I know from personal experience), and if you deactivate GPU usage, the difference in physical components won't play a role.
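
Something along these lines (the image tag here is just a placeholder):

    # hypothetical: pin the build platform to amd64 on Apple silicon
    docker buildx build --platform linux/amd64 -t webui-docker-auto-m1 ./services/AUTOMATIC1111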

Thanks @tasmith039, I'm happy it worked; I'm not sure what changed between M1 and M2/M3, though.

@dmitry-buzzwoo commented

Hey guys! Thanks for your effort! Any updates on this? I'm looking forward to it! :)
