
MLX? #34

Open
rovo79 opened this issue Dec 6, 2023 · 6 comments

Comments

rovo79 (Contributor) commented Dec 6, 2023

@aszc-dev
Anything in here that might lend itself to CoreMLSuite efforts?
MLX: An array framework for Apple silicon

https://github.com/ml-explore/mlx

MLX is an array framework for machine learning on Apple silicon, brought to you by Apple machine learning research.

Some key features of MLX include:

  • Familiar APIs: MLX has a Python API that closely follows NumPy. MLX also has a fully featured C++ API, which closely mirrors the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building more complex models.
  • Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
  • Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.
  • Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.
  • Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).
  • Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.
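
The "lazy computation" feature above is worth unpacking: expressions build a graph, and arithmetic only runs when a result is actually needed. A minimal sketch of that idea in pure Python (illustrative only — this is not MLX code, and the class and method names are made up for the example):

```python
# Minimal sketch of lazy evaluation: operations build a graph of thunks,
# and values are only materialized when eval() is called.

class LazyArray:
    def __init__(self, compute, inputs=()):
        self._compute = compute   # thunk that produces the value
        self._inputs = inputs     # upstream nodes (records graph structure)
        self._value = None        # cached once materialized

    @staticmethod
    def from_list(data):
        return LazyArray(lambda: list(data))

    def __add__(self, other):
        return LazyArray(
            lambda: [a + b for a, b in zip(self.eval(), other.eval())],
            inputs=(self, other),
        )

    def __mul__(self, other):
        return LazyArray(
            lambda: [a * b for a, b in zip(self.eval(), other.eval())],
            inputs=(self, other),
        )

    def eval(self):
        # Materialize on demand, caching the result.
        if self._value is None:
            self._value = self._compute()
        return self._value

x = LazyArray.from_list([1.0, 2.0, 3.0])
y = LazyArray.from_list([4.0, 5.0, 6.0])
z = x * y + x       # no arithmetic has run yet; z is just a graph
print(z.eval())     # materialization happens here -> [5.0, 12.0, 21.0]
```

In MLX the same deferral happens under the hood; combined with unified memory, the framework can decide at evaluation time which device runs each operation without copying arrays around.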

MLX is designed by machine learning researchers for machine learning researchers. The framework is intended to be user-friendly, but still efficient to train and deploy models. The design of the framework itself is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.

BuildBackBuehler commented Dec 7, 2023

I checked this out earlier and got it to work. As far as I can tell... 300%! I could be wrong, but as far as I can tell, we can now run ComfyUI natively!!!

I'm not much of a technical person, but MLX can replace TorchSDE as far as I can tell. That is the last requirement of ComfyUI's base that wasn't native.

TorchSDE is solely utilized here https://github.com/comfyanonymous/ComfyUI/blob/248d9125b0821851ea4b7c749df20a040f5ebe57/comfy/k_diffusion/sampling.py#L6
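
For context on what would need replacing: the SDE samplers need Brownian-motion noise that is reproducible for a given time interval, which is the service torchsde provides there. A simplified conceptual stand-in (not torchsde's actual API — the class name and seeding scheme here are invented for illustration):

```python
# Conceptual sketch: Brownian-style noise that is deterministic per
# (t0, t1) interval, so a sampler queried twice for the same interval
# gets identical noise. Real samplers use torchsde for this.

import hashlib
import random

class ReproducibleNoise:
    def __init__(self, seed, size):
        self.seed = seed
        self.size = size

    def __call__(self, t0, t1):
        # Derive a deterministic sub-seed from the interval endpoints,
        # then draw Gaussian noise scaled by sqrt(|t1 - t0|), matching
        # the variance scaling of Brownian increments.
        key = hashlib.sha256(f"{self.seed}:{t0}:{t1}".encode()).digest()
        rng = random.Random(key)
        scale = abs(t1 - t0) ** 0.5
        return [rng.gauss(0.0, 1.0) * scale for _ in range(self.size)]

noise = ReproducibleNoise(seed=42, size=4)
a = noise(1.0, 0.5)
b = noise(1.0, 0.5)   # same interval -> identical noise
assert a == b
```

Any MLX replacement would need to preserve this determinism, since it is what makes samples reproducible for a fixed seed.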

I'm currently trying to patch things in, but it is a mess. Maybe I'll be able to do it with the Suite's help. I was running into some issues because I was trying to mesh ComfyUI with coreml-stable-diff.

Edit: eh, not so certain I'll be able to get it all together, but I imagine someone can. The attention.py module was tripping up my run; CoreML's needs to be meshed with ComfyUI's.

Edit 2: As for the common extended features, I can say...
I know there's an onnx-runtime-silicon. I was using Miniforge, so I just ran "onnx-runtime" and was able to get it. There's also onnx-coreml (in lieu of onnx). Torchaudio is available via the nightly channel: conda install torchaudio -c pytorch-nightly

escoolioinglesias commented Dec 7, 2023

@BuildBackBuehler Thank you for your efforts! Have you been able to get any improvements on inference? Really excited to know :)

BuildBackBuehler commented Dec 9, 2023

> @BuildBackBuehler Thank you for your efforts! Have you been able to get any improvements on inference? Really excited to know :)

Like I said, I didn't get close to making it work. However, I've been picking up a bit to understand the basics. It seems there is a way to simplify and semi-automate the process of converting all the Torch refs to MLX refs; someone actually in the ML field would probably be able to do this with ease. I was working with LLMs, and it seems MLX, PyTorch, and TensorFlow all have a common dictionary that allows one to convert framework-to-framework. Just a total shot in the dark, but I presume that after the conversion, it'd likely log all the inconsistencies/unsupported definitions that the target dictionary has no entry for.
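
The idea sketched above — translate known operator names through a mapping table and log everything unmatched for manual porting — could look roughly like this. The table entries are illustrative guesses, not an authoritative Torch-to-MLX mapping:

```python
# Rough sketch of semi-automated framework conversion: map operator
# names through a translation table, and collect anything with no
# known counterpart so it can be flagged for manual porting.

TORCH_TO_MLX = {
    # Illustrative entries only; a real table would be much larger
    # and verified against MLX's actual API surface.
    "torch.matmul": "mlx.core.matmul",
    "torch.softmax": "mlx.core.softmax",
    "torch.cat": "mlx.core.concatenate",
}

def convert_ops(ops):
    converted, unsupported = [], []
    for op in ops:
        if op in TORCH_TO_MLX:
            converted.append(TORCH_TO_MLX[op])
        else:
            unsupported.append(op)  # would be logged for manual work
    return converted, unsupported

done, todo = convert_ops(["torch.matmul", "torch.special.erfinv"])
print(done)  # ['mlx.core.matmul']
print(todo)  # ['torch.special.erfinv']
```

The hard part in practice is not the renaming but the semantic mismatches (argument orders, dtype handling, in-place ops) that a simple table cannot express.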

But the Apple team posted some naked SD performance #s here – and as you can see, no Pytorch necessary!

cchance27 (Contributor) commented Dec 16, 2023

Never mind my previous comment. I forgot how god-awful SDXL is on Mac; I've been using 1.5 and forgot how slow it is. Yeah, MLX is a step in the right direction lol

cchance27 (Contributor) commented:

A few notes: there seems to be a lack of safetensors support in MLX currently (they're working on adding it), and there's also no support for CoreML that I can tell...

I still think it would be possible to do, but it's a lot of work, I think, since we couldn't rely on the base model references. It might need to be a separate undertaking, like I said, since I don't think it works with CoreML.
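
On the safetensors point: the file format itself is simple — an 8-byte little-endian header length, a JSON header describing each tensor's dtype, shape, and byte offsets, then the raw tensor data. A minimal header-reader sketch (pure Python, for illustration; the real `safetensors` library should be used in practice):

```python
# Parse the header of a safetensors blob: 8-byte little-endian u64
# header length, then a JSON header, then raw tensor bytes.

import json
import struct

def read_safetensors_header(blob: bytes) -> dict:
    (header_len,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8:8 + header_len].decode("utf-8"))

# Build a tiny in-memory example: one F32 tensor of 2 values.
header = json.dumps(
    {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
).encode()
blob = struct.pack("<Q", len(header)) + header + struct.pack("<2f", 1.0, 2.0)

meta = read_safetensors_header(blob)
print(meta["w"]["shape"])  # [2]
```

So bridging the gap is mostly a matter of reading these headers and mapping the raw buffers into a framework's own array type, which is presumably what the pending MLX support does.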

JTZ18 commented Dec 30, 2023

Does anyone know if there's an online community of MLX enthusiasts looking to integrate MLX into different AI applications like Automatic1111, ComfyUI, Fooocus, ollama, etc.?
