
Allocation on device #32

Open
ALBIHANY opened this issue Aug 14, 2024 · 5 comments

Comments

@ALBIHANY

I'm getting this error, which I assume means I don't have enough VRAM. However, I'm able to run the FP8 version of Flux-dev, and I can run this exact same model on Forge WebUI with no issues at all, so I'm not sure what's going on here. I thought ComfyUI was supposed to be better optimized than WebUI, yet the model runs on Forge and not on Comfy!

[screenshot of the error]

@ChibiChubu

Did you put --high/med/low in the launch arguments for Comfy?

@ALBIHANY (Author)

> Did you put --high/med/low in the launch arguments for Comfy?

If you mean ComfyUI launch arguments, then yes, I use the --lowvram argument, but I don't think it has any effect on this issue. Again, I can run the FP8 version with the same settings; it's just slow, and I was hoping the NF version would be faster. I'm also currently running the same NF model file on Forge WebUI and it works smoothly. I think something in ComfyUI is mismanaging memory with that model.
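For context, ComfyUI selects its VRAM management strategy via mutually exclusive launch flags passed to its entry script. A minimal sketch of the two launch variants discussed in this thread, assuming a standard ComfyUI checkout where `main.py` is the entry point:

```shell
# Launch ComfyUI with aggressive offloading (the setting used above):
# --lowvram keeps most model weights in system RAM and streams them
# to the GPU as needed, trading speed for a smaller VRAM footprint.
python main.py --lowvram

# Launch with ComfyUI's default automatic VRAM management
# (i.e. with the manual VRAM flags removed, as suggested below).
python main.py
```

Note that these flags only change how ComfyUI schedules weights between RAM and VRAM; they don't reduce the peak allocation a particular model or LoRA requests, which is why an "Allocation on device" error can persist either way.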

@ChibiChubu

> If you mean ComfyUI launch arguments, then yes, I use the --lowvram argument, but I don't think it has any effect on this issue. …

I ask because I had the same issue (I'm using a 3090); it only happened when I used the Boreal LoRA. I had to remove those arguments, and then it worked perfectly.

@ALBIHANY (Author)

> I had the same issue (I'm using a 3090); it only happened when I used the Boreal LoRA. I had to remove those arguments, and then it worked perfectly.

I'll give it a shot today and report back if that actually works.

@ALBIHANY (Author)

> I'll give it a shot today and report back if that actually works.

Just tried it. It doesn't work; still the same issue.
