Which folder do I put the MLX Flux dev model in for use in ComfyUI? #67
Comments
If your use case involves LoRA or ControlNet, you'll still need to learn how to use the terminal and follow the official tutorial, and you can skip the rest of this answer. However, if you only need basic text2img functionality, you might want to try this: https://github.com/raysers/Mflux-ComfyUI

The model will be automatically downloaded to the models/mflux folder under the ComfyUI directory. Currently, only the 4-bit quantized version is used, but you can also download it manually and place it in this directory. The model identifiers on Hugging Face are madroid/flux.1-schnell-mflux-4bit and madroid/flux.1-dev-mflux-4bit.

Until official ComfyUI support is available, this can serve as a temporary solution. You can also find the plugin directly in ComfyUI Manager by searching for "mflux".
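The download location described above can be sketched as a small helper. The repo ids come from the comment; the idea of one subfolder per repo under models/mflux is my assumption, so check the plugin's README before relying on the exact layout:

```python
from pathlib import Path

# Hugging Face repo ids quoted in the comment above
HF_REPOS = {
    "schnell": "madroid/flux.1-schnell-mflux-4bit",
    "dev": "madroid/flux.1-dev-mflux-4bit",
}

def mflux_model_dir(comfyui_root: str, variant: str) -> Path:
    """Folder where a manually downloaded model would go.

    Assumes one subfolder per repo under models/mflux -- verify
    against the plugin's actual layout before copying files in.
    """
    repo_name = HF_REPOS[variant].split("/")[-1]
    return Path(comfyui_root) / "models" / "mflux" / repo_name

print(mflux_model_dir("/opt/ComfyUI", "dev"))
# -> /opt/ComfyUI/models/mflux/flux.1-dev-mflux-4bit
```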
Hi, I'm French. MLX looks like an easy way to get good render times with Flux, because on my M1 a single Flux dev image takes 45 minutes 😂. Can you help me use Flux on a Mac M1 with normal rendering times? I use ComfyUI.
Best regards,
Stéphane Niati
Director Agence DSTP
Events / Web / Print / Design / Video
www.dstp.fr
***@***.***
+33 (6) 64.33.25.02
1 Allée du Vallon, 06500 Sainte-Agnès
dstp.fr
Of course, happy to help. You can try installing the Mflux-ComfyUI plugin in your ComfyUI. For installation instructions, please visit https://github.com/raysers/Mflux-ComfyUI. Alternatively, if you already have ComfyUI Manager installed, you can quickly install the plugin by searching for "Mflux-ComfyUI" in its node manager.

After installing the plugin and restarting ComfyUI, right-click to create a new node and find the Mflux section. Create the "Quick MFlux Generation" node, connect its output image to ComfyUI's save image node, and you can then test your speed. The first run will need some time to download the model from Hugging Face; as mentioned above, the model path is your_ComfyUI/models/mflux.

Note that I'm just a beginner developer. So far I've only tested this plugin on my M1 Pro with 16 GB, macOS Ventura 13.6, Python 3.11.9, and Torch 2.3.1. The plugin runs well in my environment.
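The manual-install layout described above can be sketched as a quick check. This assumes ComfyUI's usual custom_nodes convention and that the plugin folder keeps the repo's name; both are assumptions to verify against your install:

```python
from pathlib import Path

def plugin_installed(comfyui_root: str, plugin: str = "Mflux-ComfyUI") -> bool:
    """Check whether the plugin folder exists where ComfyUI scans for nodes.

    Manual install is cloning https://github.com/raysers/Mflux-ComfyUI into
    <ComfyUI>/custom_nodes and restarting ComfyUI; ComfyUI Manager does the
    same thing for you when you install from its node manager.
    """
    return (Path(comfyui_root) / "custom_nodes" / plugin).is_dir()
```

If this returns False after an install via ComfyUI Manager, check what the folder actually got named under custom_nodes.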
Hi from France. With my Mac M1 Pro (32 GB): 10 minutes for one 1024x1024 picture!! 7 minutes with the Draw Things app on Mac.
I think you're using dev at 20 steps or more. I'm sorry I can't help you further: my macOS Ventura install doesn't support PyTorch's bf16 precision, so I've never used ComfyUI's regular Flux or GGUF workflows and can't provide a comparison of generation times. As I once mentioned, Mflux is my only successful attempt. There may be other implementations, like the Draw Things app you mentioned, or you could try the recently released ComfyUI version of DiffusionKit. These are all great options for Mac users.

For now, I still only intend to use Mflux. Although my old 16 GB machine can only run Schnell, being able to generate images in two steps that aren't inferior to my past 20-step SDXL results already satisfies me.
Which folder do I put the MLX Flux dev model in for use in ComfyUI?
How do I install it locally?