support HF models - freepik as first example #84
Conversation
```python
MODEL_INFERENCE_STEPS = {
    "dev": 14,
    "schnell": 4,
    "freepik-lite-8b-alpha": 22,
}
```
From their readme:

> Flux.1 Lite is ready to unleash your creativity! For the best results, we strongly recommend using a guidance_scale of 3.5 and setting n_steps between 22 and 30.

I feel conflicted about having this value higher than dev's. I will play around to see if we can live with the 10-14 range and give users faster generations by default.
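To make the default-vs-override tradeoff concrete, here is a minimal sketch of how a per-model step default with a user override could be resolved. The `resolve_steps` helper is hypothetical, not part of mflux; only the dictionary values come from this PR:

```python
# Per-model default inference steps, as proposed in this PR.
MODEL_INFERENCE_STEPS = {
    "dev": 14,
    "schnell": 4,
    "freepik-lite-8b-alpha": 22,
}


def resolve_steps(model_alias, user_steps=None):
    """Return the user-requested step count if given, else the model default.

    `resolve_steps` is an illustrative helper, not an mflux API.
    """
    if user_steps is not None:
        return user_steps
    try:
        return MODEL_INFERENCE_STEPS[model_alias]
    except KeyError:
        raise ValueError(f"unknown model alias: {model_alias!r}")
```

With this shape, the Freepik default could stay at the readme-recommended 22 while a user who prefers speed passes a lower step count explicitly.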
```python
FLUX1_DEV = ("black-forest-labs/FLUX.1-dev", "dev", 1000, 512)
FLUX1_SCHNELL = ("black-forest-labs/FLUX.1-schnell", "schnell", 1000, 256)
# ==== third party compatible models - best effort compatibility =====
# https://huggingface.co/Freepik/flux.1-lite-8B-alpha distilled from dev
FLUX1_FREEPIK_LITE_8B_ALPHA = ("Freepik/flux.1-lite-8B-alpha", "freepik-lite-8b-alpha", 1000, 512)
```
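As a readability sketch, the bare 4-tuples above could be wrapped in a `NamedTuple` so each field is self-documenting. The class and the field names are assumptions on my part (the repo id and alias are clear from the diff; the meanings of the two trailing integers are guesses labeled as such):

```python
from typing import NamedTuple


class ModelSpec(NamedTuple):
    """Hypothetical wrapper for the config tuples in this PR; not an mflux class."""

    hf_repo: str     # HuggingFace repo id, e.g. "Freepik/flux.1-lite-8B-alpha"
    alias: str       # short user-facing name, e.g. "freepik-lite-8b-alpha"
    steps: int       # assumed meaning of the third field (1000 in all entries)
    seq_len: int     # assumed meaning of the fourth field (512 or 256)


FLUX1_FREEPIK_LITE_8B_ALPHA = ModelSpec(
    "Freepik/flux.1-lite-8B-alpha", "freepik-lite-8b-alpha", 1000, 512
)
```

A `NamedTuple` unpacks exactly like the existing tuples, so call sites would not need to change.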
I considered naming this FLUX1_FREEPIK_LITE_8B and pointing it at the beta/final releases as the model matures. However, per the Zen of Python we should be explicit rather than implicit, so I prefer to let users know exactly what they are asking for and what they will receive.
Todo:
- Not able to exactly match the output using the example in the model card: https://huggingface.co/Freepik/flux.1-lite-8B-alpha#text-to-image
- Running the diffusers example as is (but changing the device) is a head scratcher for now; I think this may be worth solving to see where mflux might differ from diffusers in its default implementations.
Closing for now; not interesting enough to prioritize. Issue #101 is much more interesting and can reuse some of the work already done here.
Change
I nominate that we support compatible and trending third-party models based on:
The first model to meet these criteria for me is https://huggingface.co/Freepik/flux.1-lite-8B-alpha. It is a distilled version of FLUX.1-dev; read more at the link.
The claim:
Tradeoffs
The model configs are hard-coded for now. Hypothetically letting anyone choose any HuggingFace-hosted model is possible, but it requires some config refactoring and also invites users to attempt incompatible models, creating issue-tracker noise.
For now, I think this can make some users happy (those with less RAM) while letting mflux be seen as inclusive of third-party releases.
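The hard-coded-configs tradeoff above could be enforced with a simple allowlist check that fails fast on unsupported repos. This is an illustrative sketch, not mflux code; only the repo ids come from this PR:

```python
# Repos named in this PR; the validation helper itself is hypothetical.
SUPPORTED_REPOS = {
    "black-forest-labs/FLUX.1-dev",
    "black-forest-labs/FLUX.1-schnell",
    "Freepik/flux.1-lite-8B-alpha",
}


def validate_repo(repo_id):
    """Raise ValueError for repos outside the hard-coded allowlist."""
    if repo_id not in SUPPORTED_REPOS:
        supported = ", ".join(sorted(SUPPORTED_REPOS))
        raise ValueError(
            f"{repo_id!r} is not a supported model; supported repos: {supported}"
        )
    return repo_id
```

An explicit error message like this steers users away from filing issues about arbitrary incompatible models while keeping the door open for adding new vetted entries.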