No models work except the standard one - bad magic #205
Comments
I'm having the same issue, unfortunately, and I've made no progress either. I'm trying to use ggml-vicuna-7b-4bit.
Where can I download those models? Why is nothing specified in the docs?
https://medium.com/@martin-thissen/vicuna-on-your-cpu-gpu-best-free-chatbot-according-to-gpt-4-c24b322a193a
I've got the same issue. I have the 7B model working via chat_mac.sh, but it can't see any models other than 7B. I even tried renaming the 13B model the same way as the 7B one, but got "bad magic". In other cases it looks for the 7B model and says: llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ... I also tried ./chat_mac -m alpaca-13b-ggml-q4_0-lora-merged/ggml-model-q4_0.bin but got the same "bad magic" error.
I found this check (the one that raises "bad magic") in the source code and removed it, but unfortunately that did not fix the situation; a specially converted model is needed there.
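For context, the check referred to above is a simple four-byte magic-number comparison at the start of the model file. A minimal Python sketch of the same idea (the 0x67676d6c value, ASCII "ggml", matches the original format checked by early llama.cpp/alpaca.cpp; the exact expected value depends on your checkout, and newer file formats use different magics, which is also why removing the check alone doesn't help: the data that follows is laid out differently too):

```python
import struct

# Magic for the original ggml file format (an assumption based on early
# llama.cpp source); stored as a little-endian uint32 at file offset 0.
GGML_MAGIC = 0x67676D6C  # ASCII "ggml"

def read_magic(path):
    """Return the first 4 bytes of a file as a little-endian uint32."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return magic

def check_magic(path):
    """Mimic the loader's check: raise on a mismatched magic number."""
    magic = read_magic(path)
    if magic != GGML_MAGIC:
        raise ValueError(
            f"bad magic: 0x{magic:08x} (expected 0x{GGML_MAGIC:08x})"
        )
    return True
```

Running read_magic on a model that fails to load will show which format it actually is; a mismatch means the file needs to be (re)converted or requantized for the loader you're using, not just renamed.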
I found this file: af9ab4a |
It does not work with any model other than the one attached in the description. I downloaded four different 13B models and got a "bad magic" error every time, even though their format matched what the Alpaca description calls for. I downloaded a 30B model: bad magic again. Even before finding the working 7B model I had to download five non-working ones.
Is this a bug, and will it be fixed? Why are these models described as suitable for Alpaca when in reality they don't work? Is this a regression in the new version? Please share a working model, at least 13B; I've already given up, since nothing works except the standard one.