[Bug]: Fail to run with a local LLM (Ollama) #1186
Comments
Embeddings API
Verify the names of your models; it looks like llama3.1 isn't pulled on your Ollama instance.
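If it helps, you can confirm which models are actually pulled by asking the local Ollama API for its tag list. A minimal sketch, assuming the default localhost:11434 endpoint:

```python
# List the models pulled on a local Ollama instance.
# Assumption: Ollama is running on its default port 11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

# Each entry carries the exact name:tag that the config must reference.
print([m["name"] for m in tags["models"]])
```

The names printed here must match the model values in settings.yaml exactly, tag included.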
@balezeauquentin sorry for the outdated info (I switched to a different model). Updated logs: indexing-engine.log
I can't open your log file and I don't get why. Can you send it again, please?
And your RAG pipeline isn't working?
This problem is particularly nasty; it has been bothering me for days.
This should resolve the embedding issue you encountered. I faced a similar problem due to the different embedding format used by OpenAI.

repo: https://github.com/9prodhi/EmbedAdapter/blob/main/ollama_serv.py

Run it with python ollama_serv.py. Also, do not forget to change the embedding LLM in your settings.
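For readers who can't fetch that repo: the idea is an adapter that accepts OpenAI-style embedding requests and forwards them to Ollama's native endpoint. The sketch below is a minimal reconstruction of that idea, not the linked code; the port, listen address, and default model name are assumptions:

```python
# Minimal OpenAI-to-Ollama embeddings adapter (sketch, not the linked repo's code).
# Assumptions: Ollama serves POST /api/embeddings on localhost:11434;
# the adapter listens on localhost:8000; default model is nomic-embed-text.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_URL = "http://localhost:11434/api/embeddings"

def ollama_embed(model: str, text: str) -> list:
    """Fetch a single embedding from Ollama's native API."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read an OpenAI-style request: {"model": ..., "input": str | [str, ...]}.
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        inputs = payload["input"]
        if isinstance(inputs, str):
            inputs = [inputs]
        model = payload.get("model", "nomic-embed-text")
        # Answer in the OpenAI embeddings response shape the client expects.
        data = [
            {"object": "embedding", "index": i, "embedding": ollama_embed(model, t)}
            for i, t in enumerate(inputs)
        ]
        out = json.dumps({"object": "list", "data": data, "model": model}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

You would then point the embeddings api_base in settings.yaml at the adapter instead of at Ollama directly.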
After reducing the chunk size, it passes.
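For anyone looking for where that lives: chunk size is set under the chunks section of settings.yaml. The values below are illustrative, not the exact ones used above:

```yaml
# settings.yaml excerpt; values are illustrative, not from this thread.
chunks:
  size: 300        # smaller than the default, to stay within a local model's context
  overlap: 100
  group_by_columns: [id]
```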
File "/home/zhangyj/anaconda3/envs/graphrag/lib/python3.12/site-packages/graphrag/llm/openai/openai_chat_llm.py", line 56, in _execute_llm seems no response generated from LLM (Ollama model), but don't know how to debug |
My index process has completed, but a query returns: SUCCESS: Global Search Response: I am sorry but I am unable to answer this question given the provided data.
All the recent reports are about this same problem; it must be a bug on the official side.
Routing to #657
Do you need to file an issue?
Describe the bug
GraphRAG fails to run with a locally installed Ollama ...
Steps to reproduce
1. Ollama
2. Input: the-heart-sutra.txt
3. Init
4. Modify settings.yaml to use local llama2:latest and nomic-embed-text (see the sketch after this list)
5. Index
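For reference, a common shape for step 4 is to point both the chat and embedding sections of settings.yaml at Ollama's OpenAI-compatible endpoint. The excerpt below is a hedged sketch of that usual local-Ollama setup; the api_base and model values are assumptions, not the reporter's exact file:

```yaml
# settings.yaml excerpt (sketch); endpoint and model values are assumptions.
llm:
  api_key: ollama                      # any non-empty string; Ollama ignores it
  type: openai_chat
  model: llama2:latest
  api_base: http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint

embeddings:
  llm:
    api_key: ollama
    type: openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11434/v1
```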
Expected Behavior
GraphRAG should index the text file ...
GraphRAG Config Used
Logs and screenshots
Additional Information