
[Issue]: All workflows completed successfully, but graphrag failed to answer any question given the provided data #575

Closed
YP-Yang opened this issue Jul 16, 2024 · 19 comments
Labels
community_support Issue handled by community members

Comments

@YP-Yang

YP-Yang commented Jul 16, 2024

Describe the issue

I used a local LLM to run GraphRAG, and all workflows completed successfully, but GraphRAG failed to answer any question given the provided data.

I found an error like the one below; I don't know whether it is the cause:
RuntimeError: Failed to generate valid JSON output
00:33:57,636 graphrag.index.reporting.file_workflow_callbacks INFO Community Report Extraction Error details=None
00:33:57,636 graphrag.index.verbs.graph.report.strategies.graph_intelligence.run_graph_intelligence WARNING No report found for community: 0
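For context, a "Failed to generate valid JSON output" error usually means the model replied with prose (or markdown fences) around the JSON instead of bare JSON, so the strict parse fails. A minimal sketch of a more tolerant parser, useful for diagnosing the raw replies (illustrative only, not graphrag's internal code; the function name is made up):

```python
import json
import re

def extract_json_block(text: str):
    """Try to recover a JSON object from an LLM reply that may wrap it
    in surrounding prose. Hypothetical diagnostic helper, not graphrag code."""
    try:
        # graphrag itself does a strict json.loads on the raw reply.
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back to the first {...} span found anywhere in the text.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None

extract_json_block('Here is the JSON: {"points": []}')  # -> {'points': []}
```

If llama3's raw replies only parse after this kind of stripping, the problem is the model's output format rather than the pipeline itself.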

My local LLM is llama3 served by Ollama, with nomic-embed-text-v1.5.Q5_K_M.gguf in LM Studio for embeddings.

The input file book.txt is the same one used in the GraphRAG Get Started doc:
curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt > ./ragtest/input/book.txt

Steps to reproduce

No response

GraphRAG Config Used

This is my .env file:
GRAPHRAG_API_KEY=ollama

This is my settings.yaml file:
(screenshot of settings.yaml attached)

Logs and screenshots

This is the terminal output:
(GraphRAG) D:\AI\RAG\GraphRAG>python -m graphrag.index --root .
🚀 Reading settings from settings.yaml
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
🚀 create_base_text_units
id ... n_tokens
0 680dd6d2a970a49082fa4f34bf63a34e ... 300
1 95f1f8f5bdbf0bee3a2c6f2f4a4907f6 ... 300
2 3a450ed2b7fb1e5fce66f92698c13824 ... 300
3 95b143eba145d91eacae7be3e4ebaf0c ... 300
4 c390f1b92e2888f78b58f6af5b12afa0 ... 300
.. ... ... ...
226 972bb34ddd371530f06d006480526d3e ... 300
227 2f918cd94d1825eb5cbdc2a9d3ce094e ... 300
228 eec5fc1a2be814473698e220b303dc1b ... 300
229 535f6bed392a62760401b1d4f2aa5e2f ... 300
230 9e59af410db84b25757e3bf90e036f39 ... 155

[231 rows x 5 columns]
🚀 create_base_extracted_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_summarized_entities
entity_graph
0 <graphml xmlns="http://graphml.graphdrawing.or...
🚀 create_base_entity_graph
level clustered_graph
0 0 <graphml xmlns="http://graphml.graphdrawing.or...
1 1 <graphml xmlns="http://graphml.graphdrawing.or...
2 2 <graphml xmlns="http://graphml.graphdrawing.or...
3 3 <graphml xmlns="http://graphml.graphdrawing.or...
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
🚀 create_final_entities
id ... description_embedding
0 b45241d70f0e43fca764df95b2b81f77 ... [-0.048254746943712234, 0.044550925493240356, ...
1 4119fd06010c494caa07f439b333f4c5 ... [0.002416445640847087, 0.08019110560417175, -0...
2 d3835bf3dda84ead99deadbeac5d0d7d ... [-0.0010763887548819184, 0.04444070905447006, ...
3 077d2820ae1845bcbb1803379a3d1eae ... [-0.02357032336294651, 0.03006734512746334, -0...
4 3671ea0dd4e84c1a9b02c5ab2c8f4bac ... [0.02192544937133789, 0.009480239823460579, -0...
.. ... ... ...
179 7ffa3a064bce468082739c5a164df5a3 ... [-0.0039461092092096806, 0.07387872040271759, ...
180 ce36d1d637cf4a4e93f5e37ffbc6bd76 ... [0.019510895013809204, 0.07107631862163544, -0...
181 eeb9c02c0efa4131b9e95d33c31019fc ... [-0.030116630718111992, 0.12038293480873108, -...
182 7b2472c5dd9949c58828413387b94659 ... [-0.018994171172380447, 0.06742893904447556, -...
183 bdddcb17ba6c408599dd395ce64f960a ... [-0.03383741155266762, 0.06317673623561859, -0...

[369 rows x 8 columns]
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\datashaper\engine\verbs\convert.py:72: FutureWarning: errors='ignore' is
deprecated and will raise in a future version. Use to_datetime without passing errors and catch exceptions explicitly
instead
datetime_column = pd.to_datetime(column, errors="ignore")
D:\anaconda3\envs\GraphRAG\Lib\site-packages\datashaper\engine\verbs\convert.py:72: UserWarning: Could not infer format,
so each element will be parsed individually, falling back to dateutil. To ensure parsing is consistent and
as-expected, please specify a format.
datetime_column = pd.to_datetime(column, errors="ignore")
🚀 create_final_nodes
level title type ... top_level_node_id x y
0 0 "CHARLES DICKENS" "PERSON" ... b45241d70f0e43fca764df95b2b81f77 0 0
1 0 "ARTHUR RACKHAM" "PERSON" ... 4119fd06010c494caa07f439b333f4c5 0 0
2 0 "JANET BLENKINSHIP" "PERSON" ... d3835bf3dda84ead99deadbeac5d0d7d 0 0
3 0 "J. B. LIPPINCOTT COMPANY" "ORGANIZATION" ... 077d2820ae1845bcbb1803379a3d1eae 0 0
4 0 "SUZANNE SHELL" ... 3671ea0dd4e84c1a9b02c5ab2c8f4bac 0 0
... ... ... ... ... ... .. ..
1471 3 "PROJECT GUTENBERG'S CONCEPT" ... 7ffa3a064bce468082739c5a164df5a3 0 0
1472 3 "MICHAEL HART" "PERSON" ... ce36d1d637cf4a4e93f5e37ffbc6bd76 0 0
1473 3 "PG SEARCH FACILITY" "EVENT" ... eeb9c02c0efa4131b9e95d33c31019fc 0 0
1474 3 "WWW.GUTENBERG.ORG" ... 7b2472c5dd9949c58828413387b94659 0 0
1475 3 "EMAIL NEWSLETTER" "EVENT" ... bdddcb17ba6c408599dd395ce64f960a 0 0

[1476 rows x 14 columns]
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
🚀 create_final_communities
id ... text_unit_ids
0 2 ... [0546d296a4d3bb0486bd0c94c01dc9be,0d6bc6e701a0...
1 3 ... [13f70e4c705fb134466c125b05af3440,3a450ed2b7fb...
2 15 ... [2a1f194b20e1c3a19176deba9b13b65c,3789fbe1d06a...
3 0 ... [0e13fd0aca5720eb614104772f20077b,0eb69b9f79f6...
4 7 ... [02182df5c36e5e2f734fc8706162fc69,4033108a1f27...
5 4 ... [95b143eba145d91eacae7be3e4ebaf0c,c390f1b92e28...
6 8 ... [0bc408d042e6d08bf8c345bff1b25fe8,4df2a9b3b21d...
7 17 ... [a2ff22727e335a64c636fa57134bb2f4,c057b2b3188a...
8 14 ... [282f8e11aee6349166c4f948df48e16e, 1a8c077ae18...
9 13 ... [28d4847787f924c12e78665c4dae6428,a31b7f9a68e9...
10 16 ... [0fd302f483e5cbe68789a30ac366e604,2f92cbc7a359...
11 11 ... [4865f50de7547514f317fcfee5bc6e55]
12 18 ... [3dc28534d84425a9cdd91f68958255a7,6a13906fb4f9...
13 9 ... [206aad72da92cb4fcb0c5ba8818e7d5f,92ff0f51be82...
14 10 ... [320c285a98f252d567b2005902763e5c, 320c285a98f...
15 6 ... [1236f696b79265de838991eb0f3d341a, 1236f696b79...
16 1 ... [0bc408d042e6d08bf8c345bff1b25fe8,20d307388baf...
17 5 ... [5d2b320242efa6c0a00d078906384495,736ac01b79d0...
18 12 ... [ebaa02f7e877ff91c3acfad450b7e1d1, 1c4390f57a2...
19 26 ... [0546d296a4d3bb0486bd0c94c01dc9be,0d6bc6e701a0...
20 34 ... [13f70e4c705fb134466c125b05af3440,3a450ed2b7fb...
21 21 ... [0e13fd0aca5720eb614104772f20077b,0eb69b9f79f6...
22 39 ... [02182df5c36e5e2f734fc8706162fc69,4033108a1f27...
23 35 ... [59505f0ab347d856c834be817ede9e63,7b678bbc20b8...
24 31 ... [155aebf0490ccb996a86bb7f6d4cc2e2,2545ffae9f16...
25 29 ... [6997e1ff5fabcfa641c9a291850a1981,85d62478a6d1...
26 40 ... [4df2a9b3b21d15ed7b34eb5970611c19, 0368649fcb6...
27 23 ... [1d57ed63a57765dc6072e2524e0f8c2b,282f8e11aee6...
28 19 ... [2818d4194a37f4573f7a83b49cd59b21,d222d20d61ef...
29 20 ... [15f8920aa56b63eafb97b2f8873782c8,1dcbc2287618...
30 22 ... [15f8920aa56b63eafb97b2f8873782c8,1c4390f57a2a...
31 24 ... [359b2df5aa75c64840a175a5c9e7e37c,3f910c43801e...
32 27 ... [12c823fe2af9519eb1c85b2659a3a87e,155aebf0490c...
33 30 ... [6997e1ff5fabcfa641c9a291850a1981,85d62478a6d1...
34 32 ... [155aebf0490ccb996a86bb7f6d4cc2e2,3dc650065076...
35 28 ... [5788cf5f0b27187d91fb0056f7d800fa,d95d1ec14f9c...
36 25 ... [5788cf5f0b27187d91fb0056f7d800fa,7ed8b64d3fcf...
37 37 ...
38 36 ... [13f70e4c705fb134466c125b05af3440,59505f0ab347...
39 33 ... [2f7a9e610f25e033dc2a4917e5f57870]
40 38 ... [0b41f41ca4493f16999f35a6d6a531c7, 0b41f41ca44...
41 43 ... [0546d296a4d3bb0486bd0c94c01dc9be,0d6bc6e701a0...
42 42 ... [0e13fd0aca5720eb614104772f20077b,0eb69b9f79f6...
43 44 ... [1d5a3ea2bdc7eb02878c9733fae3924b,23f97a28c076...
44 41 ... [e13bd32de400dd73792d45f5d18b72ea,fd56f460d645...
45 45 ... [0e13fd0aca5720eb614104772f20077b,0eb69b9f79f6...
46 46 ... [21e1f454a64f4522c8629742edb7ced6,22d74ccd8712...

[47 rows x 6 columns]
🚀 join_text_units_to_entity_ids
text_unit_ids ... id
0 680dd6d2a970a49082fa4f34bf63a34e ... 680dd6d2a970a49082fa4f34bf63a34e
1 95f1f8f5bdbf0bee3a2c6f2f4a4907f6 ... 95f1f8f5bdbf0bee3a2c6f2f4a4907f6
2 0546d296a4d3bb0486bd0c94c01dc9be ... 0546d296a4d3bb0486bd0c94c01dc9be
3 0d6bc6e701a0025632e41dc3387c641d ... 0d6bc6e701a0025632e41dc3387c641d
4 13f70e4c705fb134466c125b05af3440 ... 13f70e4c705fb134466c125b05af3440
.. ... ... ...
223 b3c35247f91923027d9bd7d476467f4f ... b3c35247f91923027d9bd7d476467f4f
224 e8cf7d2eec5c3bcbeefc60d9f15941ed ... e8cf7d2eec5c3bcbeefc60d9f15941ed
225 eec5fc1a2be814473698e220b303dc1b ... eec5fc1a2be814473698e220b303dc1b
226 f96b5ddf7fae853edbc4d916f66c623f ... f96b5ddf7fae853edbc4d916f66c623f
227 958e8453c6299cf980b3e6f962240699 ... 958e8453c6299cf980b3e6f962240699

[228 rows x 3 columns]
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\numpy\core\fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is
deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\datashaper\engine\verbs\convert.py:65: FutureWarning: errors='ignore' is
deprecated and will raise in a future version. Use to_numeric without passing errors and catch exceptions explicitly
instead
column_numeric = cast(pd.Series, pd.to_numeric(column, errors="ignore"))
🚀 create_final_relationships
source target weight ... source_degree target_degree rank
0 "CHARLES DICKENS" "A CHRISTMAS CAROL" 1.0 ... 1 2 3
1 "ARTHUR RACKHAM" "A CHRISTMAS CAROL" 1.0 ... 1 2 3
2 "JANET BLENKINSHIP" "ONLINE DISTRIBUTED PROOFREADING TEAM" 1.0 ... 1 2 3
3 "J. B. LIPPINCOTT COMPANY" "PUBLICATION OF A CHRISTMAS CAROL" 1.0 ... 1 1 2
4 "SUZANNE SHELL" "ONLINE DISTRIBUTED PROOFREADING TEAM" 1.0 ... 1 2 3
.. ... ... ... ... ... ... ...
340 "VOLUNTEERS" "PROJECT GUTENBERG™" 1.0 ... 1 2 3
341 "DONATIONS" "PROJECT GUTENBERG™" 1.0 ... 2 2 4
342 "IRS" "COMPLIANCE" 1.0 ... 1 1 2
343 "PROFESSOR MICHAEL S. HART" "PROJECT GUTENBERG'S CONCEPT" 1.0 ... 1 1 2
344 "PG SEARCH FACILITY" "WWW.GUTENBERG.ORG" 1.0 ... 1 1 2

[345 rows x 10 columns]
🚀 join_text_units_to_relationship_ids
id relationship_ids
0 95f1f8f5bdbf0bee3a2c6f2f4a4907f6 [bc70fee2061541148833d19e86f225b3, 0fc15cc3b44...
1 8f05c8e8b3b9837fd079d58e372f2d30 [1ca41537c47c4752a17a44d1d7086d96, a2b1621a3e4...
2 9f76ed77ea7b876cf2b18cfa6544bf0c [1ca41537c47c4752a17a44d1d7086d96, df40ad480a3...
3 c390f1b92e2888f78b58f6af5b12afa0 [1ca41537c47c4752a17a44d1d7086d96, 0d8fde01d72...
4 2818d4194a37f4573f7a83b49cd59b21 [7e0d14ca308b4796bdc675a64bd3a36e, de04830d6e4...
.. ... ...
187 f96b5ddf7fae853edbc4d916f66c623f [36870a3393f6413e9bf647168eb6977a]
188 b3c35247f91923027d9bd7d476467f4f [4fe3ff52700c491f8cc650aadb4d7cb0, f1f6f6435a4...
189 972bb34ddd371530f06d006480526d3e [0af2ca1c090843ea92679fd14c1fbc9a, f0c578614b2...
190 eec5fc1a2be814473698e220b303dc1b [1b06d3e53ffd4771952fbef04d1e666c, 60dce7d8bc1...
191 958e8453c6299cf980b3e6f962240699 [6915637e8d124fdc8473111d501e3703, 2233f319291...

[192 rows x 2 columns]
🚀 create_final_community_reports
community ... id
0 46 ... d888d52d-c77f-4aff-bdea-4663922ea729
1 41 ... 8042150c-1581-4cfb-a6f8-7a0381fce758
2 42 ... 6f507306-78f0-4ee4-82b9-22eb1da39bfd
3 43 ... 7f1da666-5837-4f31-a478-cff51b07901a
4 44 ... 470365a0-fcfb-4f69-b3d6-67e1fc18eb4d
5 19 ... e5fe0812-0108-4ef1-88fc-a424e31bac6d
6 20 ... d1d6caea-2f3b-4c9c-8e88-6cb1f6f0e0d7
7 21 ... a1f0f398-d8c0-425a-b14f-13b0f0acc233
8 24 ... 15c17bd6-3c6c-4715-a87d-053965ebfa42
9 25 ... 307e735b-745e-4b0c-bcda-acdf92821671
10 26 ... 27c6d49e-e541-4f06-8a87-7d5cbefc72e7
11 27 ... 8ac85ef6-5a33-4494-9993-12637a727f43
12 28 ... 61eb826a-5729-487c-b3da-d280bf8ed42a
13 29 ... 34e4168e-4814-4804-a3fd-b7df9c8ffbf2
14 30 ... 1d276f1c-c89c-4adc-802e-e6749d1ba3c5
15 31 ... 2ef5e8bb-d276-4dab-829d-7a83e2b362f8
16 32 ... 413262ce-f4f6-41b5-9d67-4b35394e372b
17 33 ... 6b6bc9c6-6524-4e3f-81f1-1c3df1d33978
18 34 ... 43faf618-5028-489d-af26-e1d1b8a1a995
19 35 ... 35b64048-4a40-42bc-9d8a-9f290972873d
20 36 ... 6ea36c61-6a58-4b70-8e61-4f78fc06b5d6
21 37 ... dbc49043-eb78-4a82-944c-e1b36343e119
22 38 ... eea1d31d-c828-4f0a-8ddf-c5395c880b08
23 10 ... 98466546-5b9c-467d-a5b9-7cdc5748d832
24 11 ... 29a68cd2-8e2d-4267-84a7-7ac6b17abf40
25 13 ... 6e46377f-0257-4346-9a11-db842350a22c
26 14 ... c7377d80-6584-4982-bbb3-51f6ae4c45c3
27 16 ... cdab5616-9997-4f70-8b1d-c974d1e14979
28 17 ... 808ca4e2-1be2-4247-9330-4e26d930f409
29 18 ... 5b9a2c2c-2ecf-4870-a2fe-619320c9c5d5
30 4 ... eb3a1f3e-b50f-43f7-b580-40690283b4cf
31 5 ... 0d25de96-686c-4f5b-b06d-9eb35e8259e8
32 6 ... d9351192-1394-4ee9-8fc1-da1d2e03177c
33 9 ... 58f235d4-e59b-4436-bd84-97a3a57682c5

[34 rows x 10 columns]
🚀 create_final_text_units
id ... relationship_ids
0 95f1f8f5bdbf0bee3a2c6f2f4a4907f6 ... [bc70fee2061541148833d19e86f225b3, 0fc15cc3b44...
1 c390f1b92e2888f78b58f6af5b12afa0 ... [1ca41537c47c4752a17a44d1d7086d96, 0d8fde01d72...
2 4df2a9b3b21d15ed7b34eb5970611c19 ... [7e0d14ca308b4796bdc675a64bd3a36e, 39d31f770cf...
3 4033108a1f27d8d4a3caaa923d459730 ... [feb9ddd0ac2949178f26a36949aa5422, 1fa6d3118bd...
4 dbf014d7f9bcf97aa06ace38b6e41ccb ... [62c65bbae33c4ee9a21b61f6f454c4b4, 30b7034c446...
.. ... ... ...
226 427f29edf102f108c55aa868214fa411 ... None
227 2f918cd94d1825eb5cbdc2a9d3ce094e ... None
228 f44ec9393fd5cd1b28914e4203dcd7b9 ... None
229 3fedcfeffb43c689a33ffa06897ad045 ... None
230 01e84646075b255eab0a34d872336a89 ... None

[231 rows x 6 columns]
D:\anaconda3\envs\GraphRAG\Lib\site-packages\datashaper\engine\verbs\convert.py:72: FutureWarning: errors='ignore' is
deprecated and will raise in a future version. Use to_datetime without passing errors and catch exceptions explicitly
instead
datetime_column = pd.to_datetime(column, errors="ignore")
🚀 create_base_documents
id ... title
0 c305886e4aa2f6efcf64b57762777055 ... book.txt

[1 rows x 4 columns]
🚀 create_final_documents
id ... title
0 c305886e4aa2f6efcf64b57762777055 ... book.txt

[1 rows x 4 columns]
⠴ GraphRAG Indexer
├── Loading Input (InputFileType.text) - 1 files loaded (0 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
├── create_base_text_units
├── create_base_extracted_entities
├── create_summarized_entities
├── create_base_entity_graph
├── create_final_entities
├── create_final_nodes
├── create_final_communities
├── join_text_units_to_entity_ids
├── create_final_relationships
├── join_text_units_to_relationship_ids
├── create_final_community_reports
├── create_final_text_units
├── create_base_documents
└── create_final_documents
🚀 All workflows completed successfully.

(GraphRAG) D:\AI\RAG\GraphRAG>python -m graphrag.query --root . --method global "What are the top themes in this story"

INFO: Reading settings from settings.yaml
D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\query\indexer_adapters.py:71: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
entity_df["community"] = entity_df["community"].fillna(-1)
D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\query\indexer_adapters.py:72: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
entity_df["community"] = entity_df["community"].astype(int)
creating llm client with {'api_key': 'REDACTED,len=6', 'type': "openai_chat", 'model': 'llama3', 'max_tokens': 4000, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/v1', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
Error parsing search response json
Traceback (most recent call last):
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\query\structured_search\global_search\search.py", line 194, in _map_response_single_batch
    processed_response = self.parse_search_response(search_response)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\query\structured_search\global_search\search.py", line 232, in parse_search_response
    parsed_elements = json.loads(search_response)["points"]
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

SUCCESS: Global Search Response: I am sorry but I am unable to answer this question given the provided data.

And these are from indexing-engine.log:

00:27:14,883 httpx INFO HTTP Request: POST http://localhost:1234/v1/embeddings "HTTP/1.1 200 OK"
00:27:14,927 graphrag.llm.base.rate_limiting_llm INFO perf - llm.embedding "Process" with 13 retries took 0.13999999999941792. input_tokens=958, output_tokens=0
00:27:15,113 httpx INFO HTTP Request: POST http://localhost:1234/v1/embeddings "HTTP/1.1 200 OK"
00:27:15,152 graphrag.llm.base.rate_limiting_llm INFO perf - llm.embedding "Process" with 13 retries took 0.14100000000144064. input_tokens=896, output_tokens=0
00:27:15,166 datashaper.workflow.workflow INFO executing verb drop
00:27:15,176 datashaper.workflow.workflow INFO executing verb filter
00:27:15,181 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_entities.parquet
00:27:15,320 graphrag.index.run INFO Running workflow: create_final_nodes...
00:27:15,320 graphrag.index.run INFO dependencies for create_final_nodes: ['create_base_entity_graph']
00:27:15,320 graphrag.index.run INFO read table from storage: create_base_entity_graph.parquet
00:27:15,325 datashaper.workflow.workflow INFO executing verb layout_graph
00:27:15,575 datashaper.workflow.workflow INFO executing verb unpack_graph
00:27:15,632 datashaper.workflow.workflow INFO executing verb unpack_graph
00:27:15,759 datashaper.workflow.workflow INFO executing verb filter
00:27:15,779 datashaper.workflow.workflow INFO executing verb drop
00:27:15,789 datashaper.workflow.workflow INFO executing verb select
00:27:15,798 datashaper.workflow.workflow INFO executing verb rename
00:27:15,804 datashaper.workflow.workflow INFO executing verb join
00:27:15,814 datashaper.workflow.workflow INFO executing verb convert
00:27:15,835 datashaper.workflow.workflow INFO executing verb rename
00:27:15,835 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_nodes.parquet
00:27:15,946 graphrag.index.run INFO Running workflow: create_final_communities...
00:27:15,946 graphrag.index.run INFO dependencies for create_final_communities: ['create_base_entity_graph']
00:27:15,946 graphrag.index.run INFO read table from storage: create_base_entity_graph.parquet
00:27:15,962 datashaper.workflow.workflow INFO executing verb unpack_graph
00:27:16,9 datashaper.workflow.workflow INFO executing verb unpack_graph
00:27:16,56 datashaper.workflow.workflow INFO executing verb aggregate_override
00:27:16,66 datashaper.workflow.workflow INFO executing verb join
00:27:16,76 datashaper.workflow.workflow INFO executing verb join
00:27:16,96 datashaper.workflow.workflow INFO executing verb concat
00:27:16,105 datashaper.workflow.workflow INFO executing verb filter
00:27:16,202 datashaper.workflow.workflow INFO executing verb aggregate_override
00:27:16,222 datashaper.workflow.workflow INFO executing verb join
00:27:16,233 datashaper.workflow.workflow INFO executing verb filter
00:27:16,252 datashaper.workflow.workflow INFO executing verb fill
00:27:16,265 datashaper.workflow.workflow INFO executing verb merge
00:27:16,276 datashaper.workflow.workflow INFO executing verb copy
00:27:16,282 datashaper.workflow.workflow INFO executing verb select
00:27:16,282 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_communities.parquet
00:27:16,393 graphrag.index.run INFO Running workflow: join_text_units_to_entity_ids...
00:27:16,393 graphrag.index.run INFO dependencies for join_text_units_to_entity_ids: ['create_final_entities']
00:27:16,403 graphrag.index.run INFO read table from storage: create_final_entities.parquet
00:27:16,435 datashaper.workflow.workflow INFO executing verb select
00:27:16,440 datashaper.workflow.workflow INFO executing verb unroll
00:27:16,456 datashaper.workflow.workflow INFO executing verb aggregate_override
00:27:16,460 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table join_text_units_to_entity_ids.parquet
00:27:16,557 graphrag.index.run INFO Running workflow: create_final_relationships...
00:27:16,557 graphrag.index.run INFO dependencies for create_final_relationships: ['create_base_entity_graph', 'create_final_nodes']
00:27:16,557 graphrag.index.run INFO read table from storage: create_base_entity_graph.parquet
00:27:16,567 graphrag.index.run INFO read table from storage: create_final_nodes.parquet
00:27:16,587 datashaper.workflow.workflow INFO executing verb unpack_graph
00:27:16,652 datashaper.workflow.workflow INFO executing verb filter
00:27:16,682 datashaper.workflow.workflow INFO executing verb rename
00:27:16,696 datashaper.workflow.workflow INFO executing verb filter
00:27:16,728 datashaper.workflow.workflow INFO executing verb drop
00:27:16,738 datashaper.workflow.workflow INFO executing verb compute_edge_combined_degree
00:27:16,748 datashaper.workflow.workflow INFO executing verb convert
00:27:16,778 datashaper.workflow.workflow INFO executing verb convert
00:27:16,778 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_relationships.parquet
00:27:16,888 graphrag.index.run INFO Running workflow: join_text_units_to_relationship_ids...
00:27:16,888 graphrag.index.run INFO dependencies for join_text_units_to_relationship_ids: ['create_final_relationships']
00:27:16,888 graphrag.index.run INFO read table from storage: create_final_relationships.parquet
00:27:16,914 datashaper.workflow.workflow INFO executing verb select
00:27:16,924 datashaper.workflow.workflow INFO executing verb unroll
00:27:16,936 datashaper.workflow.workflow INFO executing verb aggregate_override
00:27:16,954 datashaper.workflow.workflow INFO executing verb select
00:27:16,954 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table join_text_units_to_relationship_ids.parquet
00:27:17,55 graphrag.index.run INFO Running workflow: create_final_community_reports...
00:27:17,55 graphrag.index.run INFO dependencies for create_final_community_reports: ['create_final_relationships', 'create_final_nodes']
00:27:17,55 graphrag.index.run INFO read table from storage: create_final_relationships.parquet
00:27:17,55 graphrag.index.run INFO read table from storage: create_final_nodes.parquet
00:27:17,90 datashaper.workflow.workflow INFO executing verb prepare_community_reports_nodes
00:27:17,112 datashaper.workflow.workflow INFO executing verb prepare_community_reports_edges
00:27:17,128 datashaper.workflow.workflow INFO executing verb restore_community_hierarchy
00:27:17,144 datashaper.workflow.workflow INFO executing verb prepare_community_reports
00:27:17,144 graphrag.index.verbs.graph.report.prepare_community_reports INFO Number of nodes at level=3 => 369
00:27:17,163 graphrag.index.verbs.graph.report.prepare_community_reports INFO Number of nodes at level=2 => 369
00:27:17,176 graphrag.index.verbs.graph.report.prepare_community_reports INFO Number of nodes at level=1 => 369
00:27:17,222 graphrag.index.verbs.graph.report.prepare_community_reports INFO Number of nodes at level=0 => 369
00:27:17,295 datashaper.workflow.workflow INFO executing verb create_community_reports
00:27:27,992 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
00:27:27,992 graphrag.llm.base.rate_limiting_llm INFO perf - llm.chat "create_community_report" with 0 retries took 10.688000000000102. input_tokens=2532, output_tokens=439
00:27:29,735 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
00:27:38,699 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
00:27:47,543 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
00:27:56,478 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
00:27:56,479 graphrag.index.graph.extractors.community_reports.community_reports_extractor ERROR error generating community report
Traceback (most recent call last):
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\index\graph\extractors\community_reports\community_reports_extractor.py", line 58, in __call__
    await self._llm(
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\json_parsing_llm.py", line 34, in __call__
    result = await self._delegate(input, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\openai_token_replacing_llm.py", line 37, in __call__
    return await self._delegate(input, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\openai_history_tracking_llm.py", line 33, in __call__
    output = await self._delegate(input, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\caching_llm.py", line 104, in __call__
    result = await self._delegate(input, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 177, in __call__
    result, start = await execute_with_retry()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 159, in execute_with_retry
    async for attempt in retryer:
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\tenacity\asyncio\__init__.py", line 166, in __anext__
    do = await self.iter(retry_state=self._retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\tenacity\asyncio\__init__.py", line 153, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\tenacity\_utils.py", line 99, in inner
    return call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\tenacity\__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 165, in execute_with_retry
    return await do_attempt(), start
           ^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\rate_limiting_llm.py", line 147, in do_attempt
    return await self._delegate(input, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\base\base_llm.py", line 48, in __call__
    return await self._invoke_json(input, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\openai_chat_llm.py", line 90, in _invoke_json
    raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)
RuntimeError: Failed to generate valid JSON output
00:27:56,482 graphrag.index.reporting.file_workflow_callbacks INFO Community Report Extraction Error details=None
00:27:56,482 graphrag.index.verbs.graph.report.strategies.graph_intelligence.run_graph_intelligence WARNING No report found for community: 45
00:28:09,522 httpx INFO HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
………………
………………
………………
File "D:\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\openai_chat_llm.py", line 90, in _invoke_json
raise RuntimeError(FAILED_TO_CREATE_JSON_ERROR)
RuntimeError: Failed to generate valid JSON output
00:33:57,636 graphrag.index.reporting.file_workflow_callbacks INFO Community Report Extraction Error details=None
00:33:57,636 graphrag.index.verbs.graph.report.strategies.graph_intelligence.run_graph_intelligence WARNING No report found for community: 0
00:33:57,663 datashaper.workflow.workflow INFO executing verb window
00:33:57,665 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_community_reports.parquet
00:33:57,791 graphrag.index.run INFO Running workflow: create_final_text_units...
00:33:57,791 graphrag.index.run INFO dependencies for create_final_text_units: ['join_text_units_to_relationship_ids', 'create_base_text_units', 'join_text_units_to_entity_ids']
00:33:57,791 graphrag.index.run INFO read table from storage: join_text_units_to_relationship_ids.parquet
00:33:57,794 graphrag.index.run INFO read table from storage: create_base_text_units.parquet
00:33:57,796 graphrag.index.run INFO read table from storage: join_text_units_to_entity_ids.parquet
00:33:57,825 datashaper.workflow.workflow INFO executing verb select
00:33:57,839 datashaper.workflow.workflow INFO executing verb rename
00:33:57,853 datashaper.workflow.workflow INFO executing verb join
00:33:57,869 datashaper.workflow.workflow INFO executing verb join
00:33:57,886 datashaper.workflow.workflow INFO executing verb aggregate_override
00:33:57,901 datashaper.workflow.workflow INFO executing verb select
00:33:57,903 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_text_units.parquet
00:33:58,12 graphrag.index.run INFO Running workflow: create_base_documents...
00:33:58,12 graphrag.index.run INFO dependencies for create_base_documents: ['create_final_text_units']
00:33:58,13 graphrag.index.run INFO read table from storage: create_final_text_units.parquet
00:33:58,44 datashaper.workflow.workflow INFO executing verb unroll
00:33:58,60 datashaper.workflow.workflow INFO executing verb select
00:33:58,74 datashaper.workflow.workflow INFO executing verb rename
00:33:58,90 datashaper.workflow.workflow INFO executing verb join
00:33:58,107 datashaper.workflow.workflow INFO executing verb aggregate_override
00:33:58,123 datashaper.workflow.workflow INFO executing verb join
00:33:58,141 datashaper.workflow.workflow INFO executing verb rename
00:33:58,155 datashaper.workflow.workflow INFO executing verb convert
00:33:58,173 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_base_documents.parquet
00:33:58,278 graphrag.index.run INFO Running workflow: create_final_documents...
00:33:58,278 graphrag.index.run INFO dependencies for create_final_documents: ['create_base_documents']
00:33:58,278 graphrag.index.run INFO read table from storage: create_base_documents.parquet
00:33:58,312 datashaper.workflow.workflow INFO executing verb rename
00:33:58,314 graphrag.index.emit.parquet_table_emitter INFO emitting parquet table create_final_documents.parquet

Additional Information

  • GraphRAG Version:
  • Operating System:
  • Python Version:
  • Related Issues:
@YP-Yang added the triage label (Default label assignment, indicates new issue needs reviewed by a maintainer) on Jul 16, 2024
@flowertreeML

same issue

@BrennonTWilliams

I am having this problem too

@sdjd93dj

Same issue

@superposition

Same

@menghongtao

I think the error occurs when the LLM's output is not in JSON format during the map search step. You can try another model, or use a really simple test text file and test again.
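This is easy to check directly: before re-running the pipeline, verify whether your model's raw output parses as JSON at all. A minimal standalone sketch (the function name is mine, for illustration):

```python
import json

def is_valid_json(raw: str) -> bool:
    """Return True if the model's raw output parses as JSON."""
    try:
        json.loads(raw)
        return True
    except json.JSONDecodeError:
        return False

# A bare JSON object parses fine...
print(is_valid_json('{"title": "Community 0", "summary": "..."}'))  # True
# ...but prose-wrapped output (common with small local models) does not,
# which is the kind of response that leads to "Failed to generate valid JSON output".
print(is_valid_json('Here is the JSON: {"title": "Community 0"}'))  # False
```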

@Grant512

I have the same issue. When I trace the LLM log, the system prompt has no effect. I don't know why.

@liyoung1992

same issue

@natoverse
Collaborator

Consolidating alternate model issues here: #657

@natoverse closed this as not planned (won't fix, can't repro, duplicate, stale) on Jul 22, 2024
@natoverse added the community_support label (Issue handled by community members) and removed the triage label on Jul 22, 2024
@yurochang

Same question, and it seems it was not solved in the other issues.

Consolidating alternate model issues here: #657

@awaescher

awaescher commented Jul 26, 2024

Same issue


Small update: I found out that my model always returned nonsense like this:

python -m graphrag.query --root ./myfolder --method global "What are the main topics"
"The main topic is 'What are the main topics'"

I found out that my local Ollama instance (0.3.0) seemed to ignore the system prompt and I got it working by manually stitching together the two prompts into one:

File: /graphrag/query/structured_search/global_search/search.py , method: _map_response_single_batch

#search_messages = [
#  {"role": "system", "content": search_prompt},
#  {"role": "user", "content": query},
#]
search_messages = [ {"role": "user", "content": search_prompt + "\n\n### USER QUESTION ### \n\n" + query} ]
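For clarity, this workaround just collapses the system and user messages into a single user message; a minimal standalone sketch of that merge (the helper name is mine, for illustration):

```python
def merge_messages(search_prompt: str, query: str) -> list[dict]:
    """Collapse the system + user message pair into one user message,
    as done in the workaround above."""
    return [{
        "role": "user",
        "content": search_prompt + "\n\n### USER QUESTION ### \n\n" + query,
    }]

messages = merge_messages("---Role---\nYou answer using the provided reports.",
                          "What are the main topics?")
print(messages[0]["role"])                                # user
print("### USER QUESTION ###" in messages[0]["content"])  # True
```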

@peixikk

peixikk commented Jul 26, 2024

(quoting @awaescher's workaround above)

thank you!

@sam234990

(quoting @awaescher's workaround above)

Thank you, this really works for me.

@awaescher

I can't believe this is true, I mean wtf?!

@Mila-1001

(quoting @awaescher's workaround above)

Wow!!! thank you!

@MisterAndry

MisterAndry commented Aug 28, 2024

Is there any way to fix this without changing the graphrag source code? Maybe it is possible to change the Ollama model's behavior with a Modelfile? Or some other way?
I'm asking because I can't change the source code after deploying my application.

@Hannover1992

Same Issue

settings:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3.1
  model_supports_json: true # recommended if this is available for your model.
  max_tokens: 2000
  request_timeout: 180.0
  api_base: http://127.0.0.1:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  tokens_per_minute: 150_000 # set a leaky bucket throttle
  requests_per_minute: 10_000 # set a leaky bucket throttle
  max_retries: 1
  max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 1 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  # parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: bge-large:latest
    api_base: http://127.0.0.1:11434/v1

@treeaaa

treeaaa commented Sep 4, 2024

Same issue

(quoting @awaescher's workaround above)

Thanks so much, you are a hero.

This also successfully solves the problem -> ollama/ollama#6176 (comment)

@zhang-jingzhe

(quoting @awaescher's workaround above)

It works for me, thanks a lot!

@worstkid92

(quoting @awaescher's workaround above)

Wow! Thank you !
