I took the VectorStoreRAG sample app and made slight adjustments to connect it to my local Llama 3.2 and Qdrant instances.
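The wiring is essentially a connector swap when building the kernel. A minimal sketch of it, assuming the standard Ollama and Qdrant connector packages and my local endpoints (the registration calls are simplified here from the sample's host-builder setup):

using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Local Llama 3.2 served by Ollama on its default port.
builder.AddOllamaChatCompletion(modelId: "llama3.2", endpoint: new Uri("http://localhost:11434"));

// Local Qdrant instance backing the vector store.
builder.Services.AddQdrantVectorStore("localhost");

var kernel = builder.Build();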
My idea was to perform a vector search on Qdrant to retrieve context information from a PDF file. Additionally, the AI needs to access real-time data to compute the final response to the user. To achieve this, I added a plugin that fetches real-time price information.
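For reference, PricesTablePlugin is an ordinary native plugin. Its real implementation is not important for this issue, but its shape is roughly the following (the method name, description, and endpoint below are placeholders, not the actual code):

using System.ComponentModel;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public sealed class PricesTablePlugin
{
    private static readonly HttpClient s_http = new();

    [KernelFunction, Description("Gets the current price for the given product.")]
    public async Task<string> GetPriceAsync(
        [Description("The product to look up.")] string product)
    {
        // Placeholder endpoint; the real plugin fetches live price data.
        return await s_http.GetStringAsync($"http://localhost:5000/prices/{product}");
    }
}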
Expected Behavior
My expectation is that the model calls both plugins: the SearchPlugin, which is invoked inline in the prompt template, and the PricesTablePlugin, which should be picked up via function auto-calling. Both plugins have been added to the kernel.
Actual Behavior
The model returns an empty response (an empty string).
If the arguments are passed like this, with function auto-calling enabled, the model returns an empty response:

arguments: new KernelArguments(new OllamaPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
})
{
    { "question", question },
},

If the function auto-calling setting is omitted, the model does return a response, but it lacks price information. However, it includes the context retrieved from the PDF search:

arguments: new KernelArguments()
{
    { "question", question },
},

Both plugins are registered in the following way:

kernel.Plugins.AddFromType<PricesTablePlugin>("PricesTable");
kernel.Plugins.Add(vectorStoreTextSearch.CreateWithGetTextSearchResults("SearchPlugin"));

Full Code Example
private async Task ChatLoopAsync(CancellationToken cancellationToken)
{
    ...
    Console.WriteLine("Assistant > Press Enter with no prompt to exit.");

    kernel.Plugins.AddFromType<PricesTablePlugin>("PricesTable");
    kernel.Plugins.Add(vectorStoreTextSearch.CreateWithGetTextSearchResults("SearchPlugin"));

    while (!cancellationToken.IsCancellationRequested)
    {
        Console.WriteLine($"Assistant > What would you like to know from the loaded PDFs: ({pdfFiles})?");
        Console.Write("User > ");
        var question = Console.ReadLine();
        if (string.IsNullOrWhiteSpace(question))
        {
            appShutdownCancellationTokenSource.Cancel();
            break;
        }

        // Invoke the LLM with a template that uses the search plugin to:
        // 1. Retrieve information related to the user query from the vector store.
        // 2. Add the information to the LLM prompt.
        var response = kernel.InvokePromptStreamingAsync(
            promptTemplate: """
                Please use this information to answer the question:
                {{#with (SearchPlugin-GetTextSearchResults question)}}
                  {{#each this}}
                    Name: {{Name}}
                    Value: {{Value}}
                    Link: {{Link}}
                    -----------------
                  {{/each}}
                {{/with}}

                Include citations to the relevant information where it is referenced in the response.

                Question: {{question}}
                """,
            arguments: new KernelArguments(new OllamaPromptExecutionSettings
            {
                FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
            })
            {
                { "question", question },
            },
            templateFormat: "handlebars",
            promptTemplateFactory: new HandlebarsPromptTemplateFactory(),
            cancellationToken: cancellationToken);

        Console.Write("\nAssistant > ");
        try
        {
            await foreach (var message in response.ConfigureAwait(false))
            {
                Console.Write(message);
            }

            Console.WriteLine();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Call to LLM failed with error: {ex}");
        }
    }
}
Platform Details
OS: Windows 10
IDE: Visual Studio Code
Language: C#
Source: Latest Semantic Kernel, Llama 3.2