Feature Description
I have two other open feature requests (#3451, #3208), both of which I will close in favor of this one, as I believe this suggestion is the best way to handle the problem.
Currently, when you use generateObject or generateText with a function call and the LLM responds with data that does not match your provided schema, the Vercel AI SDK throws an InvalidToolArgumentsError or AISDKError, which surfaces only the Zod parsing error.
The issue is that the entire generateText or generateObject call now throws a Zod error, and the provider's response is no longer accessible at all, even though it was technically a successful response. This is a problem for a few reasons:
We lose access to what the LLM actually generated. Sometimes the response is recoverable or fixable, but we get no opportunity to handle those error cases.
Usage data is lost (we could still capture it with fetch interceptors, but then we have to work with the data outside of the calling function's scope).
I think of the Vercel AI SDK as an extension of calling these providers, so it feels wrong that a successful response from the LLM provider is completely inaccessible due to a Zod parse error. I agree that generateText/generateObject should throw this error, but the error should include the full LLM provider response.
In the case of #3208, where Groq returns an error directly from their API, that error should be passed through in the AISDKError.
If the error were thrown like this, it would be a huge help:
try {
  const result = await generateText({
    // ...
  });
} catch (e) {
  if (InvalidToolArgumentsError.isInstance(e)) {
    // Do something with e.response
    // Log usage data from e.response
  }
}
Use Cases
No response
Additional context
No response
You can already access the broken tool call args: check the error with InvalidToolArgumentsError.isInstance and then access toolArgs (which contains the incorrect args).
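Something like this (a quick sketch; assumes the toolArgs and toolName fields on InvalidToolArgumentsError):

import { generateText, InvalidToolArgumentsError } from 'ai';

try {
  await generateText({
    // ...
  });
} catch (e) {
  if (InvalidToolArgumentsError.isInstance(e)) {
    // toolArgs holds the raw arguments the model produced,
    // which failed schema validation
    console.error('invalid args for', e.toolName, ':', e.toolArgs);
  } else {
    throw e;
  }
}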
Thanks @lgrammel! I opened #3869 to point people to this on the Error Handling page.
That being said, is there any way to access some of the other data we'd expect on a successful LLM response, such as usage or experimental_providerMetadata?
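For context, on a successful call these are available on the result; the ask is to get the same data when the call throws (a sketch; the error-side fields are the proposal, not the current API):

// success path today:
const { text, usage, experimental_providerMetadata } = await generateText({
  // ...
});

// proposed error path:
// catch (e) {
//   if (InvalidToolArgumentsError.isInstance(e)) {
//     e.usage;                          // proposed
//     e.experimental_providerMetadata;  // proposed
//   }
// }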
Hi @lgrammel - ran into this again with generateObject. I looked through the error types in the docs, but I didn't see any equivalent error helper to access the incorrect object generation.
Ultimately, we still think the Vercel AI SDK should expose more at the request/response level. Right now it feels like we're fighting the framework due to over-abstraction. If we could access the full request/response regardless of AI SDK errors, that would solve everything; anything the SDK did not already provide, we could build ourselves.
We're happy to put in a PR ourselves, but before we spend time working on it we want to make sure the AI SDK team would be receptive to this change. If you give us the green light, we'll try to get it in next week.
The PR we would make would just expose the full provider response on all of the AI SDK error types.
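Roughly, something like this (a sketch of the proposed shape; the response field and the ProviderResponse type are our suggestion, not the current API):

import { AISDKError } from 'ai';

// hypothetical shape, for illustration only
interface ProviderResponse {
  body: unknown;     // the full raw response body from the provider
  headers?: Record<string, string>;
  usage?: { promptTokens: number; completionTokens: number };
}

// every AI SDK error type would carry the provider response when available
class InvalidToolArgumentsError extends AISDKError {
  // ...existing fields (toolName, toolArgs, cause)...
  readonly response?: ProviderResponse; // the new field this PR would add
}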