
Use tools with useAssistant() OR enable code-interpreter on useChat() #3852

Open
DDX1 opened this issue Nov 23, 2024 · 0 comments
Labels
ai/ui enhancement New feature or request

Comments
DDX1 commented Nov 23, 2024

Feature Description

Right now, we have to choose between useAssistant(), which gives access to the code interpreter but not to local tools, and useChat(), which gives access to all tools but not to the code interpreter. It would be super powerful if both were available within one setup. Ideally, the useAssistant() hook would allow incorporating tools in the same way as useChat().

Use Cases

Tools can be designed to fetch large datasets from APIs and provide them to the agents, which is a great feature. However, if the response is too large, AI API rate limits are reached quickly and the agents/bots crash. Instead of pulling the data into a bot's context, it makes more sense to upload it as files and process it in a sandbox environment such as the code interpreter, keeping an agent's context/memory clean and efficiently used.

Additional context

Right now we are running a sub-agent independently of the Vercel AI SDK, built directly on the OpenAI library, so that we can use the code interpreter. This sub-agent is provided to the "main" Vercel AI agent as a tool. We need the code interpreter in order to process large datasets that would immediately exceed the main agent's token limit if pulled into its context.

Our fetch tools detect whether a response is too large for the main agent's token limit; if so, they upload the files to the sub-agent thread (an OpenAI Assistant) and respond with the OpenAI fileId for reference. The main agent uses this reference to instruct the sub-agent to process that data.

@DDX1 DDX1 added the enhancement New feature or request label Nov 23, 2024
@lgrammel lgrammel added the ai/ui label Nov 25, 2024