Replies: 1 comment
-
You can configure the baseURL, headers, and even provide a custom fetch function if needed: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#provider-instance
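For example, here is a minimal sketch of the custom-fetch option (assuming a Node.js runtime and the `undici` package's `ProxyAgent`; the proxy port 33210 comes from the question below, and the model name is just an illustration). It routes every AI SDK request through the local proxy and logs it, which also covers the "monitor the requests" part of the question:

```ts
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { ProxyAgent } from 'undici';

// Local proxy from the question below; adjust host/port as needed.
const dispatcher = new ProxyAgent('http://127.0.0.1:33210');

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Custom fetch: log each outgoing request, then send it through the
  // proxy. `dispatcher` is an undici extension of fetch's init options
  // (honored by Node's built-in fetch), hence the cast.
  fetch: (input, init) => {
    console.log('AI SDK request:', String(input));
    return fetch(input, { ...init, dispatcher } as RequestInit);
  },
});

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Say hello through the proxy.',
});
console.log(text);
```

Note that this relies on Node's built-in fetch (undici under the hood) honoring the `dispatcher` option, so it works in Node-runtime route handlers but not on the Edge runtime.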
-
My network is under censorship, so I run a local proxy on port 33210 to handle my outgoing requests (or at least part of them) and guarantee network access. In the AI SDK's case, however, all requests to 'https://api.openai.com/v1/chat/completions' time out, while the same requests sent from my browser and the command line work without any problem.
The example code I used looks roughly like this (a minimal sketch; the exact route and model name are illustrative):
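```ts
// app/api/chat/route.ts (Next.js App Router, Node runtime)
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // This call hits https://api.openai.com/v1/chat/completions and
  // times out, while the same request from the browser or curl succeeds.
  const { text } = await generateText({
    model: openai('gpt-3.5-turbo'),
    prompt,
  });

  return Response.json({ text });
}
```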
I'd like to ask if there is a way to make these requests go through my local proxy, or a way to monitor the requests sent by my Next.js project, so that I can figure out how to fix this problem.
Thanks for your attention.