This project integrates the OpenAI API with a FastAPI server to create a conversational AI system. It utilizes environment variables for API keys and includes routes for starting conversations and chatting with the AI.
Before you start, make sure you have Python and Poetry installed on your system.
- Clone the repository:

  ```bash
  git clone git@github.com:thissayantan/gpt-api.git
  ```

- Navigate to the cloned directory:

  ```bash
  cd gpt-api
  ```

- Install the required packages:

  ```bash
  poetry install
  ```
Create a `.env` file in the root of the project and set the `OPENAI_API_KEY` variable:

```bash
OPENAI_API_KEY='your-openai-api-key'
```
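For reference, here is a minimal sketch of how the key could be read inside the application, assuming the `python-dotenv` and `openai` packages are used (both are assumptions; the project may load the key differently):

```python
# Sketch only: load OPENAI_API_KEY from .env and create an OpenAI client.
# Assumes python-dotenv and the openai package are installed.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads the .env file in the project root
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```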
Start the FastAPI server with:

```bash
uvicorn main:app --reload
```

The `--reload` flag enables hot reloading during development.
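Here, `main:app` tells uvicorn to serve the `app` object defined in `main.py`. As a rough illustration only (the repository's actual `main.py` contains the conversation routes described below):

```python
# main.py -- skeleton for illustration; the real module defines more routes.
from fastapi import FastAPI

app = FastAPI()  # the "app" that "uvicorn main:app" points to

@app.get("/")
def root():
    return {"message": "Welcome!"}
```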
Once the server is running, you can interact with the API as follows:
- Send a GET request to the root (`/`) to verify that the server is running.
- Start a conversation with a GET request to `/start`. This will return a `thread_id`.
- Use the `thread_id` to chat with the AI by sending a POST request to `/chat` with a `DialogueSnippet` JSON object (see the example requests below).
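As a sketch of that flow from Python, assuming the server is listening on http://127.0.0.1:8000, the `requests` package is installed, and `DialogueSnippet` carries the `thread_id` plus the user's message (the exact field names are assumptions; check the model in the code):

```python
# Example request flow; the /chat payload fields are assumed, not confirmed.
import requests

BASE_URL = "http://127.0.0.1:8000"

# 1. Verify the server is up.
print(requests.get(f"{BASE_URL}/").json())

# 2. Start a conversation and keep the returned thread_id.
thread_id = requests.get(f"{BASE_URL}/start").json()["thread_id"]

# 3. Send a message on that thread (assumed DialogueSnippet shape).
payload = {"thread_id": thread_id, "message": "Hello, who are you?"}
print(requests.post(f"{BASE_URL}/chat", json=payload).json())
```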
The API exposes the following routes:

- `GET /`: Root route that returns a welcome message.
- `GET /start`: Starts a new conversation thread with the AI.
- `POST /chat`: Sends a user's message to the AI and returns its response.
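For orientation, here is a rough sketch of how routes like these could be declared with FastAPI; the handler bodies and the `DialogueSnippet` fields are placeholders, not the repository's actual implementation, which calls the OpenAI API:

```python
# Illustrative route layout only; the real handlers talk to the OpenAI API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DialogueSnippet(BaseModel):
    # Field names are assumptions; see the actual model in the repository.
    thread_id: str
    message: str

@app.get("/")
def root():
    return {"message": "Welcome to the GPT API server"}

@app.get("/start")
def start():
    # The real implementation creates a new conversation thread with the AI.
    return {"thread_id": "example-thread-id"}

@app.post("/chat")
def chat(snippet: DialogueSnippet):
    # The real implementation forwards snippet.message to the AI
    # and returns the AI's reply for that thread.
    return {"response": f"You said: {snippet.message}"}
```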
The code includes print statements for debugging purposes. These will show up in your terminal as you interact with the API.
This project is licensed under the MIT License. For more information, please see the LICENSE file.