Update Chat Interface examples and add more LLM libraries and API providers #10025
base: main
Conversation
🪼 branch checks and previews

Install Gradio from this PR:

    pip install https://gradio-pypi-previews.s3.amazonaws.com/ecd69f364b6da57c6324add75499f0a6a7e50f52/gradio-5.6.0-py3-none-any.whl

Install Gradio Python Client from this PR:

    pip install "gradio-client @ git+https://github.com/gradio-app/gradio@ecd69f364b6da57c6324add75499f0a6a7e50f52#subdirectory=client/python"

Install Gradio JS Client from this PR:

    npm install https://gradio-npm-previews.s3.amazonaws.com/ecd69f364b6da57c6324add75499f0a6a7e50f52/gradio-client-1.8.0.tgz

Use Lite from this PR:

    <script type="module" src="https://gradio-lite-previews.s3.amazonaws.com/ecd69f364b6da57c6324add75499f0a6a7e50f52/dist/lite.js"></script>
🦄 no changes detected
This Pull Request does not include changes to any packages.
    for file in message["files"]:
        files.append(file)

    documents = SimpleDirectoryReader(input_files=files).load_data()
These should be created outside the function, since the source document doesn't change?
Maybe we can make this a "chat with your document" type of demo instead and keep the function as is.
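One way to read the suggestion above (a sketch, not the PR's actual code): pull the uploaded file paths out of the multimodal message with a small helper, and, when the source document is fixed, run the expensive LlamaIndex loading/indexing once at startup instead of inside the chat function. The `collect_files` helper name is an illustrative assumption, not a Gradio API.

```python
# Hypothetical helper: with gr.ChatInterface(..., multimodal=True), the
# message arrives as a dict with "text" and "files" keys.
def collect_files(message):
    """Return the list of file paths attached to a multimodal chat message."""
    return list(message.get("files", []))

# If the source document never changes, the indexing step can run once at
# module import instead of on every message, e.g. (LlamaIndex, commented
# out here so the sketch stays self-contained):
#   documents = SimpleDirectoryReader(input_files=["doc.pdf"]).load_data()
#   index = VectorStoreIndex.from_documents(documents)
```

The chat function would then only query the prebuilt index, keeping per-message latency low.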
Awesome PR. I think adding llama-cpp-python would be great for those curious about or interested in on-device inference. I think Groq would be nice as well.
    history_transformer_format = list(zip(history[:-1], history[1:])) + [[message, ""]]
    stop = StopOnTokens()

    messages = "".join(["".join(["\n<human>:"+item[0], "\n<bot>:"+item[1]])
This code is now outdated. You don't have to apply the chat template yourself, since transformers supports the role/content dictionary conversation format.
I think the streaming can also be simplified but not sure.
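As a sketch of the reviewer's point (assuming Gradio's tuple-style history of `[user, assistant]` pairs), the manual `"\n<human>:"`/`"\n<bot>:"` string building can be replaced by a list of role/content dictionaries, which transformers chat templates consume directly. The helper name `to_chat_messages` is illustrative, not an existing API.

```python
def to_chat_messages(message, history):
    """Convert Gradio tuple-style history plus the new user message
    into the role/content dictionary format used by chat templates."""
    messages = []
    for user_turn, bot_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": message})
    return messages

# With transformers, the model-specific template is then applied by the
# tokenizer rather than by hand, e.g.:
#   prompt = tokenizer.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True)
```

This keeps the demo model-agnostic: swapping models no longer requires rewriting the prompt format.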
Can we add these as starters in the playground? Trying to think of ways to make these more visible. cc @aliabd
Fixed demos for OpenAI and LangChain. Added more examples for LlamaIndex, SambaNova, and Hyperbolic AI.
Closes: #10003