Currently, there is no specific documentation or example in LangChainJS that demonstrates using Retrieval-Augmented Generation (RAG) with images and documents together. However, LangChainJS does support handling multimodal data, including images and text, separately. You can integrate image data by using models that accept image inputs, for example by passing images as byte strings or URLs in content blocks. For text documents, you can follow the standard RAG process of indexing, retrieval, and generation.
To combine these in a RAG pipeline, you would need to handle the image and text data separately and then integrate their outputs. This might involve retrieving relevant text documents and processing images independently, then using the results together in a language model to generate a response. Unfortunately, specific tools or methods for combining these in a single RAG workflow aren't detailed in the current LangChainJS documentation.
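As a rough illustration of that "handle separately, then integrate" idea, here is a minimal sketch of building a single multimodal prompt from retrieved text documents plus an image. The content-block shapes (`text` and `image_url`) mirror what LangChainJS chat models accept in a `HumanMessage`, but the helper name `buildMultimodalContent` and the `RetrievedDoc` type are illustrative assumptions, not LangChainJS APIs:

```typescript
// Illustrative sketch: merge retrieved text context and an optional image
// into one array of content blocks for a multimodal chat model.
// `RetrievedDoc` and `buildMultimodalContent` are hypothetical names.

interface RetrievedDoc {
  pageContent: string;
}

type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

function buildMultimodalContent(
  question: string,
  docs: RetrievedDoc[],
  imageBase64?: string
): ContentBlock[] {
  // Concatenate the retrieved documents into a plain-text context section.
  const context = docs.map((d) => d.pageContent).join("\n---\n");
  const blocks: ContentBlock[] = [
    { type: "text", text: `Context:\n${context}\n\nQuestion: ${question}` },
  ];
  if (imageBase64) {
    // Images can be passed as data URLs (or plain URLs) in an image_url block.
    blocks.push({
      type: "image_url",
      image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
    });
  }
  return blocks;
}
```

The resulting array could then be passed to a vision-capable model, e.g. `new HumanMessage({ content: blocks })` invoked against a model such as `gpt-4o` via `ChatOpenAI` — again, a sketch of the pattern rather than a documented end-to-end workflow.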
Checklist
Issue with current documentation:
How can RAG be used with both images and documents together?
Idea or request for content:
No response