
TRAINING WITH OR WITHOUT CHAT_TEMPLATE #13

Open
VincentVanNF opened this issue Nov 29, 2024 · 3 comments

Comments

@VincentVanNF

Hello, I noticed that the input data used during training and in demo.py consists of raw text and image tokens, without the chat templates commonly used in SFT fine-tuning (e.g., <|user|>, <|system|>, <|assistant|> tokens). Such templates provide structured inputs, which can improve the accuracy and consistency of the model's generation and help the model understand complex instructions and context. Moreover, the experiments in the paper show that the instruction part has a significant impact on the final results:

[image: experimental results from the paper on the impact of the instruction]

Therefore, can it be assumed that this method, which leverages the model's generative understanding ability to produce embeddings, is also influenced by prompt design? If so, would training with chat templates, which improve prompt quality, yield better results? Have any related experiments been conducted, and what were the conclusions? Thank you for your response!
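For concreteness, the two input styles under discussion might look like the following. This is a minimal sketch: the specific special tokens (<|system|>, <|user|>, <|assistant|>, <|end|>), the `<image>` placeholder, and the helper function names are illustrative assumptions, not the exact template used by this repo or its backbone model.

```python
# Sketch of the two input formats discussed above.
# Special tokens and the <image> placeholder are illustrative assumptions.

def build_raw_prompt(instruction: str, text: str) -> str:
    """Raw-text style: image placeholder, instruction, and content
    concatenated directly, with no role markers."""
    return f"<image> {instruction} {text}"


def build_chat_prompt(instruction: str, text: str) -> str:
    """Chat-template style: the same content wrapped in role markers,
    ending with the assistant turn so the model generates (or embeds)
    from that position."""
    return (
        "<|system|>You are a helpful multimodal assistant.<|end|>\n"
        f"<|user|><image> {instruction} {text}<|end|>\n"
        "<|assistant|>"
    )


raw = build_raw_prompt("Represent the image for retrieval:", "a photo of a dog")
chat = build_chat_prompt("Represent the image for retrieval:", "a photo of a dog")
print(raw)
print(chat)
```

In practice, for models shipped with a chat template, the wrapped form would typically be produced via the tokenizer's `apply_chat_template` method rather than hand-built strings; the sketch only illustrates what the template adds on top of the raw input.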

@XMHZZ2018
Contributor

@VincentVanNF

This is a great question! We also considered this during the project. From our observations in previous LLM/VLM-based embedding model papers, some approaches adopt a chat template, while others do not. Therefore, in our experiments, we tested the model with and without the use of a chat template. After conducting contrastive learning, we found that incorporating a chat template in the training data did not result in a significant difference in performance.

@VincentVanNF
Author


Thanks for your reply! I am currently trying to reproduce this method on other VLMs. If chat templates make a significant difference on other models, I will share the results here.

@XMHZZ2018
Contributor

@VincentVanNF
Thanks! We look forward to any findings you have. We are also actively working on supporting more VLM backbones, and we believe your findings will be very helpful!
