MIND: Multimodal Shopping Intention Distillation from Large Vision-language Models for E-commerce Purchase Understanding
This is the official code and data repository for the paper [MIND: Multimodal Shopping Intention Distillation from Large Vision-language Models for E-commerce Purchase Understanding]
The framework consists of three stages:
Stage 1: Product Feature Extraction. The file is at `./llava/serve/feature_extract.py`. Product features are extracted by utilizing information from both the visual and textual modalities.
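The exact prompts and model interface live in `feature_extract.py`; the snippet below is only a minimal sketch of the idea, where `query_lvlm` is a hypothetical stand-in for whatever vision-language backend (e.g., a served LLaVA model) is used.

```python
from pathlib import Path


def query_lvlm(image_path: Path, prompt: str) -> str:
    """Hypothetical stand-in for the LVLM call (e.g., a locally served LLaVA model).
    A canned answer is returned here so the sketch runs; replace with your backend."""
    return "- stainless steel body\n- 34 oz capacity\n- French press coffee maker"


def extract_product_features(title: str, image_path: Path) -> list[str]:
    """Combine the textual (title) and visual (image) modalities in a single query."""
    prompt = (
        f"The product is titled: {title!r}.\n"
        "Based on the image and the title, list the product's key attributes "
        "(category, material, size, intended use) as short phrases, one per line."
    )
    answer = query_lvlm(image_path, prompt)
    # Keep one feature per non-empty line of the model's answer.
    return [line.lstrip("- ").strip() for line in answer.splitlines() if line.strip()]


if __name__ == "__main__":
    print(extract_product_features(
        "Stainless Steel French Press Coffee Maker, 34 oz",
        Path("images/french_press.jpg"),
    ))
```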
Stage 2: Intention Generation. The file is at `./llava/serve/intention_generation.py`. It generates the co-buy intention based on the products' names, images, and detailed features; the generation is constrained by the relations adopted from FolkScope.
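As an illustration of how generation can be constrained by a FolkScope-style relation, the sketch below fills a relation-specific template into the prompt. The relation names, templates, and the `query_lvlm` helper are illustrative assumptions, not the exact prompts or interface of `intention_generation.py`.

```python
from pathlib import Path

# Commonsense relations in the style of those adopted by FolkScope
# (illustrative subset only; see the paper for the full relation set).
RELATION_TEMPLATES = {
    "UsedFor": "Both products are bought together because they are used for",
    "CapableOf": "Both products are bought together because they are capable of",
    "Result": "Buying both products together results in",
}


def query_lvlm(image_paths: list[Path], prompt: str) -> str:
    """Hypothetical stand-in for a multi-image LVLM call; replace with your backend."""
    return "brewing and serving fresh coffee at home."


def generate_cobuy_intention(
    relation: str,
    names: tuple[str, str],
    features: tuple[list[str], list[str]],
    images: tuple[Path, Path],
) -> str:
    """Generate one co-buy intention for a product pair, constrained to the given relation."""
    prefix = RELATION_TEMPLATES[relation]
    prompt = (
        f"Product A: {names[0]} (features: {', '.join(features[0])})\n"
        f"Product B: {names[1]} (features: {', '.join(features[1])})\n"
        f"Complete the sentence in one short clause: \"{prefix} ...\""
    )
    completion = query_lvlm(list(images), prompt)
    return f"{prefix} {completion}"


if __name__ == "__main__":
    print(generate_cobuy_intention(
        "UsedFor",
        ("French press coffee maker", "Burr coffee grinder"),
        (["stainless steel", "34 oz"], ["adjustable grind", "electric"]),
        (Path("images/a.jpg"), Path("images/b.jpg")),
    ))
```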
Stage 3: Intention Filtering. The file is at `./llava/serve/intention_generation.py`. It uses another human-centric LVLM to filter for qualified intentions that align well with humans.
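A rough sketch of the filtering idea: ask a second, human-centric LVLM to judge each generated intention and keep only those it accepts. The yes/no prompt and the `query_filter_lvlm` helper are illustrative assumptions rather than the exact logic of the filtering script.

```python
from pathlib import Path


def query_filter_lvlm(image_paths: list[Path], prompt: str) -> str:
    """Hypothetical stand-in for the human-centric filter LVLM; replace with your backend."""
    return "yes"


def keep_intention(intention: str, names: tuple[str, str], images: tuple[Path, Path]) -> bool:
    """Return True if the filter model judges the intention plausible for a human shopper."""
    prompt = (
        f"A customer bought '{names[0]}' and '{names[1]}' together.\n"
        f"Proposed intention: {intention}\n"
        "Does this intention plausibly explain a real shopper's purchase? Answer yes or no."
    )
    return query_filter_lvlm(list(images), prompt).strip().lower().startswith("yes")


if __name__ == "__main__":
    ok = keep_intention(
        "Both products are bought together because they are used for brewing fresh coffee.",
        ("French press coffee maker", "Burr coffee grinder"),
        (Path("images/a.jpg"), Path("images/b.jpg")),
    )
    print("kept" if ok else "filtered out")
```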
Required packages are listed in `requirements.txt`. Install them by running:

```bash
pip install -r requirements.txt
```
We use IntentionQA, a benchmark carefully curated to evaluate LLMs' comprehension abilities in the E-commerce domain. For further information, please refer to IntentionQA.
Should you want to download the intention data in MIND, please refer to MIND.
Please use the BibTeX below to cite our work.