Distilling Long-tailed Datasets

Official implementation of the paper "Distilling Long-tailed Datasets".

Existing dataset distillation (DD) methods exhibit degraded performance on imbalanced datasets, especially as the imbalance factor increases, whereas our method performs significantly better across different imbalance scenarios.


Getting Started

  1. Create the environment as follows:
conda env create -f environment.yaml
conda activate distillation
  2. Generate expert trajectories (a sketch of the long-tailed split these configs target follows these steps):
cd buffer

# representation experts
python buffer_FTD.py --cfg ../configs/buffer/CIFAR10_LT/imbrate_0005/first_stage_weight_balance.yaml

# classifier experts
python buffer_FTD.py --cfg ../configs/buffer/CIFAR10_LT/imbrate_0005/second_stage_weight_balance.yaml
  3. Perform the distillation:
cd distill
python EDGE_tesla.py --cfg ../configs/xxxx.yaml
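
The imbrate_0005 directory in the configs above presumably denotes an imbalance ratio of 0.005, i.e. the rarest class keeps roughly 0.5% as many training samples as the most frequent one. As a rough illustration, the sketch below builds such a long-tailed CIFAR-10 split with the common exponential class-size profile; the function name and exact profile are assumptions, not this repository's code, and the YAML configs define the actual setup.

# Illustrative sketch (not from this repo): long-tailed CIFAR-10 with an
# exponential class-size profile, as commonly used to build "CIFAR10_LT".
import numpy as np
from torchvision import datasets

def make_long_tailed_indices(targets, num_classes=10, imb_factor=0.005):
    # imb_factor is the ratio between the rarest and the most frequent class,
    # e.g. 0.005 for the imbrate_0005 configs.
    targets = np.asarray(targets)
    max_per_class = len(targets) // num_classes
    indices = []
    for c in range(num_classes):
        # class c keeps max_per_class * imb_factor^(c / (num_classes - 1)) samples
        n_c = int(max_per_class * imb_factor ** (c / (num_classes - 1)))
        class_idx = np.where(targets == c)[0]
        indices.extend(class_idx[: max(n_c, 1)])
    return np.array(indices)

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
lt_indices = make_long_tailed_indices(train_set.targets, imb_factor=0.005)
print(f"Long-tailed subset: {len(lt_indices)} of {len(train_set)} images")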

Acknowledgement

Our code is built upon MTT, FTD, and DATM.
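
For readers unfamiliar with those baselines: MTT, FTD, and DATM all optimize a trajectory-matching objective, and the distillation step above builds on the same idea. The sketch below shows that objective in its simplest form, assuming a single linear layer as the student; all names here are illustrative and do not reflect this repository's API.

# Illustrative sketch of the trajectory-matching loss used by MTT-style methods.
import torch
import torch.nn.functional as F

def trajectory_matching_loss(syn_x, syn_y, start_w, target_w, syn_steps=5, lr=0.01):
    # Unroll a few SGD steps of a student (here a single linear layer w) on the
    # synthetic data, starting from an expert checkpoint start_w, then return the
    # distance to a later expert checkpoint target_w, normalized by how far the
    # expert itself moved between the two checkpoints.
    w = start_w.clone().requires_grad_(True)
    for _ in range(syn_steps):
        logits = syn_x @ w                      # student forward pass on synthetic data
        inner_loss = F.cross_entropy(logits, syn_y)
        (grad,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - lr * grad                       # differentiable inner SGD step
    param_dist = ((w - target_w) ** 2).sum()
    expert_dist = ((start_w - target_w) ** 2).sum()
    return param_dist / (expert_dist + 1e-8)

# Toy usage: ten learnable synthetic images (flattened 3x32x32), one per class.
syn_x = torch.randn(10, 3 * 32 * 32, requires_grad=True)
syn_y = torch.arange(10)
start_w = torch.randn(3 * 32 * 32, 10)
target_w = torch.randn(3 * 32 * 32, 10)
loss = trajectory_matching_loss(syn_x, syn_y, start_w, target_w)
loss.backward()                                 # gradients flow back to the synthetic images
print(loss.item(), syn_x.grad.shape)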

Citation

If you find our code useful for your research, please cite our paper.

@article{zhao2024distilling,
  title={Distilling Long-tailed Datasets},
  author={Zhao, Zhenghao and Wang, Haoxuan and Shang, Yuzhang and Wang, Kai and Yan, Yan},
  journal={arXiv preprint arXiv:2408.14506},
  year={2024}
}
