Knowledge Distillation Comparison Hub
3 papers - avg viability 5.3
Top Papers
- From Images to Words: Efficient Cross-Modal Knowledge Distillation to Language Models from Black-box Teachers (7.0)
ARMADA is an efficient cross-modal knowledge distillation framework that enhances language models with knowledge from black-box vision-language teachers, without extensive pre-training (see the black-box distillation sketch after this list).
- DAIT: Distillation from Vision-Language Models to Lightweight Classifiers with Adaptive Intermediate Teacher Transfer (6.0)
DAIT enables efficient knowledge transfer from large Vision-Language Models to lightweight classifiers for fine-grained visual categorization (a generic logit-distillation sketch appears after this list).
- Integrating Knowledge Distillation Methods: A Sequential Multi-Stage Framework (3.0)
A framework that improves model efficiency by applying multiple knowledge distillation methods in sequence (see the multi-stage sketch after this list).
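To ground the ARMADA entry, here is a minimal sketch of black-box sequence-level distillation: the teacher exposes only generated text (no logits or features), so the student is trained with ordinary next-token cross-entropy on the teacher's outputs. It assumes a HuggingFace-style causal-LM student, and `query_teacher` is a hypothetical wrapper around a black-box VLM API; this is not ARMADA's actual method.

```python
def blackbox_distill_step(student, tokenizer, optimizer, image, prompt, query_teacher):
    # Black-box call: the teacher returns text only, never logits or features.
    teacher_text = query_teacher(image, prompt)
    batch = tokenizer(prompt + teacher_text, return_tensors="pt")
    # Standard causal-LM loss against the teacher's tokens
    # (HuggingFace models shift the labels internally).
    out = student(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```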
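For the DAIT-style setting, where a large teacher is distilled into a lightweight classifier, a standard starting point is temperature-scaled logit distillation. This is a generic sketch of the classic soft-label loss (Hinton et al., 2015), not DAIT's adaptive intermediate-teacher mechanism; `teacher_logits` are assumed to come from a frozen VLM classification head.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both distributions with temperature T; the T*T factor keeps
    # the soft-label gradients comparable in scale to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```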
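Finally, the sequential multi-stage idea can be sketched as a training loop that swaps the distillation objective between stages. The stage schedule and loss functions below are illustrative placeholders, not the paper's actual configuration.

```python
def train_sequential(student, teacher, loader, optimizer, stages):
    # `stages` is a list of (loss_fn, num_epochs) pairs, e.g.
    # [(feature_mse, 5), (logit_kl, 10)], where each loss_fn maps
    # (student, teacher, inputs, labels) -> scalar loss.
    for loss_fn, num_epochs in stages:
        for _ in range(num_epochs):
            for inputs, labels in loader:
                loss = loss_fn(student, teacher, inputs, labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student
```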