D$^2$TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization

22 May 2023
Yunlong Liang, Fandong Meng, Jiaan Wang, Jinan Xu, Yufeng Chen, Jie Zhou
VLM

Papers citing "D$^2$TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization"

5 / 5 papers shown
1. Towards Unifying Multi-Lingual and Cross-Lingual Summarization
   Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou
   16 May 2023

2. Understanding Translationese in Cross-Lingual Summarization
   Jiaan Wang, Fandong Meng, Yunlong Liang, Tingyi Zhang, Jiarong Xu, Zhixu Li, Jie Zhou
   14 Dec 2022

3. Improving Neural Cross-Lingual Summarization via Employing Optimal Transport Distance for Knowledge Distillation
   Thong Nguyen, A. Luu
   07 Dec 2021

4. UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation
   Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang
   13 Sep 2021

5. Unifying Vision-and-Language Tasks via Text Generation
   Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal
   MLLM
   04 Feb 2021