
Advancing Video Quality Assessment for AIGC

Xinli Yue
Jianhui Sun
Han Kong
Liangchao Yao
Tianyi Wang
Lei Li
Fengyun Rao
Jing Lv
Fan Xia
Yuetang Deng
Qian Wang
Lingchen Zhao
Abstract

In recent years, AI generative models have made remarkable progress across various domains, including text, image, and video generation. However, quality assessment for text-to-video generation is still in its infancy, and existing evaluation frameworks fall short of those developed for natural videos. Current video quality assessment (VQA) methods primarily evaluate the overall quality of natural videos and fail to adequately account for the substantial quality discrepancies between frames in generated videos. To address this issue, we propose a novel loss function that combines mean absolute error with cross-entropy loss to mitigate inter-frame quality inconsistencies. Additionally, we introduce the innovative S2CNet technique to retain critical content, while leveraging adversarial training to enhance the model's generalization capabilities. Experimental results demonstrate that our method outperforms existing VQA techniques on the AIGC Video dataset, surpassing the previous state-of-the-art by 3.1% in Pearson Linear Correlation Coefficient (PLCC).
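The abstract does not specify how the mean absolute error and cross-entropy terms are combined; one plausible reading is an MAE term on per-frame quality scores plus a cross-entropy term comparing the relative (softmax-normalized) quality distribution across frames, which would penalize inter-frame inconsistency. The sketch below illustrates that reading only; the function name, the softmax normalization, and the weighting factor `lam` are all assumptions, not the authors' stated formulation.

```python
import numpy as np

def combined_loss(pred_scores, target_scores, lam=0.5):
    """Hypothetical MAE + cross-entropy loss over per-frame quality scores.

    pred_scores / target_scores: per-frame quality scores for one video.
    lam: assumed weighting between the two terms (not from the paper).
    """
    pred = np.asarray(pred_scores, dtype=float)
    tgt = np.asarray(target_scores, dtype=float)

    # Absolute-error term on the raw per-frame scores.
    mae = np.mean(np.abs(pred - tgt))

    # Softmax over frames: compares the *relative* quality distribution
    # across frames, so uneven frame quality is penalized.
    p = np.exp(pred - pred.max()); p /= p.sum()
    q = np.exp(tgt - tgt.max()); q /= q.sum()
    ce = -np.sum(q * np.log(p + 1e-12))

    return mae + lam * ce
```

Under this reading, a prediction that matches the target frame-by-frame incurs only the entropy floor of the cross-entropy term, while predictions that flatten or distort the inter-frame quality profile are penalized by both terms.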
