  3. 2508.08789
Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance

12 August 2025
Yuchu Jiang
Jian Zhao
Yuchen Yuan
Tianle Zhang
Yao Huang
Y. Zhang
Yan Wang
Y. Li
Xizhong Guo
Yusheng Zhao
Jun Zhang
Z. Zhang
Xiaojian Lin
Yixiu Zou
H. Ma
Yuhu Shang
Yuzhi Hu
Keshu Cai
Ruochen Zhang
Boyuan Chen
Y. Gao
Ziheng Jiao
Yi Qin
S. Du
Xiao Tong
Zhekun Liu
Yu Chen
Xuankun Rong
Rui Wang
Y. Zheng
Zhaoxin Fan
Murat Sensoy
H. Zhang
Pan Zhou
Lei Jin
Hao Zhao
Xu Yang
Jiaojiao Zhao
Jianshu Li
Joey Tianyi Zhou
Zhi-Qi Cheng
L. Huang
Zhiyi Liu
Z. Zhu
J. Li
Gang Wang
Q. Li
Xu Zhang
Yaodong Yang
Mang Ye
Wenqi Ren
Zhaofeng He
Hang Su
R. Ni
Liping Jing
Xingxing Wei
Junliang Xing
Massimo Alioto
Shengmei Shen
Petia Radeva
Dacheng Tao
Ya Zhang
Shuicheng Yan
Chi Zhang
Z. He
Xuelong Li
Main text: 14 pages, 4 figures; bibliography: 7 pages; appendix: 4 pages.
Abstract

The rapid advancement of AI has expanded its capabilities across domains, yet introduced critical technical vulnerabilities, such as algorithmic bias and adversarial sensitivity, that pose significant societal risks, including misinformation, inequity, security breaches, physical harm, and eroded public trust. These challenges highlight the urgent need for robust AI governance. We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security (system reliability), Derivative Security (real-world harm mitigation), and Social Ethics (value alignment and accountability). Uniquely, our approach unifies technical methods, emerging evaluation benchmarks, and policy insights to promote transparency, accountability, and trust in AI systems. Through a systematic review of over 300 studies, we identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. These shortcomings stem from treating governance as an afterthought, rather than a foundational design principle, resulting in reactive, siloed efforts that fail to address the interdependence of technical integrity and societal trust. To overcome this, we present an integrated research agenda that bridges technical rigor with social responsibility. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy. The accompanying repository is available at this https URL.
