Publications

AI Alignment: A Comprehensive Survey
Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, … (13 additional authors), Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, Wen Gao
ACM Computing Surveys, 2025 (Impact Factor: 28.0; ranked 1/147 in Computer Science, Theory & Methods)
AI Alignment, Safety Alignment, Survey
Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
Jiaming Ji, Xinyu Chen, Rui Pan, Conghui Zhang, Han Zhu, Jiahao Li, Donghai Hong, … (4 additional authors), Chi-Min Chan, Yida Tang, Sirui Han, Yike Guo, Yaodong Yang
NeurIPS 2025
Safety Alignment, Robotics, Vision-Language-Action
SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning
Borong Zhang, Yuhao Zhang, Jiaming Ji, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang
NeurIPS 2025 Spotlight
Safety Alignment, Robotics, Vision-Language-Action
Generative RLHF-V: Learning Principles from Multi-modal Human Preference
Jiayi Zhou, Jiaming Ji, Boyuan Chen, Jiapeng Sun, Wenqi Chen, Donghai Hong, Sirui Han, Yike Guo, Yaodong Yang
NeurIPS 2025
Safety Alignment, Robotics, Vision-Language-Action
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
Boyuan Chen, Donghai Hong, Jiaming Ji, Jiacheng Zheng, Bowen Dong, Jiayi Zhou, Kaile Wang, … (3 additional authors), Qirui Zheng, Wenxin Li, Sirui Han, Yike Guo, Yaodong Yang
NeurIPS 2025 Spotlight
Safety Alignment, Robotics, Vision-Language-Action
Reward Generalization in RLHF: A Topological Perspective
Tianyi Qiu, Fanzhi Zeng, Jiaming Ji, Dong Yan, Kaile Wang, Jiayi Zhou, Yang Han, Josef Dai, Xuehai Pan, Yaodong Yang
ACL 2025 Findings
Safety Alignment, Robotics, Vision-Language-Action
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
Jiaming Ji, Jiayi Zhou, Hantao Lou, Boyuan Chen, Donghai Hong, Xuyao Wang, Wenqi Chen, … (7 additional authors), Dong Li, Weipeng Chen, Jun Song, Bo Zheng, Yaodong Yang
arXiv preprint, 2025
AI Alignment, Multimodal Models
OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Jiaming Ji, Jiayi Zhou, Borong Zhang, Juntao Dai, Xuehai Pan, Ruiyang Sun, Weidong Huang, Yiran Geng, Mickel Liu, Yaodong Yang
JMLR 2024 (one of roughly 15–20 open-source AI systems papers accepted per year)
Safe Reinforcement Learning, Robotics, Open Source
Aligner: Efficient Alignment by Learning to Correct
Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, Yaodong Yang
NeurIPS 2024 Oral
AI Alignment, AI Safety
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset
Juntao Dai, Tianle Chen, Xuyao Wang, Ziran Yang, Taiye Chen, Jiaming Ji, Yaodong Yang
NeurIPS 2024
AI Safety, Safety Alignment
Language Models Resist Alignment: Evidence From Data Compression
ACL 2025 Best Paper
Large Language Models, Safety Alignment, AI Safety
ProgressGym: Alignment with a Millennium of Moral Progress
Tianyi Qiu, Yang Zhang, Xuchuan Huang, Jasmine Xinze Li, Jiaming Ji, Yaodong Yang
NeurIPS 2024
Large Language Models, AI Alignment
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference
Jiaming Ji, Donghai Hong, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, Yaodong Yang
ACL 2025 Main
Large Language Models, Safety Alignment, Reinforcement Learning from Human Feedback
Safe RLHF: Safe Reinforcement Learning from Human Feedback
Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, Yaodong Yang
ICLR 2024 Spotlight
Safety Alignment, Reinforcement Learning from Human Feedback
Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
Jiaming Ji, Borong Zhang, Jiayi Zhou, Xuehai Pan, Weidong Huang, Ruiyang Sun, Yiran Geng, Yifan Zhong, Juntao Dai, Yaodong Yang
NeurIPS 2023
Safe Reinforcement Learning, Robotics
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang
NeurIPS 2023
Large Language Models, Safety Alignment, Reinforcement Learning from Human Feedback