Large Language Models

Language Models Resist Alignment: Evidence From Data Compression
ACL 2025 Best Paper.
Large Language Models, Safety Alignment, AI Safety
ProgressGym: Alignment with a Millennium of Moral Progress
Tianyi Qiu, Yang Zhang, Xuchuan Huang, Jasmine Xinze Li, Jiaming Ji, Yaodong Yang
NeurIPS 2024.
Large Language Models, AI Alignment
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference
Jiaming Ji, Donghai Hong, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, Yaodong Yang
ACL 2025 Main.
Large Language Models, Safety Alignment, Reinforcement Learning from Human Feedback
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset
Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang
NeurIPS 2023.
Large Language Models, Safety Alignment, Reinforcement Learning from Human Feedback