PKU-Alignment Group @Pair-Lab (under construction)
Josef Dai
Latest
SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning
Reward Generalization in RLHF: A Topological Perspective
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference
Safe RLHF: Safe Reinforcement Learning from Human Feedback