PKU-Alignment Group @Pair-Lab (under construction)
Donghai Hong
Ph.D. Student
MSc (2024), Peking University
Interests
Safety Alignment
Safety Evaluation
Latest
Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
Generative RLHF-V: Learning Principles from Multi-modal Human Preference
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
Aligner: Efficient Alignment by Learning to Correct
PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference