PKU-Alignment Group @Pair-Lab (under construction)
Xuyao Wang
Research Intern
Nankai University
Interests
Reinforcement Learning
AI Infra
Latest
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset