PKU-Alignment Group @Pair-Lab (under construction)
Jiayi Zhou
Ph.D. Student
Ph.D. (2024), Peking University
Interests
Reinforcement Learning
AI Safety
Preference Modeling
Latest Publications
AI Alignment: A Comprehensive Survey
Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
Generative RLHF-V: Learning Principles from Multi-modal Human Preference
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
Reward Generalization in RLHF: A Topological Perspective
Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback
Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
Language Models Resist Alignment: Evidence From Data Compression
Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark