PKU-Alignment Group @Pair-Lab (under construction)
Boyuan Chen

Ph.D. Student

Ph.D. (expected 2026), Peking University

      Interests
      • Reinforcement Learning
      • Scalable Oversight
      • Superalignment

      Latest

      • AI Alignment: A Comprehensive Survey
      • Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
      • Generative RLHF-V: Learning Principles from Multi-modal Human Preference
      • InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback
      • Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback
      • Aligner: Efficient Alignment by Learning to Correct
      • Language Models Resist Alignment: Evidence From Data Compression
      • PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference
