PKU-Alignment Group @Pair-Lab (under construction)

Xuehai Pan

Alumnus

    Peking University

      Interests
      • Reinforcement Learning
      • Value Alignment

      Latest

      • AI Alignment: A Comprehensive Survey
      • Reward Generalization in RLHF: A Topological Perspective
      • OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research
      • Aligner: Efficient Alignment by Learning to Correct
      • Safe RLHF: Safe Reinforcement Learning from Human Feedback
      • Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark
      • BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

      © 2025 PKU-Alignment Group.
