About Jing-Cheng Pang (庞竟成):

I am a fourth-year Ph.D. student at Nanjing University, advised by Professor Yang Yu. I obtained my B.Sc. degree from UESTC in June 2019. In September 2019, I joined the LAMDA group led by Professor Zhi-Hua Zhou as a Master's student, exempted from the entrance examination. In September 2021, I was fortunate to be accepted into the Successive Postgraduate and Doctoral program, beginning my current Ph.D. studies. From July to October 2024, I was a visiting student with Prof. Masashi Sugiyama's team at RIKEN-AIP in Tokyo, Japan.

My research focuses on connecting humans and intelligent agents (RL-based or LLM-based) through natural language instructions (effective instruction parsing). In particular, my work covers:

  • Reinforcement Learning: language-conditioned RL, optimization algorithms, imitation learning, applications;
  • Large Language Models: training, inference-time optimization, intelligent agents;
  • Embodied Robots: home-service robots, sim2real.

Feel free to contact or follow me if you are interested in my work.

Recent News

  • 2025.01: ReViWo, for robotic manipulation under viewpoint disturbance, is accepted by ICLR 2025.
  • 2025.01: Invited talk at HUAWEI NAIE Group. Topic: RL-driven LLM Optimization and Recent Progress [slides].
  • 2024.12: InCLET is accepted by AAMAS 2025 as a full paper. We will give an oral presentation at Detroit!
  • 2024.09: Our work, KALM, is accepted by NeurIPS 2024!
  • 2024.07: I started my visit to Prof. Masashi Sugiyama’s team at RIKEN-AIP, Tokyo, Japan, for RL research.
  • 2024.01: RLC, for LLM self-improvement, is accepted by ICLR 2024.
  • 2023.11: Awarded as a Top Reviewer (top 8%) of NeurIPS 2023.
  • 2023.09: TALAR, for instruction-following agents, is accepted by NeurIPS 2023.

Selected Publications

  1. Jing-Cheng Pang, Nan Tang, Kaiyuan Li, Yuting Tang, Xin-Qiang Cai, Zhen-Yu Zhang, Gang Niu, Masashi Sugiyama and Yang Yu. Learning View-invariant World Models for Visual Robotic Manipulation. In: ICLR, 2025. [paper]
  2. Jing-Cheng Pang, Si-Hang Yang, Kaiyuan Li, Jiaji Zhang, Xiong-Hui Chen, Nan Tang and Yang Yu. KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts. In: NeurIPS, 2024. [paper]
  3. Jing-Cheng Pang, Peng-Yuan Wang, Kaiyuan Li, Xiong-Hui Chen, Jiacheng Xu, Zongzhang Zhang and Yang Yu. Language Model Self-improvement by Reinforcement Learning Contemplation. In: ICLR, 2024. [paper]
  4. Jing-Cheng Pang, Xinyu Yang, Si-Hang Yang, Xiong-Hui Chen and Yang Yu. Natural Language Instruction-following with Task-related Language Development and Translation. In: NeurIPS, 2023. [paper]
  5. Jing-Cheng Pang, Tian Xu, Shengyi Jiang, Yu-Ren Liu and Yang Yu. Reinforcement Learning With Sparse-Executing Actions via Sparsity Regularization. TNNLS, to appear. [paper]
  6. Peng-Yuan Wang, Jing-Cheng Pang, Chen-Yang Wang, Xu-Hui Liu, Tian-Shuo Liu, Si-Hang Yang, Hong Qian and Yang Yu. InCLET: In-context Learning from Language Models can Improve Embodied Instruction-following. In: AAMAS (Oral), 2025. [paper]

[Full publication list]

Projects