3D Human Motion Generation
and Simulation Tutorial

ICCV 2025 Tutorial

10/19/2025 9:00-17:00 (HST)
Room TBD

Introduction

Human motion generation is an important area of research with applications in virtual reality, gaming, animation, robotics, and AI-driven content creation. Generating realistic and controllable human motion is essential for creating interactive digital environments, improving character animation, and enhancing human-computer interaction.

Recent advances in deep learning have made it possible to automate motion generation, reducing the need for expensive motion capture and manual animation. Techniques such as diffusion models, generative masked modeling, and variational autoencoders (VAEs) have been used to synthesize diverse and realistic human motion. Transformer-based models have improved the ability to capture temporal dependencies, leading to smoother and more natural movement. In addition, reinforcement learning and physics-based methods have helped create physically plausible and responsive motion, which is useful for applications such as robotics and virtual avatars.

Human motion generation research spans computer vision, computer graphics, and robotics, yet many researchers and developers may not be familiar with the latest advances and challenges in this area. This tutorial will provide an introduction to human motion generation, covering key methods, recent developments, and practical applications. We will also discuss open research problems and opportunities for future work. The tutorial will cover the following topics: 1) human motion generation basics, 2) kinematic-based generation methods, 3) physics-based generation methods, 4) controllability of human motion generation, 5) human-object/human/scene interactions, 6) co-speech gesture synthesis, and 7) open research problems.

Schedule

Time (HST) Programme
09:00 - 09:10 Opening Remarks
09:10 - 10:00 Invited Talk: TBD

Chuan Guo
Research Scientist, Meta Reality Labs

Chuan Guo is a research scientist at Meta Reality Labs. Previously, he spent a year at Snap Research as a research scientist. His research interests lie in generative AI for digital human performance and character animation, focusing on 3D avatar animation, motion synthesis and stylization, and human-scene/object interaction.
10:00 - 10:10 Coffee Break
10:10 - 11:00 Invited Talk: Is Motion Tracking All You Need for Humanoid Control?

Zhengyi Luo
Research Scientist, NVIDIA

Zhengyi “Zen” Luo is a research scientist at the NVIDIA GEAR Lab and recently completed his PhD in Robotics at Carnegie Mellon University under the supervision of Prof. Kris Kitani. Prior to his current role, he interned at Meta Reality Labs and NVIDIA’s AI research groups, and holds an MS in Robotics from CMU and a BS in Computer Science from the University of Pennsylvania. His research spans vision, learning, and robotics, with a particular emphasis on human pose estimation, motion modeling, and humanoid control.
11:00 - 11:10 Coffee Break
11:10 - 12:00 Invited Talk: Towards Realistic Co-Speech Gestures in Social Interaction

Libin Liu
Assistant Professor, Peking University

Libin Liu is an Assistant Professor at Peking University. His research focuses on computer graphics and embodied intelligence, with an emphasis on character animation, physics-based simulation, and motion control. His work aims to model and generate natural, physically consistent human motion for virtual characters and robots. He has received the Best Paper Award at SIGGRAPH Asia 2022 and an Honorable Mention at SIGGRAPH 2023.
12:00 - 14:00 Lunch Break
14:00 - 14:50 Invited Talk: Controllable Motion Synthesis: From Text to Arbitrary Objectives

Korrawe Karunratanakul
Postdoctoral Researcher, ETH Zurich

Korrawe Karunratanakul is a postdoctoral researcher in the Computer Vision and Learning Group (VLG) at ETH Zurich, where he also completed his PhD in Computer Science with Prof. Siyu Tang. Previously, he was a research intern at Meta Reality Labs and the Max Planck Institute for Intelligent Systems (MPI-IS). His research interests include generative models, human motion modeling, robotics, and world models, focusing on improving our understanding of human interaction with the 3D world in everyday life.
14:50 - 15:00 Coffee Break
15:00 - 15:50 Invited Talk: Generating Human Motion and Interactions with Control

Xianghui Xie
PhD Student, Max Planck Institute for Informatics

Xianghui Xie is a PhD student at the Max Planck Institute for Informatics and the University of Tübingen, supervised by Prof. Gerard Pons-Moll. He is broadly interested in 3D computer vision, with a focus on accurate tracking and realistic synthesis of humans, objects, and especially the interactions between them. By capturing and synthesizing interactions, he aims to empower digital humans and humanoid robots to behave like real humans. His PhD is funded by a research fellowship from the Amazon-MPI science hub in Germany.
15:50 - 16:00 Coffee Break
16:00 - 16:50 Panel Discussion
16:50 - 17:00 Closing Remarks

Speakers

Libin Liu

Peking University

Chuan Guo

Meta

Zhengyi Luo

NVIDIA

Korrawe Karunratanakul

ETH Zurich

Xianghui Xie

Max Planck Institute for Informatics

Organizers

Huaizu Jiang

Assistant Professor
Northeastern University

Chuan Guo

Research Scientist
Meta

Lingjie Liu

Assistant Professor
University of Pennsylvania

Zhiyang (Frank) Dou

PhD Student
Massachusetts Institute of Technology

Yiming Xie

PhD Student
Northeastern University