Adversarial Reinforcement Learning for Scaling Synthetic Trajectory Imitation in Physics-based Character Animation
With techniques such as Motion Matching and large motion capture datasets, user-controlled video game characters have become highly realistic. The next step towards realism is to incorporate physical interaction into a character's behaviour so that it reacts plausibly to new environments. Our research project aims to scale previous work in physics-based character animation to the variety of motions a user will encounter, including walking, running, jumping, climbing steps, falling over, and standing up, in order to create an immersive and responsive control experience. Part of the difficulty lies in balancing the demands of user control against the physical response of the character (for example, the forward momentum of a character turning while running). We aim to show that a single system can capture a range of physically-realisable motions, and to build a prototype demonstrating the application of this system to game development.
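To make the "adversarial" ingredient of the title concrete, the sketch below shows one common formulation of adversarial trajectory imitation (in the spirit of GAIL/AMP-style methods): a discriminator is trained to tell reference motion-capture transitions apart from policy-generated ones, and its score is converted into a style reward that the RL policy maximises. This is a toy numpy illustration, not the project's actual implementation; the class, feature choice, and hyperparameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class TransitionDiscriminator:
    """Logistic discriminator over (state, next_state) pairs.

    Trained to output ~1 on reference (mocap) transitions and ~0 on
    policy-generated transitions; its output is turned into a style
    reward for the physics-based character controller (hypothetical
    sketch, not the actual project code)."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(2 * dim)
        self.b = 0.0
        self.lr = lr

    def logit(self, s, s_next):
        x = np.concatenate([s, s_next])
        return x @ self.w + self.b

    def train_step(self, ref_batch, policy_batch):
        # One SGD pass of binary cross-entropy:
        # label 1 for reference transitions, 0 for policy transitions.
        labelled = [(t, 1.0) for t in ref_batch] + [(t, 0.0) for t in policy_batch]
        for (s, sn), y in labelled:
            p = sigmoid(self.logit(s, sn))
            g = p - y  # gradient of BCE w.r.t. the logit
            x = np.concatenate([s, sn])
            self.w -= self.lr * g * x
            self.b -= self.lr * g

    def style_reward(self, s, s_next):
        # Reward is larger when the transition looks like mocap data.
        d = sigmoid(self.logit(s, s_next))
        return -np.log(np.clip(1.0 - d, 1e-6, 1.0))


# Toy data: reference and policy transitions occupy different regions
# of a 4-dimensional feature space, standing in for pose features.
dim = 4
ref = [(rng.normal(1.0, 0.1, dim), rng.normal(1.0, 0.1, dim)) for _ in range(64)]
pol = [(rng.normal(-1.0, 0.1, dim), rng.normal(-1.0, 0.1, dim)) for _ in range(64)]

disc = TransitionDiscriminator(dim)
for _ in range(50):
    disc.train_step(ref, pol)

r_ref = float(np.mean([disc.style_reward(s, sn) for s, sn in ref]))
r_pol = float(np.mean([disc.style_reward(s, sn) for s, sn in pol]))
print(r_ref > r_pol)  # reference-like transitions earn the higher reward
```

In a full system this style reward would be combined with task rewards (e.g. following the user's control input), which is one way to frame the paper's tension between user control and physical response.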