With adversarial reinforcement learning, physically simulated characters can automatically synthesize lifelike and responsive behaviors.
A character is first trained to perform complex motor skills by imitating human motion data. Once the character has acquired a rich repertoire of skills, it can reuse those skills to perform new tasks in a natural, lifelike way.
This model then allows you to generate motions for new scenarios, without tedious manual animation or new motion data from real actors.
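To make the idea concrete, here is a minimal sketch of an adversarial-imitation objective in the spirit described above: a discriminator learns to distinguish transitions from the reference motion data from transitions produced by the character, and its output is turned into a "style" reward that is blended with the task reward. The class and function names, network sizes, least-squares formulation, and reward weights below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an adversarial style reward (assumed AMP-style setup).
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Scores state transitions: high for transitions resembling the
    reference motion data, low for transitions produced by the policy."""

    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        # Concatenate consecutive states so the score depends on the motion,
        # not just a single pose.
        return self.net(torch.cat([s, s_next], dim=-1))


def style_reward(disc: Discriminator, s, s_next) -> torch.Tensor:
    """Reward the policy for transitions the discriminator mistakes for
    motion-capture data (least-squares GAN formulation, an assumption)."""
    d = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)


def combined_reward(task_r, style_r, w_task=0.5, w_style=0.5):
    """Blend the task objective with the naturalness (style) objective;
    the weights are illustrative."""
    return w_task * task_r + w_style * style_r
```

During training, the discriminator is updated on batches of reference and policy transitions, while the policy is optimized with any standard RL algorithm using the combined reward, so new tasks inherit the natural look of the imitated motions.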