
Event News

Talk by Dr Taku Komura from the University of Edinburgh:
"Learning Neural Character Controllers from Motion Capture Data"

We are pleased to announce that Dr Taku Komura from the University of Edinburgh will give the talk detailed below.

Title:

Learning Neural Character Controllers from Motion Capture Data

Speaker:

Dr Taku Komura (Reader, Institute of Perception, Action and Behaviour, University of Edinburgh, UK)

Time/Date:

14:00-15:00, Friday 7th December

Venue:

Room 1208, 12th floor, NII

Abstract:

I will present two data-driven frameworks based on neural networks for interactive character control. The first approach is called a Phase-Functioned Neural Network (PFNN). In this network structure, the weights are computed by a cyclic function that takes the phase as an input. Along with the phase, our system takes as input the user controls, the previous state of the character, and the geometry of the scene, and automatically produces high-quality motions that achieve the desired user control. The entire network is trained in an end-to-end fashion on a large dataset of locomotion (walking, running, jumping, and climbing movements) fitted into virtual environments. Our system can therefore automatically produce motions in which the character adapts to different geometric environments, such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings. Our network architecture produces higher-quality results than time-series autoregressive models such as LSTMs because it deals explicitly with the phase, the latent variable of motion. Once trained, our system is also extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory, even when trained on gigabytes of motion data. Our work is most appropriate for controlling characters in interactive scenes such as computer games and virtual reality systems.

The second approach is called the Mode-Adaptive Neural Network (MANN). It extends the PFNN and can control quadruped characters, whose locomotion is multimodal. The system is composed of a motion prediction network and a gating network. At each frame, the motion prediction network computes the character state in the current frame given the state in the previous frame and the user-provided control signals. The gating network dynamically updates the weights of the motion prediction network by selecting and blending what we call expert weights, each of which specializes in a particular movement. Thanks to this increased flexibility, the system can learn consistent expert weights across a wide range of periodic and non-periodic actions from unstructured motion capture data, in an end-to-end fashion. In addition, users are freed from the complex labelling of phases in different gaits. We show that this architecture is suitable for encoding the multi-modality of quadruped locomotion and for synthesizing responsive motion in real time.
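
To make the phase-function idea concrete, here is a minimal sketch (in Python/NumPy, not the speaker's code) of how a layer's weights can be generated from the phase. It assumes the setup described in the PFNN paper: four weight checkpoints per parameter, blended by a cyclic Catmull-Rom spline. The function names are illustrative, and the ReLU activation is a simplification (the paper uses ELU).

import numpy as np

def catmull_rom(y0, y1, y2, y3, mu):
    # Cubic Catmull-Rom interpolation between y1 and y2 at fraction mu.
    return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu**3
            + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu**2
            + (-0.5 * y0 + 0.5 * y2) * mu
            + y1)

def phase_function(checkpoints, phase):
    # Compute one parameter (a weight matrix or bias) for a phase in
    # [0, 2*pi), treating its four control points as a cyclic spline.
    p = 4.0 * phase / (2.0 * np.pi)   # position along the cyclic spline
    k = int(p) % 4                    # index of the current segment
    mu = p % 1.0                      # fraction within the segment
    y0, y1, y2, y3 = (checkpoints[(k + i - 1) % 4] for i in range(4))
    return catmull_rom(y0, y1, y2, y3, mu)

def pfnn_forward(x, phase, layers):
    # layers: list of (weight_checkpoints, bias_checkpoints) per layer.
    h = x
    for i, (W_cp, b_cp) in enumerate(layers):
        W = phase_function(W_cp, phase)
        b = phase_function(b_cp, phase)
        h = W @ h + b
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)    # hidden activation (ELU in the paper)
    return h                          # predicted character state for this frame

The key point is that the network's weights themselves change smoothly and cyclically with the phase, so the temporal structure of locomotion is handled by construction rather than left to an autoregressive model.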

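The expert-blending idea in the Mode-Adaptive Neural Network can be sketched in the same style: a small gating network outputs softmax blend coefficients over K expert parameter sets, and the blended parameters form the motion prediction network for the current frame. The single-layer gating network, the gating input, and the parameter layout below are illustrative assumptions, not the paper's exact configuration.

import numpy as np

def softmax(z):
    # Numerically stable softmax over the expert logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def mann_forward(x, x_gate, experts, Wg, bg):
    # experts: list of K (W1, b1, W2, b2) parameter sets, one per expert.
    # x_gate: the subset of the input fed to the gating network.
    omega = softmax(Wg @ x_gate + bg)        # blend coefficients, sum to 1
    # Blend every parameter across the experts with the gating coefficients.
    W1 = sum(w * e[0] for w, e in zip(omega, experts))
    b1 = sum(w * e[1] for w, e in zip(omega, experts))
    W2 = sum(w * e[2] for w, e in zip(omega, experts))
    b2 = sum(w * e[3] for w, e in zip(omega, experts))
    h = np.maximum(W1 @ x + b1, 0.0)         # hidden layer (ELU in the paper)
    return W2 @ h + b2                       # character state for this frame

Because the blend coefficients are recomputed every frame, the experts can specialize in different modes of movement without any manual phase or gait labelling, which is the flexibility the abstract refers to.
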
Biography:

Taku Komura is a Reader (associate professor) at the Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh. As the leader of the Computer Graphics and Visualization Unit, he has focused his research on data-driven character animation, physically-based character animation, crowd simulation, cloth animation, anatomy-based modelling, and robotics. Recently, his main research interests have been the application of machine learning techniques to animation synthesis. He received a Royal Society Industry Fellowship (2014) and a Google AR/VR Research Award (2017).

