Physical Hand Animation with Machine Learning
Date: 2020
Author: Cantürk, Tarık
Open access
Abstract
Hands are the essential limbs that humans use to interact with their environment. Catching, holding, moving, touching, and many other interactions are performed with the hands. The hand has a highly complex anatomical structure, and modeling hand movements is a complicated task, considering the finger bones, joints, muscles, and tendons that connect and actuate them. Several motion capture systems are used to transfer hand motions to the digital environment. However, capturing a catching motion with these systems is harder than capturing hand interactions with stationary objects. Moreover, motion capture systems can produce only kinematic animations, and these kinematic animations may not be reusable for different catching scenarios. Therefore, physics-based animation techniques are needed for generating hand motions. To generate hand motions with physics-based animation techniques, a physical model of the hand must be created. We present a physical hand model with muscles and soft tissues on a skeletal structure, intended to produce realistic physical interactions while remaining computationally efficient. Recently, great accomplishments have been achieved in the computer animation field through the use of machine learning techniques. We present a framework that generates catching motions for the proposed physical hand model using deep reinforcement learning. Catching a thrown object requires multiple body parts to work in coordination. While our main focus is generating proper physics-based hand motions, we also synthesize the arm motions that are essential for bringing the hand to the correct interception point with the right orientation. It has been noted in the literature [1, 2] that the catching motion can be divided into smaller phases.
Accordingly, we handle the catching motion in two phases and develop a controller brain for each phase using deep reinforcement learning. One brain moves the arm to prepare for the catch; the other controls the hand to perform the actual catching movement. In addition, a third brain, also trained with deep reinforcement learning, manages when each of these two brains is active. The results of the proposed framework are evaluated and compared with other configurations in several experiments. Moreover, user studies have been conducted to evaluate the naturalness of the resulting motions.
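The phased architecture described above, two trained controllers plus a third brain that schedules them, can be sketched as follows. This is an illustrative outline only, not the thesis implementation: all class and function names are hypothetical, the policies are stand-ins for trained neural networks, and the gating rule (a simple hand-to-ball distance threshold) replaces the learned scheduling brain purely for demonstration.

```python
class Policy:
    """Stand-in for a trained deep RL policy (e.g. a neural network)."""
    def __init__(self, name):
        self.name = name

    def act(self, state):
        # A real policy would map the observed state to joint torques
        # or target joint angles; here we return a placeholder action.
        return {"controller": self.name, "action": [0.0] * 4}

class PhaseGate:
    """Stand-in for the third 'brain' that schedules the two controllers."""
    def act(self, state):
        # The real gate is learned with deep RL; this hypothetical rule
        # switches phases on a hand-to-ball distance threshold instead.
        return "catch" if state["ball_distance"] < 0.3 else "prepare"

def step(gate, policies, state):
    """One control step: the gate picks the active phase, whose policy acts."""
    phase = gate.act(state)
    return policies[phase].act(state)

policies = {"prepare": Policy("arm-preparation"),
            "catch": Policy("hand-catching")}
gate = PhaseGate()

far = step(gate, policies, {"ball_distance": 1.5})   # ball still far away
near = step(gate, policies, {"ball_distance": 0.1})  # ball about to arrive
print(far["controller"], near["controller"])  # arm-preparation hand-catching
```

The key design point mirrored here is the separation of concerns: each phase policy only has to master its own sub-task, while the gate only has to decide when to hand control from one to the other.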