Makine Öğrenmesi ile Kaçınma Hareketlerinin Üretilmesi (Generating Avoidance Motions with Machine Learning)
Date: 2019
Author: Özoğur, İsmail Abdullah
Access: Open access
Abstract
GENERATING AVOIDANCE MOTIONS WITH MACHINE LEARNING
İsmail Abdullah Özoğur
Master of Science, Department of Computer Graphics
Supervisor: Asst. Prof. Dr. Zümra Kavafoğlu
September 2019, 42 pages
Avoidance motion plays an important role in computer animation applications that feature virtual human models, especially in video games. Motion capture techniques are often used to achieve realistic avoidance motions. However, there is an unlimited number of motions that a person can perform in real life, and it is obviously impossible to cover all of them with motion capture techniques. Even if an adequate amount of motion capture data is available to emulate human motion, it is not easy to find the appropriate motion within such a large data set. There are studies that determine the most appropriate motion within a large pool of motion capture data in real time, but these approaches often involve costly processing and consume a large amount of system resources. Such methods are therefore difficult to use in games with a large number of characters. For these reasons, many games respond to multiple attacks with unrealistic motions such as big jumps or extreme bending. However, when the virtual human model avoids attacks that could in fact easily be warded off in an unrealistic way, and continuously repeats the same motion, it loses its sense of realism. This study seeks to teach avoidance motion to a virtual human model through deep reinforcement learning without using any motion capture data. The components of the learning algorithm are designed so that the virtual human model does not expend unrealistic effort and acts according to physical constraints; in this way, the study aims to teach real-life motion to the virtual human model. In the offline learning phase, spheres were thrown from random locations toward random points on the virtual human model. The rewards of the reinforcement learning algorithm are set based on the virtual human model's success in avoiding the spheres. Real-time avoidance motions were then generated using the policy learned in the offline phase. To evaluate the virtual human model's avoidance motion performance, user tests were conducted and their results were interpreted.
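As a rough illustration of the reward design described above, the sketch below shows one possible per-step reward that rewards keeping body parts away from an incoming sphere while penalizing excessive effort and direct hits. The thesis does not state its actual reward terms, weights, or state representation, so every name, constant, and shaping choice here is an illustrative assumption rather than the author's formulation.

```python
import numpy as np

def avoidance_reward(body_part_positions, sphere_position, joint_torques,
                     hit_occurred, safe_distance=0.3, effort_weight=0.001):
    """Hypothetical per-step reward for a sphere-avoidance RL task.

    body_part_positions: (N, 3) array of body part positions.
    sphere_position:     (3,) position of the thrown sphere.
    joint_torques:       array of actuation torques applied this step.
    hit_occurred:        True if the sphere contacted the character.
    All thresholds and weights are illustrative, not taken from the thesis.
    """
    # Distance from the sphere to the closest body part.
    distances = np.linalg.norm(body_part_positions - sphere_position, axis=1)
    closest = distances.min()

    # Shaping term that saturates once the closest body part is "safe".
    avoidance_term = min(closest / safe_distance, 1.0)

    # Effort penalty discourages physically implausible, extreme motions.
    effort_penalty = effort_weight * float(np.sum(np.square(joint_torques)))

    # Strong penalty when the sphere actually hits the character.
    hit_penalty = 1.0 if hit_occurred else 0.0

    return avoidance_term - effort_penalty - hit_penalty
```

In such a design, the effort penalty plays the role the abstract attributes to the physical-effort constraint: it pushes the learned policy toward small, plausible evasions rather than large unrealistic jumps or bends.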
Keywords: avoidance, deep reinforcement learning, virtual human animation.