Using Task-Based Visual Attention to Continuously Improve the Performance of Autonomous Game-Playing Agents
Date: 2024-01-05
Author: Ulu, Eren
Access: Open access
Abstract
Recent developments in machine learning have led to the widespread adoption of Deep Reinforcement Learning (DRL), a subset of machine learning, in the realm of artificial intelligence. DRL allows agents to make sequential decisions and adapt their behavior through interactions with their environment, making it particularly suited to tasks that involve decision-making and learning from experience. This growing use of DRL has opened new avenues for enhancing the capabilities of digital agents, enabling them to tackle complex challenges such as autonomous game playing, robotic control, and resource-allocation optimization. These advancements hold great promise for making intelligent agents operate more efficiently across a variety of domains.
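As background (this is the standard RL formulation, not anything specific to this thesis), the agent observes a state s_t, takes an action a_t, receives a reward r_{t+1}, and seeks a policy \pi that maximizes the expected discounted return:

G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}, \qquad \pi^* = \arg\max_{\pi} \mathbb{E}_{\pi}\left[ G_t \right],

where \gamma \in [0, 1) is the discount factor that trades off immediate against future reward.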
DRL has been applied effectively in a variety of complex video game environments. In many of them, DeepMind's baseline Deep Q-Network (DQN) game agents performed at a level comparable to that of humans. However, these DRL models require a large number of experience samples to learn, and they lack adaptability to changes in the environment and to increasing complexity.
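For reference, the baseline DQN trains a network Q(s, a; \theta) on experience tuples (s, a, r, s') sampled from a replay buffer \mathcal{D}, minimizing the temporal-difference loss against a periodically frozen target network with parameters \theta^-:

L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{D}} \Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta) \big)^2 \Big].

The reliance on a large replay buffer of such tuples is precisely the sample-inefficiency this thesis targets.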
This thesis focuses on the specific domain of video-game-playing agents, which have garnered significant attention for their adaptive decision-making. The study delves into the application of DRL techniques to develop and enhance game-playing agents.
In the first part of the thesis, we propose the Attention-Augmented DQN (AADQN) game agent, which incorporates a combined top-down and bottom-up visual attention mechanism into the DQN agent to highlight task-relevant features of the input. The attention mechanism dynamically teaches the agent how to play a game by focusing on the most task-relevant information. Evaluating our agent across eight Atari 2600 games of varying complexity, we demonstrate that our algorithm surpasses the baseline DQN agent. Notably, our model achieves greater flexibility and higher scores in fewer time steps.
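The abstract does not specify the AADQN architecture, so the following is only a minimal PyTorch sketch of the general idea: a bottom-up (stimulus-driven) saliency map and a top-down (task-driven) relevance map are fused into one spatial mask that reweights the DQN's convolutional features before the Q-value head. The module names, the learned task query, and the fusion rule are all illustrative assumptions, not the thesis design.

import torch
import torch.nn as nn

class AttentionAugmentedDQN(nn.Module):
    """Sketch of a DQN whose conv features are reweighted by a combined
    bottom-up/top-down spatial attention mask. Assumes 84x84 inputs with
    4 stacked frames, as in the standard Atari preprocessing."""

    def __init__(self, in_channels: int = 4, num_actions: int = 6):
        super().__init__()
        # Standard DQN convolutional trunk (Mnih et al., 2015 shapes).
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Bottom-up attention: a 1x1 conv scores each spatial location
        # from the features themselves (stimulus-driven saliency).
        self.bottom_up = nn.Conv2d(64, 1, kernel_size=1)
        # Top-down attention: a learned task query compared against the
        # feature vector at every location (goal-driven relevance).
        self.task_query = nn.Parameter(torch.randn(64))
        # Q-value head over the attention-weighted features.
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x)                              # (B, 64, 7, 7)
        b, c, h, w = feat.shape
        saliency = self.bottom_up(feat)                  # (B, 1, 7, 7)
        # Dot product of each location's features with the task query.
        relevance = torch.einsum("bchw,c->bhw", feat, self.task_query)
        relevance = relevance.unsqueeze(1)               # (B, 1, 7, 7)
        # Fuse the two streams and normalize into a spatial mask.
        mask = torch.softmax((saliency + relevance).flatten(1), dim=1)
        mask = mask.view(b, 1, h, w)
        # Rescale by h*w so masked features keep their average magnitude.
        attended = feat * mask * (h * w)
        return self.head(attended.flatten(1))            # (B, num_actions)

As a quick check, AttentionAugmentedDQN()(torch.randn(1, 4, 84, 84)) returns one Q-value per action; because the mask sums to one over the 7x7 grid, the h*w rescaling keeps the attended features on the same scale the head would otherwise see.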
In the second part of this thesis, we address the limitations associated with employing auxiliary functions (AF) in DQN game agents. We investigate auxiliary strategies in several Atari 2600 environments by integrating auxiliary functions and exploring methods that enable more efficient and robust learning, ultimately contributing to the advancement of DQN game agents in complex and dynamic gaming environments.
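The abstract does not name the specific auxiliary functions used, so the sketch below only illustrates the common pattern of adding a weighted auxiliary objective to the DQN temporal-difference loss; the next-frame-prediction task, the aux_head module, and the 0.1 weight are hypothetical placeholders.

import torch
import torch.nn.functional as F

def dqn_loss_with_auxiliary(q_net, target_net, aux_head, batch,
                            gamma: float = 0.99, aux_weight: float = 0.1):
    """One loss computation combining the standard DQN TD loss with a
    hypothetical next-frame-prediction auxiliary loss."""
    s, a, r, s_next, done = batch          # tensors from a replay buffer
    # Q-value of the action actually taken in each sampled transition.
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from the periodically frozen target network.
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    td_loss = F.smooth_l1_loss(q, target)
    # Auxiliary task: predict the next observation, giving the network a
    # denser learning signal than reward alone. In a real implementation
    # aux_head would share q_net's convolutional trunk.
    aux_loss = F.mse_loss(aux_head(s), s_next)
    return td_loss + aux_weight * aux_loss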
We demonstrate that our methods are effective in addressing the inherent inefficiency and inflexibility that plague the DQN, thereby marking a significant advancement in the realm of DQN game agents. By investigating the integration of auxiliary functions and attention mechanisms with DQN algorithms, this thesis shows what can be achieved in performance improvement for autonomous game playing in Atari game environments. The findings and insights from this thesis are expected to contribute not only to the field of artificial intelligence but also to the broader community of gamers and developers, offering new perspectives on the creation of sophisticated and responsive game agents.