Impact of game variability on reinforcement learning complexity in video games: a case study of Geometry Dash
Files
- Fontaine_44001900_2024.pdf (Open access, Adobe PDF, 10.11 MB)
Details
- Abstract
- Most advances in the reinforcement learning (RL) field focus on improving performance on established benchmarks. This thesis instead aims to better understand RL complexity, and it innovates by investigating how that complexity is affected by variability in video game environments. We use the popular 2D platformer Geometry Dash as our game model to probe the dynamics of RL under varying conditions. The study examines how the game's degrees of freedom, its parameter settings, and the introduction of random disturbances influence the learning process. The research uses a deep Q-learning (DQL) algorithm combined with a convolutional neural network (CNN) architecture, a combination proven effective on Atari 2600 games. Through an experimental methodology, the findings provide detailed insight into the strengths and weaknesses of artificial intelligence (AI) learning mechanisms. Ultimately, this research aims to deepen our understanding of RL complexity, paving the way for further investigation with new algorithms and additional experiments, and, ideally, toward a theoretical framework for how RL complexity evolves with game variability.
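The update rule at the heart of the deep Q-learning approach described above can be sketched as follows. This is a minimal, hypothetical stand-in: a small Q-table replaces the CNN used in the thesis so the Bellman backup is visible, and the two-action setting (do nothing / jump), state count, and hyperparameters are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

# Hypothetical sketch: tabular core of deep Q-learning (DQL).
# The thesis uses a CNN over game frames as the Q-function; here a
# small table stands in so the update rule itself is easy to read.

N_STATES, N_ACTIONS = 4, 2          # e.g. actions: 0 = do nothing, 1 = jump
GAMMA, ALPHA, EPSILON = 0.99, 0.5, 0.1  # illustrative hyperparameters

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def select_action(state: int) -> int:
    """Epsilon-greedy policy over the current Q estimates."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def dql_update(state: int, action: int, reward: float,
               next_state: int, done: bool) -> None:
    """One Bellman backup: Q <- Q + alpha * (target - Q)."""
    target = reward if done else reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (target - Q[state, action])

# Toy transition: jumping (action 1) in state 0 ends the episode
# with reward 1; repeated updates drive Q[0, 1] toward 1.0.
for _ in range(50):
    dql_update(state=0, action=1, reward=1.0, next_state=1, done=True)
```

In the full DQN setting the table lookup becomes a CNN forward pass over stacked game frames, and the same target is used as the regression label for a gradient step on the network's parameters.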