
Maze Solver: Reinforcement Learning with Tabular and Deep Methods

Overview

This project explores the Maze Rider problem, a maze-navigation task that illustrates the trade-off between exploration and exploitation in Reinforcement Learning (RL). The implementation covers both tabular RL methods and deep RL techniques to solve mazes of varying complexity efficiently.

Features

  • State Space Reduction: Compact representation focusing on key elements (agent and goal positions).
  • Reward Shaping: Methods that improve convergence by penalizing wall hits and repetitive actions (see the sketch after this list).
  • Sampling Techniques: Implementation of ε-greedy exploration and Thompson Sampling.
  • Tabular RL Algorithms: Q-Learning, SARSA, Double Q-Learning, and Dyna-Q.
  • Deep RL Methods: DQN, REINFORCE, and A2C with MLP and convolutional neural networks.
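
As a concrete illustration of the reward-shaping idea above, here is a minimal sketch. The function and constant names (`shaped_reward`, `WALL_PENALTY`, `REVISIT_PENALTY`, `STEP_COST`) and the penalty values are hypothetical, not taken from the repository:

```python
# Hypothetical reward-shaping helper; penalty values are illustrative,
# not the ones used in the repository.
WALL_PENALTY = -0.5      # extra penalty when the agent bumps into a wall
REVISIT_PENALTY = -0.1   # discourage looping over already-visited cells
STEP_COST = -0.01        # small per-step cost to favor short paths

def shaped_reward(base_reward, hit_wall, state, visited):
    """Return the environment reward augmented with shaping terms."""
    reward = base_reward + STEP_COST
    if hit_wall:
        reward += WALL_PENALTY
    if state in visited:
        reward += REVISIT_PENALTY
    visited.add(state)
    return reward
```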

Algorithms Implemented

Tabular RL

  • Q-Learning
  • SARSA
  • Double Q-Learning
  • Dyna-Q
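
A minimal sketch of the two core tabular updates, assuming a NumPy Q-table indexed by (state, action); the hyperparameter values are illustrative:

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.99  # illustrative learning rate and discount factor

def q_learning_step(Q, s, a, r, s_next, done):
    """Off-policy TD update: bootstrap from the greedy next action."""
    target = r if done else r + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (target - Q[s, a])

def sarsa_step(Q, s, a, r, s_next, a_next, done):
    """On-policy TD update: bootstrap from the action actually taken."""
    target = r if done else r + GAMMA * Q[s_next, a_next]
    Q[s, a] += ALPHA * (target - Q[s, a])
```

Double Q-Learning keeps two such tables and bootstraps each from the other's greedy action to reduce overestimation bias; Dyna-Q adds extra planning updates drawn from a learned model of the maze.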

Deep RL

  • Deep Q-Networks (DQN)
  • REINFORCE
  • Advantage Actor-Critic (A2C)
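
As a sketch of the DQN side, here is a small MLP Q-network and the standard TD loss with a frozen target network, written in PyTorch; the repository's actual architecture and layer sizes may differ:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small MLP Q-network; layer sizes are illustrative."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def dqn_loss(q_net, target_net, states, actions, rewards,
             next_states, dones, gamma=0.99):
    """Standard DQN TD loss: regress Q(s, a) toward
    r + gamma * max_a' Q_target(s', a')."""
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * next_q * (1.0 - dones)
    return nn.functional.mse_loss(q, target)
```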

Sampling Strategies

  • ε-greedy Exploration
  • Thompson Sampling
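
A minimal sketch of both strategies. The ε-greedy rule is standard; for Thompson Sampling a Beta-Bernoulli posterior over per-action success probabilities is assumed here, which may differ from the posterior model used in the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a uniformly random action,
    otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def thompson_sample(alpha, beta):
    """Beta-Bernoulli Thompson Sampling: draw one sample per action
    from its Beta(alpha_i, beta_i) posterior, act greedily on the draws."""
    return int(np.argmax(rng.beta(alpha, beta)))
```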

For a detailed analysis of the algorithms and their performance, refer to the report.
