🌟 MountainCar: OpenAI Gym Reinforcement Learning 🌟

Master the MountainCar environment with Q-Learning and Visualization


🔗 Visit the Live Demo | 📑 Explore Documentation


πŸ”οΈ What is MountainCar?

MountainCar is a classic reinforcement learning environment from OpenAI Gym in which an underpowered car must learn to rock back and forth, building momentum until it can reach the flag at the top of a steep hill. This repository includes:

  • MountainCar.py: A script for rendering and evaluating a trained Q-learning agent.
  • MountainCarAlgorithm.py: The Q-learning implementation for training the agent.
  • Visualize_q_table.py: Tools for analyzing and visualizing the trained Q-table.
  • ModelData/: A folder containing the pre-trained Q-table (q_table.pkl) and training metrics (training_metrics.pkl).

"Conquer the MountainCar challenge and understand the power of Q-learning!"


📚 Table of Contents

  1. ✨ Features
  2. 🛠️ Tech Stack
  3. 📸 Screenshots
  4. ⚙️ Setup Instructions
  5. 🎯 Target Audience
  6. 🤝 Contributing
  7. 📜 License

✨ Features

  • 🚗 Custom Q-Learning Algorithm: Train agents with hyperparameters like learning rate, discount factor, and epsilon decay (see the sketch after this list).
  • 📈 Q-Table Visualization: Gain insights into the training process with histograms of Q-values.
  • 💻 Modular Codebase: Separate scripts for training, evaluation, and visualization.
  • 🏔️ Enhanced Reward System: Custom rewards for better learning outcomes.
  • 🖥️ Rendering Script: Visualize the trained agent in action with real-time rendering.
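
The feature list above refers to the standard ingredients of tabular Q-learning. As a rough orientation, the sketch below shows what the update for MountainCar typically looks like once the continuous (position, velocity) observation has been binned into Q-table indices; the bin counts, hyperparameter values, and helper names (`discretize`, `epsilon_greedy`, `decay_epsilon`) are illustrative assumptions, not the exact contents of `MountainCarAlgorithm.py`.

```python
# Illustrative Q-learning core for MountainCar (not the repository's exact code).
# q_table is assumed to be a NumPy array of shape N_BINS + (n_actions,).
import numpy as np

N_BINS = (20, 20)         # assumed discretization of (position, velocity)
ALPHA = 0.1               # learning rate
GAMMA = 0.99              # discount factor
EPSILON_DECAY = 0.995     # multiplicative epsilon decay per episode

def discretize(obs, env):
    """Map a continuous observation onto Q-table indices (hypothetical helper)."""
    low, high = env.observation_space.low, env.observation_space.high
    ratios = (obs - low) / (high - low)
    idx = (ratios * (np.array(N_BINS) - 1)).astype(int)
    return tuple(np.clip(idx, 0, np.array(N_BINS) - 1))

def epsilon_greedy(q_table, state, epsilon, n_actions, rng):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_table[state]))

def q_learning_step(q_table, state, action, reward, next_state, done):
    """One tabular Q-learning backup: Q(s,a) += alpha * (target - Q(s,a))."""
    best_next = 0.0 if done else np.max(q_table[next_state])
    target = reward + GAMMA * best_next
    q_table[state + (action,)] += ALPHA * (target - q_table[state + (action,)])

def decay_epsilon(epsilon, min_epsilon=0.01):
    """End-of-episode epsilon decay (assumed multiplicative schedule)."""
    return max(epsilon * EPSILON_DECAY, min_epsilon)
```

Training then amounts to looping over episodes, picking actions with `epsilon_greedy`, calling `q_learning_step` after every transition, and applying `decay_epsilon` at the end of each episode.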

πŸ› οΈ Tech Stack

🌐 Python Technologies

  • Reinforcement Learning: OpenAI Gym
  • Numerical Computing & Visualization: NumPy, Matplotlib
  • Model Persistence: Pickle for saving the Q-table and training metrics
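
Persistence here is plain `pickle`. A minimal sketch of how `ModelData/q_table.pkl` and `ModelData/training_metrics.pkl` might be written and read back is shown below; the array shape and the metric dictionary keys are assumptions for illustration, not a guarantee of what the shipped files contain.

```python
# Saving and reloading the Q-table and metrics with pickle (illustrative shapes/keys).
import os
import pickle
import numpy as np

q_table = np.zeros((20, 20, 3))  # assumed (position_bins, velocity_bins, actions)
metrics = {"episode_rewards": [], "epsilon_history": []}  # assumed metric keys

os.makedirs("ModelData", exist_ok=True)
with open("ModelData/q_table.pkl", "wb") as f:
    pickle.dump(q_table, f)
with open("ModelData/training_metrics.pkl", "wb") as f:
    pickle.dump(metrics, f)

# Later, e.g. in MountainCar.py or Visualize_q_table.py:
with open("ModelData/q_table.pkl", "rb") as f:
    q_table = pickle.load(f)
```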

πŸ› οΈ Scripts and Files

  • MountainCar/: Folder containing:
    • MountainCar.py: Script for rendering the trained agent and observing its performance.
    • MountainCarAlgorithm.py: Core Q-learning algorithm for training the agent.
    • Visualize_q_table.py: Scripts to visualize training metrics and analyze the Q-table.
  • ModelData/: Folder containing:
    • q_table.pkl: Pre-trained Q-table.
    • training_metrics.pkl: Saved training metrics for analysis.

📸 Screenshots

Here are visualizations showcasing the training process, Q-table analysis, and the agent in action:

  1. Total Rewards Per Episode
    Visualizes the total rewards collected by the agent over episodes, showing trends and improvement over time.

  2. Epsilon Decay Over Episodes
    Highlights how the epsilon value decreases during training, balancing exploration and exploitation.

  3. Distribution of Maximum Q-Values
    Demonstrates the distribution of maximum Q-values across the state space, providing insights into the agent's decision-making quality.

  4. MountainCar Agent in Action
    Watch the trained agent perform in the MountainCar environment as it attempts to reach the goal.
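
Plots like the first three can be regenerated from the saved artifacts with Matplotlib. The sketch below assumes the metrics file holds a dictionary with `episode_rewards` and `epsilon_history` lists; adapt the keys to whatever `training_metrics.pkl` actually stores.

```python
# Recreating the training plots from saved artifacts (assumed dictionary keys).
import pickle
import numpy as np
import matplotlib.pyplot as plt

with open("ModelData/training_metrics.pkl", "rb") as f:
    metrics = pickle.load(f)
with open("ModelData/q_table.pkl", "rb") as f:
    q_table = pickle.load(f)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 4))

ax1.plot(metrics["episode_rewards"])      # assumed key
ax1.set_title("Total Rewards Per Episode")
ax1.set_xlabel("Episode")
ax1.set_ylabel("Total reward")

ax2.plot(metrics["epsilon_history"])      # assumed key
ax2.set_title("Epsilon Decay Over Episodes")
ax2.set_xlabel("Episode")
ax2.set_ylabel("Epsilon")

# Histogram of the maximum Q-value per discretized state.
ax3.hist(np.max(q_table, axis=-1).ravel(), bins=50)
ax3.set_title("Distribution of Maximum Q-Values")
ax3.set_xlabel("max_a Q(s, a)")
ax3.set_ylabel("Number of states")

plt.tight_layout()
plt.show()
```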


βš™οΈ Setup Instructions

  1. Clone the Repository
    git clone https://github.com/alienx5499/MountainCar.git
  2. Navigate to the Project Directory
    cd MountainCar
  3. Install Dependencies
    pip install -r requirements.txt
  4. Run Training Script
    python MountainCarAlgorithm.py
  5. Visualize Training Metrics
    python Visualize_q_table.py
  6. Render the Trained Agent
    python MountainCar.py
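
Conceptually, step 6 loads the pickled Q-table and acts greedily in a render-enabled environment. A minimal sketch is shown below; it uses the `gymnasium` API and re-derives the state binning from the Q-table's shape, which may differ from how `MountainCar.py` actually does it.

```python
# Greedy rollout of a trained Q-table with on-screen rendering (illustrative).
import pickle
import numpy as np
import gymnasium as gym

with open("ModelData/q_table.pkl", "rb") as f:
    q_table = pickle.load(f)

env = gym.make("MountainCar-v0", render_mode="human")
n_bins = np.array(q_table.shape[:-1])  # bins inferred from the table shape

def discretize(obs):
    """Map a continuous observation onto the Q-table's index grid."""
    low, high = env.observation_space.low, env.observation_space.high
    idx = ((obs - low) / (high - low) * (n_bins - 1)).astype(int)
    return tuple(np.clip(idx, 0, n_bins - 1))

obs, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(q_table[discretize(obs)]))  # always exploit the learned policy
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```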

🎯 Target Audience

  1. Reinforcement Learning Enthusiasts: Dive deep into Q-learning and OpenAI Gym.
  2. AI Researchers: Analyze and build upon the classic MountainCar environment.
  3. Students and Educators: Use as a learning tool for understanding reinforcement learning.
  4. Developers: Expand the repository with new algorithms and features.

🤝 Contributing

We ❤️ open source! Contributions are welcome to make this project even better.

  1. Fork the repository.
  2. Create your feature branch.
    git checkout -b feature/new-feature
  3. Commit your changes.
    git commit -m "Add a new feature"
  4. Push to the branch and open a pull request.

Refer to our CONTRIBUTING.md for detailed contribution guidelines.


Awesome Contributors

Thank you to everyone who has contributed to this repository!



📜 License

This project is licensed under the MIT License. See the LICENSE file for details.


📬 Feedback & Suggestions

We value your input! Share your thoughts through GitHub Issues.

🔗 Visit the Live Demo | 📑 Explore Documentation


💡 Let's conquer the MountainCar challenge together!
