
Openai_Taxi_V3

Solving The Taxi Problem from "Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition" by Tom Dietterich

Description:

There are four designated locations in the grid world indicated by R(ed), G(reen), Y(ellow), and B(lue). When the episode starts, the taxi starts off at a random square and the passenger is at a random location. The taxi drives to the passenger's location, picks up the passenger, drives to the passenger's destination (another one of the four specified locations), and then drops off the passenger. Once the passenger is dropped off, the episode ends.
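
The environment is available in Gym/Gymnasium as `Taxi-v3`. Below is a minimal sketch of creating and resetting it, assuming the Gymnasium API (where `reset` returns an observation and an info dict; older Gym versions differ slightly):

```python
import gymnasium as gym

# Create the Taxi-v3 environment; render_mode="ansi" returns a text rendering.
env = gym.make("Taxi-v3", render_mode="ansi")

# Reset to a random start: random taxi square, passenger location, and destination.
observation, info = env.reset(seed=42)
print(env.render())                    # text drawing of the 5x5 grid
print("initial state:", observation)   # single integer in [0, 500)
```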

Observations:

There are 500 discrete states since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is in the taxi), and 4 destination locations.

Passenger locations:

  • 0: R(ed)
  • 1: G(reen)
  • 2: Y(ellow)
  • 3: B(lue)
  • 4: in taxi

Destinations:

  • 0: R(ed)
  • 1: G(reen)
  • 2: Y(ellow)
  • 3: B(lue)
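
The three factors above (25 taxi squares × 5 passenger locations × 4 destinations = 500) are packed into a single integer observation. A small sketch of unpacking it, assuming the `decode`/`encode` helpers exposed by the underlying Taxi environment:

```python
import gymnasium as gym

env = gym.make("Taxi-v3")
state, _ = env.reset(seed=0)

# decode() yields (taxi_row, taxi_col, passenger_location, destination_index).
taxi_row, taxi_col, passenger_loc, destination = env.unwrapped.decode(state)
print(f"taxi at ({taxi_row}, {taxi_col}), passenger {passenger_loc}, destination {destination}")

# encode() is the inverse mapping back to the flat state id.
assert env.unwrapped.encode(taxi_row, taxi_col, passenger_loc, destination) == state
```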

Actions:

There are 6 discrete deterministic actions:
  • 0: move south
  • 1: move north
  • 2: move east
  • 3: move west
  • 4: pickup passenger
  • 5: drop off passenger
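
A short sketch of issuing one of these actions and reading the result, assuming the Gymnasium five-tuple `step` API:

```python
import gymnasium as gym

env = gym.make("Taxi-v3")
observation, info = env.reset(seed=0)

# Action 0 = move south (see the list above).
next_observation, reward, terminated, truncated, info = env.step(0)
print(reward)       # -1 for an ordinary move
print(terminated)   # True only after a successful drop-off
```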

Rewards:

  • -1 per step unless another reward is triggered,
  • +20 for successfully delivering the passenger,
  • -10 for executing the "pickup" or "drop-off" action illegally.
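
For reference, the sketch below runs tabular Q-learning against this reward structure. It is only an illustrative baseline under assumed hyperparameters, not necessarily the hierarchical MAXQ decomposition described in the paper:

```python
import numpy as np
import gymnasium as gym

env = gym.make("Taxi-v3")
n_states = env.observation_space.n    # 500
n_actions = env.action_space.n        # 6

# Hyperparameters are assumptions chosen for illustration only.
alpha, gamma, epsilon = 0.1, 0.99, 0.1
q_table = np.zeros((n_states, n_actions))

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection over the 6 actions.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))

        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # One-step Q-learning update toward reward + discounted best next value.
        target = reward + gamma * np.max(q_table[next_state]) * (not terminated)
        q_table[state, action] += alpha * (target - q_table[state, action])
        state = next_state
```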
