
Commit

Update default.html
sichkar-valentyn authored Aug 18, 2018
1 parent 8ebae1f commit c5c5edc
Showing 1 changed file with 11 additions and 0 deletions.
11 changes: 11 additions & 0 deletions docs/_layouts/default.html
@@ -56,6 +56,9 @@ <h3 id="related-works">Related works:</h3>
<li>
<p>The research results for the Neural Network Knowledge Based system for collision avoidance tasks are placed in a separate repository and are available here: <a href="https://github.com/sichkar-valentyn/Matlab_implementation_of_Neural_Networks">Matlab implementation of Neural Networks</a></p>
</li>
<li>
<p>The study of the Semantic Web languages OWL and RDF for knowledge representation of an alarm-warning system is placed in a separate repository and is available here: <a href="https://github.com/sichkar-valentyn/Knowledge_Base_Represented_by_Semantic_Web_Language">Knowledge Base Represented by Semantic Web Language</a></p>
</li>
<li>
<p>The study of Neural Networks for Computer Vision in autonomous vehicles and robotics is placed in a separate repository and is available here: <a href="https://github.com/sichkar-valentyn/Neural_Networks_for_Computer_Vision">Neural Networks for Computer Vision</a></p>
</li>
@@ -136,11 +139,15 @@ <h3 id="rl-q-learning-environment-1-experimental-results"><a name="RL Q-Learning

<p><img src="/Reinforcement_Learning_in_Python/images/Environment-1.gif" alt="Environment-1" width="312" height="341" /> <img src="/Reinforcement_Learning_in_Python/images/Environment-1.png" alt="Environment-1" width="312" height="341" /></p>

<p><br /></p>

<h3 id="q-learning-algorithm-resulted-chart-for-the-environment-1"><a name="Q-learning algorithm resulted chart for the environment-1">Q-learning algorithm resulted chart for the environment-1</a></h3>
<p>Represents number of episodes via number of steps and number of episodes via cost for each episode</p>

<p><img src="/Reinforcement_Learning_in_Python/images/Charts-1.png" alt="RL_Q-Learning_C-1" /></p>

<p><br /></p>

<h3 id="final-q-table-with-values-from-the-final-shortest-route-for-environment-1"><a name="Final Q-table with values from the final shortest route for environment-1">Q-table with values from the final shortest route for environment-1</a></h3>
<p><img src="/Reinforcement_Learning_in_Python/images/Q-Table-E-1.png" alt="RL_Q-Learning_T-1" />
<br />Looking at the values in the table, we can see the decision for the next action made by the agent (a mobile robot). Once the Q-table is filled with knowledge, the sequence of final actions to reach the goal is the following: <em>down-right-down-down-down-right-down-right-down-right-down-down-right-right-up-up.</em>
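
To illustrate that read-out, here is a hedged Python sketch of following the greedy policy through a filled Q-table; the `transition` function, the action ordering, and all names are assumptions for illustration, not the repository's actual code:

```python
# Hypothetical sketch: extract the final route from a learned Q-table by
# always taking the action with the highest Q-value in the current state.
import numpy as np

ACTIONS = ['up', 'down', 'left', 'right']  # assumed action ordering

def greedy_route(q_table, transition, start_state, goal_state, max_steps=100):
    """q_table: (n_states, n_actions) array of learned Q-values.
    transition: assumed function mapping (state, action) -> next_state."""
    state, route = start_state, []
    for _ in range(max_steps):
        if state == goal_state:
            break
        action = int(np.argmax(q_table[state]))
        route.append(ACTIONS[action])
        state = transition(state, action)
    return route  # e.g. ['down', 'right', 'down', ...] as in the sequence above
```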
@@ -153,11 +160,15 @@ <h3 id="rl-q-learning-environment-2-experimental-results"><a name="RL Q-Learning

<p><img src="/Reinforcement_Learning_in_Python/images/Environment-2.png" alt="RL_Q-Learning_E-2" /></p>

<p><br /></p>

<h3 id="q-learning-algorithm-resulted-chart-for-the-environment-2"><a name="Q-learning algorithm resulted chart for the environment-2">Q-learning algorithm resulted chart for the environment-2</a></h3>
<p>Represents number of episodes via number of steps and number of episodes via cost for each episode</p>

<p><img src="/Reinforcement_Learning_in_Python/images/Charts-2.png" alt="RL_Q-Learning_C-2" /></p>

<p><br /></p>

<h3 id="final-q-table-with-values-from-the-final-shortest-route-for-environment-1-1"><a name="Final Q-table with values from the final shortest route for environment-1">Q-table with values from the final shortest route for environment-1</a></h3>
<p><img src="/Reinforcement_Learning_in_Python/images/Q-Table-E-2.png" alt="RL_Q-Learning_T-2" /></p>

