From 099777e0c506cbd91a15ac2e9704aac46efc4d4f Mon Sep 17 00:00:00 2001
From: "Benjamin A. Beasley"
Date: Wed, 11 Oct 2023 23:04:02 -0400
Subject: [PATCH] =?UTF-8?q?Fix=20a=20trivial=20typo=20(=E2=80=9Cof=20and?=
 =?UTF-8?q?=E2=80=9D=E2=86=92=E2=80=9Cof=20an=E2=80=9D)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0bf4a62..c0e3fd7 100644
--- a/README.md
+++ b/README.md
@@ -349,7 +349,7 @@ A dedicated `Neurons` class called `SuccessorFeatures` learns the successor feat
 
 `SuccessorFeatures` are a specific instance of a more general class of neurons called `ValueNeuron`s which learn value function for any reward density under the `Agent`s motion policy. This can be used to do reinforcement learning tasks such as finding rewards hidden behind walls etc as shown in [this demo](./demos/reinforcement_learning_example.ipynb).
 
-We also have a working examples of and actor critic algorithm using deep neural networks [here](./demos/actor_critic_example.ipynb)
+We also have a working examples of an actor critic algorithm using deep neural networks [here](./demos/actor_critic_example.ipynb)
 
 Finally, we are working on a dedicated subpackage -- (`RATS`: RL Agent Toolkit and Simulator) -- to host all this RL stuff and more so keep an eye out.