Documentation for Goal conditioning #5149

Merged 12 commits on Mar 29, 2021
31 changes: 31 additions & 0 deletions docs/Learning-Environment-Design-Agents.md
@@ -19,6 +19,8 @@
- [RayCast Observation Summary & Best Practices](#raycast-observation-summary--best-practices)
- [Variable Length Observations](#variable-length-observations)
- [Variable Length Observation Summary & Best Practices](#variable-length-observation-summary--best-practices)
- [Goal Observations](#goal-observations)
- [Goal Observation Summary & Best Practices](#goal-observation-summary--best-practices)
- [Actions and Actuators](#actions-and-actuators)
- [Continuous Actions](#continuous-actions)
- [Discrete Actions](#discrete-actions)
@@ -560,6 +562,35 @@ between -1 and 1.
of an entity to the `BufferSensor`.
- Normalize the entities observations before feeding them into the `BufferSensor`.

### Goal Observations

It is possible for agents to collect observations that will be treated as a "goal".
A goal is used to condition the policy of the Agent, which is the mapping from
observations to actions. This means that if the goal changes, the behavior of
the Agent will change as well. Note that this is true
for any observation, since all observations influence the policy of the Agent to
some degree. But by specifying a goal explicitly, we can make this conditioning
more important to the agent. This feature can be used in settings where an agent
must learn to solve different tasks that are similar in some respects, because
the agent can reuse what it learns on one task to generalize better to the
others. In Unity, you can specify that a `VectorSensor` or a `CameraSensor` is a
goal by attaching a `VectorSensorComponent` or a `CameraSensorComponent` to the
Agent and selecting `Goal` as the `Observation Type`.
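
The sketch below illustrates one way this can look in an Agent script: a
`VectorSensorComponent` with `Observation Type` set to `Goal` is attached to the
Agent's GameObject, and the Agent writes the current goal into it each step. The
`GoalConditionedAgent` class and its two tasks are hypothetical, and the
`GetSensor()` / `AddOneHotObservation()` calls reflect the sensor API as
currently understood; check them against your ML-Agents package version.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Minimal sketch of a goal-conditioned Agent. Assumes a VectorSensorComponent
// (Observation Size = 2, Observation Type = Goal) is attached to the same
// GameObject in the Inspector.
public class GoalConditionedAgent : Agent
{
    VectorSensorComponent m_GoalSensor;
    int m_CurrentGoal; // e.g. 0 = "collect green", 1 = "collect red" (hypothetical tasks)

    public override void Initialize()
    {
        // Grab the goal-typed sensor component attached to this Agent.
        m_GoalSensor = GetComponent<VectorSensorComponent>();
    }

    public override void OnEpisodeBegin()
    {
        // Pick a new goal for this episode.
        m_CurrentGoal = Random.Range(0, 2);
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Regular observations go to the default vector sensor as usual.
        sensor.AddObservation(transform.localPosition);

        // Goal observations are written to the goal-typed sensor as a one-hot
        // vector; GetSensor() is assumed to expose the underlying VectorSensor.
        m_GoalSensor.GetSensor().AddOneHotObservation(m_CurrentGoal, 2);
    }
}
```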
On the trainer side, there are two different ways to condition the policy. This
setting is determined by the
[conditioning_type parameter](Training-Configuration-File.md#common-trainer-configurations).
If set to `hyper` (default), a [HyperNetwork](https://arxiv.org/pdf/1609.09106.pdf)
will be used to generate some of the weights of the policy using the goal
observations as input. Note that using a HyperNetwork is computationally
expensive, so it is recommended to use a smaller number of hidden units in the
policy to compensate. If set to `none`, the goals will be treated as regular
observations.
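
As a rough sketch, this is how the setting could appear in a trainer
configuration file; the behavior name `MyGoalAgent` and the values shown are
placeholders.

```yaml
behaviors:
  MyGoalAgent:                   # placeholder behavior name
    trainer_type: ppo
    network_settings:
      conditioning_type: hyper   # or "none" to treat goals as regular observations
      hidden_units: 128          # consider a smaller value when using "hyper"
```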

#### Goal Observation Summary & Best Practices
- Attach a `VectorSensorComponent` or `CameraSensorComponent` to an agent and
  set its `Observation Type` to `Goal` to use the feature.
- Set the `conditioning_type` parameter in the training configuration.
- Reduce the number of hidden units in the network when using the HyperNetwork
conditioning type.

## Actions and Actuators

1 change: 1 addition & 0 deletions docs/Training-Configuration-File.md
@@ -43,6 +43,7 @@ choice of the trainer (which we review in subsequent sections).
| `network_settings -> num_layers` | (default = `2`) The number of hidden layers in the neural network. Corresponds to how many hidden layers are present after the observation input, or after the CNN encoding of the visual observation. For simple problems, fewer layers are likely to train faster and more efficiently. More layers may be necessary for more complex control problems. <br><br> Typical range: `1` - `3` |
| `network_settings -> normalize` | (default = `false`) Whether normalization is applied to the vector observation inputs. This normalization is based on the running average and variance of the vector observation. Normalization can be helpful in cases with complex continuous control problems, but may be harmful with simpler discrete control problems. |
| `network_settings -> vis_encode_type` | (default = `simple`) Encoder type for encoding visual observations. <br><br> `simple` (default) uses a simple encoder which consists of two convolutional layers, `nature_cnn` uses the CNN implementation proposed by [Mnih et al.](https://www.nature.com/articles/nature14236), consisting of three convolutional layers, and `resnet` uses the [IMPALA Resnet](https://arxiv.org/abs/1802.01561) consisting of three stacked layers, each with two residual blocks, making a much larger network than the other two. `match3` is a smaller CNN ([Gudmundsoon et al.](https://www.researchgate.net/publication/328307928_Human-Like_Playtesting_with_Deep_Learning)) that is optimized for board games, and can be used down to visual observation sizes of 5x5. |
| `network_settings -> conditioning_type` | (default = `hyper`) Conditioning type for the policy using goal observations. <br><br> `none` treats the goal observations as regular observations, while `hyper` (default) uses a HyperNetwork that takes the goal observations as input to generate some of the weights of the policy. Note that when using `hyper` the number of parameters of the network increases greatly. Therefore, it is recommended to reduce the number of `hidden_units` when using this `conditioning_type`. |
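
For reference, a sketch of a `network_settings` block that collects the options
from the table above with their documented defaults; the behavior name and the
`hidden_units` value are placeholders.

```yaml
behaviors:
  MyBehavior:                    # placeholder behavior name
    trainer_type: ppo
    network_settings:
      num_layers: 2              # default
      normalize: false           # default
      vis_encode_type: simple    # default
      conditioning_type: hyper   # default; "none" treats goals as regular observations
      hidden_units: 128          # example value; reduce when conditioning_type is "hyper"
```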


## Trainer-specific Configurations