Fix typos in documentation. (#2283)
0mdc authored Dec 15, 2023
1 parent 79a19eb commit 288ec1f
Showing 5 changed files with 21 additions and 21 deletions.
6 changes: 3 additions & 3 deletions docs/AUDIO.md
@@ -53,9 +53,9 @@ The RLRAudioPropagationConfiguration() exposes various configuration options tha
| transmission | bool | false | Enable transmission of rays |
| meshSimplification | bool | false | Uses a series of mesh simplification operations to reduce the mesh complexity for ray tracing. Vertex welding is applied, followed by simplification using the edge collapse algorithm. |
| temporalCoherence | bool | false | Turn on/off temporal smoothing of the impulse response. This uses the impulse response from the previous simulation time step as a starting point for the next time step. This reduces the number of rays required by about a factor of 10, resulting in faster simulations, but should not be used if the motion of sources/listeners is not continuous. |
- | dumpWaveFiles | bool | false | Write the wave files for different bands. Will be writted to the AudioSensorSpec's [outputDirectory](#outputDirectory) |
+ | dumpWaveFiles | bool | false | Write the wave files for different bands. Will be written to the AudioSensorSpec's [outputDirectory](#outputDirectory) |
| enableMaterials | bool | true | Enable audio materials |
- | writeIrToFile | bool | false | Write the final impulse response to a file. Will be writted to the AudioSensorSpec's [outputDirectory](#outputDirectory) |
+ | writeIrToFile | bool | false | Write the final impulse response to a file. Will be written to the AudioSensorSpec's [outputDirectory](#outputDirectory) |
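
For context, a minimal sketch of toggling these flags from Python. It assumes the `AudioSensorSpec` exposes this configuration through an `acousticsConfig` attribute and that the dumps land in the spec's `outputDirectory`, as the table above indicates; the path is hypothetical.

```python
import habitat_sim

audio_sensor_spec = habitat_sim.AudioSensorSpec()
audio_sensor_spec.uuid = "audio_sensor"
audio_sensor_spec.outputDirectory = "/tmp/audio"  # hypothetical output path

# Both dumps are written under outputDirectory (see table above).
audio_sensor_spec.acousticsConfig.dumpWaveFiles = True
audio_sensor_spec.acousticsConfig.writeIrToFile = True
```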



@@ -72,7 +72,7 @@ This section describes the channel layout struct, which defines what the output

- ##### RLRAudioPropagationChannelLayoutType

- The channel layout describes how the audio output will be experienced by the listener. Lets look at channel layout types that are currently supported.
+ The channel layout describes how the audio output will be experienced by the listener. Let's look at channel layout types that are currently supported.

|Enum|Usage|
|-----------|---------|
8 changes: 4 additions & 4 deletions docs/docs.rst
@@ -92,7 +92,7 @@
.. py:function:: habitat_sim.nav.PathFinder.snap_point
:summary: Snaps a point to the closet navigable location

- Will only search within a 4x8x4 cube centerred around the point.
+ Will only search within a 4x8x4 cube centered around the point.
If there is no navigable location within that cube, no navigable point will be found.

:param point: The starting location of the agent
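
A short sketch of the behavior described here, assuming a configured `Simulator` named `sim` and that a failed snap yields NaN coordinates:

```python
import math

snapped = sim.pathfinder.snap_point([1.5, 0.1, 2.0])
# Only a 4x8x4 cube around the query is searched; if it contains no
# navigable location, the result is assumed here to come back as NaNs.
if math.isnan(snapped[0]):
    print("no navigable location near the query point")
```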
@@ -110,7 +110,7 @@
=======

We currently have the following actions added by default. Any action not
- registered with an explict name is given the snake case version of the
+ registered with an explicit name is given the snake case version of the
class name, i.e. ``MoveForward`` can be accessed with the name
``move_forward``. See `registry.register_move_fn`, `SceneNodeControl`,
and `ActuationSpec`
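
For illustration, a sketch of registering a custom action through that registry; the class body is illustrative only:

```python
import habitat_sim
import magnum as mn
from habitat_sim.agent.controls import ActuationSpec

@habitat_sim.registry.register_move_fn(body_action=True)
class MoveUp(habitat_sim.SceneNodeControl):  # no explicit name: registered as "move_up"
    def __call__(self, scene_node, actuation_spec: ActuationSpec) -> None:
        # Translate the agent's scene node along its local up axis.
        scene_node.translate_local(mn.Vector3(0.0, actuation_spec.amount, 0.0))
```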
@@ -142,11 +142,11 @@
==============

The Semantic scene provides access to semantic information about the given
- environement
+ environment

.. note-warning::

- Not avaliable for all datasets.
+ Not available for all datasets.

.. py:module:: habitat_sim.utils.common
6 changes: 3 additions & 3 deletions docs/noise_models.rst
@@ -6,7 +6,7 @@
and observations from real sensors.

A noise model can be applied to a sensor by specifying the name of the noise
- model in the `sensor.SensorSpec.noise_model` feild.
+ model in the `sensor.SensorSpec.noise_model` field.
Arguments can be passed to the noise model constructor as keyword arguments using
the `sensor.SensorSpec.noise_model_kwargs` field. For instance, to use the `RedwoodDepthNoiseModel`
with a ``noise_multiplier`` of 5
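
That example might look like the following sketch; the `CameraSensorSpec` type and `uuid` are assumptions, while the `noise_model` fields come from the text above:

```python
import habitat_sim

depth_spec = habitat_sim.CameraSensorSpec()           # assumed spec type
depth_spec.uuid = "depth"                             # hypothetical uuid
depth_spec.sensor_type = habitat_sim.SensorType.DEPTH
depth_spec.noise_model = "RedwoodDepthNoiseModel"
depth_spec.noise_model_kwargs = dict(noise_multiplier=5)
```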
@@ -19,7 +19,7 @@
These noise models are commonly the result of contributions from various research projects.
- If you use a noise model in your research, please cite the relevant work specified by the docummentation
+ If you use a noise model in your research, please cite the relevant work specified by the documentation


**Depth Noise Models**
@@ -52,4 +52,4 @@
.. py:function:: habitat_sim.sensors.noise_models.RedwoodDepthNoiseModel.__init__
:param gpu_device_id: The ID of CUDA device to use (only applicable if habitat-sim was built with ``--with-cuda``)
- :param noise_multiplier: Multipler for the Gaussian random-variables. This reduces or increases the amount of noise
+ :param noise_multiplier: Multiplier for the Gaussian random-variables. This reduces or increases the amount of noise
12 changes: 6 additions & 6 deletions docs/pages/image-extractor.rst
@@ -25,7 +25,7 @@ Habitat Sim provides an API to extract static images from a scene. The main clas
* pose_extractor_name: The name of the pose extractor used to programmatically define camera poses for image extraction. If the user registered a custom pose extractor (see "Custom Pose Extraction" section), this is the name given during registration. Default "closest_point_extractor".
* shuffle: Whether to shuffle the extracted images once they have been extracted. Default True.
* split: A tuple of train/test split percentages. Must add to 100. Default (70, 30).
- * use_chaching: If True, ImageExtractor caches images in memory for quicker access during training. Default True.
+ * use_caching: If True, ImageExtractor caches images in memory for quicker access during training. Default True.
* pixels_per_meter: Resolution of topdown map (explained below). 0.1 means each pixel in the topdown map represents 0.1 x 0.1 meters in the coordinate system of the scene. Default 0.1.
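
Pulled together, a constructor call might look like this sketch; `scene_filepath`, `img_size`, and `output` sit in the collapsed part of the parameter list and are assumptions here:

```python
from habitat_sim.utils.data import ImageExtractor

extractor = ImageExtractor(
    scene_filepath="data/scene.glb",        # assumed parameter, hypothetical path
    img_size=(512, 512),                    # assumed parameter
    output=["rgba", "depth", "semantic"],   # assumed parameter
    pose_extractor_name="closest_point_extractor",
    shuffle=True,
    split=(70, 30),
    use_caching=True,
    pixels_per_meter=0.1,
)
```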

**Methods**
@@ -60,7 +60,7 @@ Returns a list of the names of all the semantic classes represented in the curre

``close() -> None``:

- The ImageExtractor uses an instance of habitat_sim.Simulator on the backend to extract images, of which only one can be instantiated at a time. Therefore, if you try to instantiate two ImageExtractors, you will get an error. You must call this method before instantiating another ImageExtractor. This method deletes the simulator associated with the current ImageExtracor instance.
+ The ImageExtractor uses an instance of habitat_sim.Simulator on the backend to extract images, of which only one can be instantiated at a time. Therefore, if you try to instantiate two ImageExtractors, you will get an error. You must call this method before instantiating another ImageExtractor. This method deletes the simulator associated with the current ImageExtractor instance.
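
A usage sketch of that one-at-a-time constraint (paths hypothetical):

```python
extractor_a = ImageExtractor(scene_filepath="scene_a.glb")
sample = extractor_a[0]   # extract what you need
extractor_a.close()       # release the backing Simulator first

extractor_b = ImageExtractor(scene_filepath="scene_b.glb")  # safe only after close()
```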

-----

@@ -164,7 +164,7 @@ Make sure you have Habitat Sim correctly installed and the data downloaded (see
# different chairs will be marked with different id's). So we need
# to create a mapping from these instance id to the class labels we
# want to predict. We will use the below dictionaries to define a
- # funtion that takes the raw output of the semantic sensor and creates
+ # function that takes the raw output of the semantic sensor and creates
# a 2d numpy array of out class labels.
self.labels = {
'background': 0,
@@ -448,7 +448,7 @@ After training for a short time on a small training dataset, we are able to see
.. image:: ../images/semantic-segmentation-results.png


- On the top row we see the input to the model which is the batch of RGB images. On the middle row is the grouth truth masks. On the bottom row are the masks that the model predicted.
+ On the top row we see the input to the model which is the batch of RGB images. On the middle row is the ground truth masks. On the bottom row are the masks that the model predicted.



@@ -477,7 +477,7 @@ habitat_sim.registry (i.e. adding the @registry.register_pose_extractor(name) de

The default behavior is reliant on something called the topdown view of a scene, which is just a two-dimensional birds-eye representation of the scene. The topdown view is a two-dimensional array of 1s and 0s where 1 means that pixel is "navigable" in the scene (i.e. an agent can walk on top of that point) and 0 means that pixel is "unnavigable". For more detailed information about navigability and computing topdown maps, please refer to the `Habitat-Sim Basics for Navigation Colab notebook`_.

- The default pose extractor is the ClosestPointExtractor, which behaves as follows. For each camera poisition, the pose extractor will aim the camera pose at the closest point that is "unnvaigable". For example, if the camera position is right next to a chair in the scene, and that chair is the closest point that an agent in the environment cannot walk on top of, the camera will point at the chair.
+ The default pose extractor is the ClosestPointExtractor, which behaves as follows. For each camera position, the pose extractor will aim the camera pose at the closest point that is "unnvaigable". For example, if the camera position is right next to a chair in the scene, and that chair is the closest point that an agent in the environment cannot walk on top of, the camera will point at the chair.

The ClosestPointExtractor will use the topdown view of the scene, which is given to it in its constructor, and create a grid of evenly spaced points. Each of those points will then yield a closest point as described above, which is used to define a camera angle, and subsequently a camera pose.
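
A hedged sketch of the registration path named above; the registered name is hypothetical and the (camera point, point of interest, filepath) pose format is an assumption:

```python
import numpy as np
from habitat_sim import registry
from habitat_sim.utils.data import PoseExtractor

@registry.register_pose_extractor(name="random_pose_extractor")  # hypothetical name
class RandomPoseExtractor(PoseExtractor):
    def extract_poses(self, view, fp):
        # view: the 2D topdown array of 1s (navigable) and 0s described above
        height, width = view.shape
        poses = []
        while len(poses) < 4:
            r, c = np.random.randint(0, height), np.random.randint(0, width)
            if r > 0 and view[r][c] == 1.0:  # camera stands on a navigable point
                poses.append(((r, c), (r - 1, c), fp))  # aim one cell "up" the map
        return poses
```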

@@ -580,7 +580,7 @@ events happen:
self.cfg = self._config_sim(sim.config.sim_cfg.scene_id, img_size)
sim.reconfigure(self.cfg)
- 2. A towndown view of the scene is created, which a 2d numpy array consisting of 0.0s (meaning the point is unnavigable) and 1.0s (meaning the point is navigable). We create a list of 3-tuples (<topdown view>, <scene filepath>, <reference point for the scene>), one for each scene. This allows us to switch between multiple scenes and have a coordinate reference point within each scene.
+ 2. A topdown view of the scene is created, which a 2d numpy array consisting of 0.0s (meaning the point is unnavigable) and 1.0s (meaning the point is navigable). We create a list of 3-tuples (<topdown view>, <scene filepath>, <reference point for the scene>), one for each scene. This allows us to switch between multiple scenes and have a coordinate reference point within each scene.

.. code:: py
10 changes: 5 additions & 5 deletions docs/pages/managed-rigid-object-tutorial.rst
@@ -64,7 +64,7 @@ be modified directly.
:end-before: # [/object_user_configurations]

Forces and torques can be applied directly to the object using :ref:`habitat_sim.physics.ManagedRigidObject.apply_force` and :ref:`habitat_sim.physics.ManagedRigidObject.apply_torque`.
- Instantanious initial velocities can also be set using the object's properties, :ref:`habitat_sim.physics.ManagedRigidObject.linear_velocity` and :ref:`habitat_sim.physics.ManagedRigidObject.angular_velocity`.
+ Instantaneous initial velocities can also be set using the object's properties, :ref:`habitat_sim.physics.ManagedRigidObject.linear_velocity` and :ref:`habitat_sim.physics.ManagedRigidObject.angular_velocity`.

In the example below, a constant anti-gravity force is applied to the boxes' centers of mass (COM) causing them to float in the air.
A constant torque is also applied which gradually increases the angular velocity of the boxes.
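
That anti-gravity setup might look like this sketch, assuming an existing `Simulator` `sim` and a `ManagedRigidObject` `obj`:

```python
import magnum as mn

# Counteract gravity at the center of mass (COM) so the object floats.
anti_grav_force = -sim.get_gravity() * obj.mass
obj.apply_force(anti_grav_force, mn.Vector3(0.0, 0.0, 0.0))  # zero offset = at COM
obj.apply_torque(mn.Vector3(0.0, 0.01, 0.0))                 # small constant torque
sim.step_physics(1.0 / 60.0)  # applied forces/torques are cleared after each step
```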
@@ -117,7 +117,7 @@ This is useful for synchronizing the simulation state of objects to a known stat

However, when applying model or algorithmic control it is more convenient to specify a constant linear and angular velocity for the object which will be simulated without manual integration.
The object's :ref:`habitat_sim.physics.VelocityControl` structure provides this functionality and can be acquired directly from the object via the read only property :ref:`habitat_sim.physics.ManagedRigidObject.velocity_control`.
- Once paramters are set, control takes effect immediately on the next simulation step as shown in the following example.
+ Once parameters are set, control takes effect immediately on the next simulation step as shown in the following example.

.. include:: ../../examples/tutorials/nb_python/managed_rigid_object_tutorial.py
:code: py
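
A minimal sketch of that flow, assuming an existing `sim` and `ManagedRigidObject` `obj` (field names follow `habitat_sim.physics.VelocityControl`):

```python
import magnum as mn

vel_control = obj.velocity_control   # read-only property on the object
vel_control.controlling_lin_vel = True
vel_control.controlling_ang_vel = True
vel_control.lin_vel_is_local = True  # interpret velocity in the object's local frame
vel_control.linear_velocity = mn.Vector3(0.0, 0.0, -1.0)
vel_control.angular_velocity = mn.Vector3(0.0, 2.0, 0.0)
sim.step_physics(1.0 / 60.0)         # control takes effect on this next step
```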
@@ -142,7 +142,7 @@ Velocities can also be specified in the local space of the object to easily appl

Previous stages of this tutorial have covered adding objects to the world and manipulating them by setting positions, velocity, forces, and torques.
In all of these examples, the agent has been a passive onlooker observing the scene.
- However, the agent can also be attached to a simulated object for embodiement and control.
+ However, the agent can also be attached to a simulated object for embodiment and control.
This can be done by passing the :ref:`Agent`'s scene node to the :ref:`habitat_sim.physics.RigidObjectManager.add_object_by_template_handle` or :ref:`habitat_sim.physics.RigidObjectManager.add_object_by_template_id` functions.

In this example, the agent is embodied by a rigid robot asset and the :ref:`habitat_sim.physics.VelocityControl` structure is used to control the robot's actions.
@@ -205,7 +205,7 @@ Objects can be configured to fill different roles in a simulated scene by assign
Constant forces and torques can be applied to these objects with :ref:`habitat_sim.physics.ManagedRigidObject.apply_force` and :ref:`habitat_sim.physics.ManagedRigidObject.apply_torque`.
These are cleared after each call to :ref:`Simulator.step_physics`.

- Instantanious initial velocities can also be set for these objects using their :ref:`habitat_sim.physics.ManagedRigidObject.linear_velocity` and :ref:`habitat_sim.physics.ManagedRigidObject.angular_velocity` properties.
+ Instantaneous initial velocities can also be set for these objects using their :ref:`habitat_sim.physics.ManagedRigidObject.linear_velocity` and :ref:`habitat_sim.physics.ManagedRigidObject.angular_velocity` properties.

- :ref:`habitat_sim.physics.MotionType.KINEMATIC`

@@ -223,6 +223,6 @@ This can be queried from the simulator with :ref:`habitat_sim.physics.ManagedRig

For :ref:`habitat_sim.physics.MotionType.KINEMATIC` objects, velocity control will directly modify the object's rigid state.

- For :ref:`habitat_sim.physics.MotionType.DYNAMIC` object, velocity control will set the initial velocity of the object before simualting.
+ For :ref:`habitat_sim.physics.MotionType.DYNAMIC` object, velocity control will set the initial velocity of the object before simulating.
In this case, velocity will be more accurate with smaller timestep requests to :ref:`Simulator.step_physics`.
Note that dynamics such as forces, collisions, and gravity will affect these objects, but expect extreme damping as velocities are being manually set before each timestep when controlled.
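
A closing sketch contrasting the two motion types under velocity control, assuming `obj` and `sim` as above:

```python
import habitat_sim

# KINEMATIC: velocity control writes the object's rigid state directly.
obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
sim.step_physics(1.0 / 60.0)

# DYNAMIC: velocity control sets the initial velocity before simulating;
# smaller timesteps track the requested velocity more accurately.
obj.motion_type = habitat_sim.physics.MotionType.DYNAMIC
sim.step_physics(1.0 / 120.0)
```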
