
Commit 3f7e102
Fix typos in doc directory (#51054)
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
co63oc authored Mar 4, 2025
1 parent 6a6981b commit 3f7e102
Showing 45 changed files with 64 additions and 64 deletions.
2 changes: 1 addition & 1 deletion ci/ray_ci/doc/cmd_check_api_discrepancy.py
@@ -122,7 +122,7 @@ def main(ray_checkout_dir: str, team: str) -> None:

all_pass = True
# Needs to do core first, otherwise, the APIs in other teams may be covered by core.
-# This is due to the side effect ofo "importlib" and walking through the modules.
+# This is due to the side effect of "importlib" and walking through the modules.
if not _check_team(ray_checkout_dir, "core"):
all_pass = False
for team in TEAM_API_CONFIGS:
2 changes: 1 addition & 1 deletion doc/source/cluster/configure-manage-dashboard.md
@@ -258,7 +258,7 @@ When the Grafana instance requires user authentication, the following settings h

#### Troubleshooting

-##### Dashboard message: either Prometheus or Grafana server is not deteced
+##### Dashboard message: either Prometheus or Grafana server is not detected
If you have followed the instructions above to set up everything, run the connection checks below in your browser:
* check Head Node connection to Prometheus server: add `api/prometheus_health` to the end of Ray Dashboard URL (for example: http://127.0.0.1:8265/api/prometheus_health) and visit it.
* check Head Node connection to Grafana server: add `api/grafana_health` to the end of Ray Dashboard URL (for example: http://127.0.0.1:8265/api/grafana_health) and visit it.
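A minimal sketch of scripting those two checks, assuming the default Dashboard address used in the examples above:

```python
import urllib.request

BASE = "http://127.0.0.1:8265"  # default Ray Dashboard address; adjust for your cluster

# Probe both health endpoints mentioned above and print the HTTP status.
for endpoint in ("api/prometheus_health", "api/grafana_health"):
    with urllib.request.urlopen(f"{BASE}/{endpoint}") as resp:
        print(endpoint, "->", resp.status)
```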
@@ -265,9 +265,9 @@ spec:
cpu: "2"
memory: "4G"
requests:
-# For production use-cases, we recommend specifying integer CPU reqests and limits.
+# For production use-cases, we recommend specifying integer CPU requests and limits.
# We also recommend setting requests equal to limits for both CPU and memory.
-# For this example, we use a 500m CPU request to accomodate resource-constrained local
+# For this example, we use a 500m CPU request to accommodate resource-constrained local
# Kubernetes testing environments such as Kind and minikube.
cpu: "2"
# The rest state memory usage of the Ray head node is around 1Gb. We do not
@@ -374,9 +374,9 @@ spec:
limits:
cpu: "1"
memory: "1G"
-# For production use-cases, we recommend specifying integer CPU reqests and limits.
+# For production use-cases, we recommend specifying integer CPU requests and limits.
# We also recommend setting requests equal to limits for both CPU and memory.
-# For this example, we use a 500m CPU request to accomodate resource-constrained local
+# For this example, we use a 500m CPU request to accommodate resource-constrained local
# Kubernetes testing environments such as Kind and minikube.
requests:
cpu: "500m"
@@ -157,9 +157,9 @@ spec:
cpu: "1"
memory: "2G"
requests:
-# For production use-cases, we recommend specifying integer CPU reqests and limits.
+# For production use-cases, we recommend specifying integer CPU requests and limits.
# We also recommend setting requests equal to limits for both CPU and memory.
-# For this example, we use a 500m CPU request to accomodate resource-constrained local
+# For this example, we use a 500m CPU request to accommodate resource-constrained local
# Kubernetes testing environments such as Kind and minikube.
cpu: "500m"
# The rest state memory usage of the Ray head node is around 1Gb. We do not
@@ -223,9 +223,9 @@ spec:
limits:
cpu: "1"
memory: "1G"
-# For production use-cases, we recommend specifying integer CPU reqests and limits.
+# For production use-cases, we recommend specifying integer CPU requests and limits.
# We also recommend setting requests equal to limits for both CPU and memory.
-# For this example, we use a 500m CPU request to accomodate resource-constrained local
+# For this example, we use a 500m CPU request to accommodate resource-constrained local
# Kubernetes testing environments such as Kind and minikube.
requests:
cpu: "500m"
@@ -102,7 +102,7 @@ spec:
# resource accounting. K8s requests are not used by Ray.
resources:
limits:
-# Slightly less than 16 to accomodate placement on 16 vCPU virtual machine.
+# Slightly less than 16 to accommodate placement on 16 vCPU virtual machine.
cpu: "14"
memory: "54Gi"
# The node that hosts this pod should have at least 1000Gi disk space,
@@ -284,7 +284,7 @@ Training finished iteration 10 at 2024-04-29 17:27:43. Total running time: 1min
Training saved a checkpoint for iteration 10 at: (local)/mnt/cluster_storage/finetune-resnet/TorchTrainer_96923_00000_0_2024-04-29_17-21-29/checkpoint_000009
Training completed after 10 iterations at 2024-04-29 17:27:45. Total running time: 1min 9s
-2024-04-29 17:27:46,236 WARNING experiment_state.py:323 -- Experiment checkpoint syncing has been triggered multiple times in the last 30.0 seconds. A sync will be triggered whenever a trial has checkpointed more than `num_to_keep` times since last sync or if 300 seconds have passed since last sync. If you have set `num_to_keep` in your `CheckpointConfig`, consider increasing the checkpoint frequency or keeping more checkpoints. You can supress this warning by changing the `TUNE_WARN_EXCESSIVE_EXPERIMENT_CHECKPOINT_SYNC_THRESHOLD_S` environment variable.
+2024-04-29 17:27:46,236 WARNING experiment_state.py:323 -- Experiment checkpoint syncing has been triggered multiple times in the last 30.0 seconds. A sync will be triggered whenever a trial has checkpointed more than `num_to_keep` times since last sync or if 300 seconds have passed since last sync. If you have set `num_to_keep` in your `CheckpointConfig`, consider increasing the checkpoint frequency or keeping more checkpoints. You can suppress this warning by changing the `TUNE_WARN_EXCESSIVE_EXPERIMENT_CHECKPOINT_SYNC_THRESHOLD_S` environment variable.
Result(
metrics={'loss': 0.08333033206416111, 'acc': 0.23529411764705882},
@@ -216,7 +216,7 @@ NAME ACCEPTED PROVISIONED
rayjob-pytorch-text-classifier-nv77q-e95ec-rayjob-gpu-1 True False False 22s
```

-Note the two coloumns in the output: `ACCEPTED` and `PROVISIONED`.
+Note the two columns in the output: `ACCEPTED` and `PROVISIONED`.
`ACCEPTED=True` means that Kueue and the Kubernetes node autoscaler have acknowledged the request.
`PROVISIONED=True` means that the Kubernetes node autoscaler has completed provisioning nodes.
Once both of these conditions are true, the ProvisioningRequest is satisfied.
2 changes: 1 addition & 1 deletion doc/source/cluster/kubernetes/user-guides/config.md
@@ -177,7 +177,7 @@ Python version.
To distribute custom code dependencies across your cluster, you can build a custom container image,
using one of the [official Ray images](https://hub.docker.com/r/rayproject/ray) as the base.
See {ref}`this guide <docker-images>` to learn more about the official Ray images.
-For dynamic dependency management geared towards iteration and developement,
+For dynamic dependency management geared towards iteration and development,
you can also use {ref}`Runtime Environments <runtime-environments>`.

For `kuberay-operator` versions 1.1.0 and later, the Ray container image must have `wget` installed in it.
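A minimal sketch of the runtime-environments alternative mentioned above (the pinned package is an assumption for illustration):

```python
import ray

# Each worker that runs f() gets this pip package installed on the fly,
# without rebuilding the container image.
ray.init(runtime_env={"pip": ["emoji==2.8.0"]})

@ray.remote
def f():
    import emoji
    return emoji.emojize("Ray :thumbs_up:")

print(ray.get(f.remote()))
```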
2 changes: 1 addition & 1 deletion doc/source/cluster/kubernetes/user-guides/tls.md
@@ -79,7 +79,7 @@ openssl x509 -in ca.crt -noout -text
# Method 1: Use `cat $FILENAME | base64` to encode `ca.key` and `ca.crt`.
# Then, paste the encoding strings to the Kubernetes Secret in `ray-cluster.tls.yaml`.

-# Method 2: Use kubectl to encode the certifcate as Kubernetes Secret automatically.
+# Method 2: Use kubectl to encode the certificate as Kubernetes Secret automatically.
# (Note: You should comment out the Kubernetes Secret in `ray-cluster.tls.yaml`.)
kubectl create secret generic ca-tls --from-file=ca.key --from-file=ca.crt
```
@@ -144,7 +144,7 @@ Alternative Connection Approach:

Instead of port-forwarding, you can directly connect to the Ray Client server on the head node if your computer
has network access to the head node. This is an option if your computer is on the same network as the Cluster or
-if your computer can connct to the Cluster with a VPN.
+if your computer can connect to the Cluster with a VPN.

If your computer does not have direct access, you can modify the network configuration to grant access. On `EC2 <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html>`_,
this can be done by modifying the security group to allow inbound access from your local IP address to the Ray Client server port (``10001`` by default).
2 changes: 1 addition & 1 deletion doc/source/cluster/vms/examples/ml-example.md
@@ -88,7 +88,7 @@ Use the following tools to observe its progress.

To follow the job's logs, use the command printed by the above submission script.
```shell
-# Subsitute the Ray Job's submission id.
+# Substitute the Ray Job's submission id.
ray job logs 'raysubmit_xxxxxxxxxxxxxxxx' --address="http://localhost:8265" --follow
```
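A hedged sketch of fetching the same logs from Python with the Job Submission SDK:

```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://localhost:8265")
# Substitute the real submission id printed by the submission script.
print(client.get_job_logs("raysubmit_xxxxxxxxxxxxxxxx"))
```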

2 changes: 1 addition & 1 deletion doc/source/custom_directives.py
@@ -563,7 +563,7 @@ def from_path(cls, path: Union[pathlib.Path, str]) -> "Library":


class Example:
"""Class containing metadata about an example to be shown in the exmaple gallery."""
"""Class containing metadata about an example to be shown in the example gallery."""

def __init__(
self, config: Dict[str, str], library: Library, config_dir: pathlib.Path
4 changes: 2 additions & 2 deletions doc/source/data/loading-data.rst
@@ -218,7 +218,7 @@ To read formats other than Parquet, see the :ref:`Input/Output reference <input-

Ray Data relies on PyArrow for authentication with Amazon S3. For more on how to configure
your credentials to be compatible with PyArrow, see their
-`S3 Filesytem docs <https://arrow.apache.org/docs/python/filesystems.html#s3>`_.
+`S3 Filesystem docs <https://arrow.apache.org/docs/python/filesystems.html#s3>`_.

.. tab-item:: GCS

@@ -256,7 +256,7 @@ To read formats other than Parquet, see the :ref:`Input/Output reference <input-

Ray Data relies on PyArrow for authentication with Google Cloud Storage. For more on how
to configure your credentials to be compatible with PyArrow, see their
-`GCS Filesytem docs <https://arrow.apache.org/docs/python/filesystems.html#google-cloud-storage-file-system>`_.
+`GCS Filesystem docs <https://arrow.apache.org/docs/python/filesystems.html#google-cloud-storage-file-system>`_.

.. tab-item:: ABS

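A minimal sketch of the authenticated-read pattern these hunks describe (bucket, prefix, and region are assumptions):

```python
import pyarrow.fs
import ray

# PyArrow picks up AWS credentials from the standard environment/config chain.
filesystem = pyarrow.fs.S3FileSystem(region="us-west-2")
ds = ray.data.read_parquet("s3://my-bucket/my-folder", filesystem=filesystem)
print(ds.schema())
```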
8 changes: 4 additions & 4 deletions doc/source/data/saving-data.rst
@@ -66,7 +66,7 @@ To write data to formats other than Parquet, read the :ref:`Input/Output referen

Ray Data relies on PyArrow to authenticate with Amazon S3. For more on how to configure
your credentials to be compatible with PyArrow, see their
-`S3 Filesytem docs <https://arrow.apache.org/docs/python/filesystems.html#s3>`_.
+`S3 Filesystem docs <https://arrow.apache.org/docs/python/filesystems.html#s3>`_.

.. tab-item:: GCS

@@ -89,9 +89,9 @@ To write data to formats other than Parquet, read the :ref:`Input/Output referen
filesystem = gcsfs.GCSFileSystem(project="my-google-project")
ds.write_parquet("gcs://my-bucket/my-folder", filesystem=filesystem)

-Ray Data relies on PyArrow for authenticaion with Google Cloud Storage. For more on how
+Ray Data relies on PyArrow for authentication with Google Cloud Storage. For more on how
to configure your credentials to be compatible with PyArrow, see their
-`GCS Filesytem docs <https://arrow.apache.org/docs/python/filesystems.html#google-cloud-storage-file-system>`_.
+`GCS Filesystem docs <https://arrow.apache.org/docs/python/filesystems.html#google-cloud-storage-file-system>`_.

.. tab-item:: ABS

@@ -114,7 +114,7 @@ To write data to formats other than Parquet, read the :ref:`Input/Output referen
filesystem = adlfs.AzureBlobFileSystem(account_name="azureopendatastorage")
ds.write_parquet("az://my-bucket/my-folder", filesystem=filesystem)

-Ray Data relies on PyArrow for authenticaion with Azure Blob Storage. For more on how
+Ray Data relies on PyArrow for authentication with Azure Blob Storage. For more on how
to configure your credentials to be compatible with PyArrow, see their
`fsspec-compatible filesystems docs <https://arrow.apache.org/docs/python/filesystems.html#using-fsspec-compatible-filesystems-with-arrow>`_.

@@ -41,7 +41,7 @@ def out_of_band_serialization_ray_cloudpickle():
# By default, it's allowed to serialize ray.ObjectRef using
# ray.cloudpickle.
ray.get(out_of_band_serialization_ray_cloudpickle.options().remote())
-# you can see objects are stil pinned although it's GC'ed and not used anymore.
+# you can see objects are still pinned although it's GC'ed and not used anymore.
print(memory_summary())

print(
@@ -16,8 +16,8 @@ def double(number):
start_time = time.time()
serial_doubled_numbers = [double(number) for number in numbers]
end_time = time.time()
print(f"Ordinary funciton call takes {end_time - start_time} seconds")
# Ordinary funciton call takes 0.16506004333496094 seconds
print(f"Ordinary function call takes {end_time - start_time} seconds")
# Ordinary function call takes 0.16506004333496094 seconds


@ray.remote
2 changes: 1 addition & 1 deletion doc/source/ray-core/doc_code/namespaces.py
@@ -34,7 +34,7 @@ class Actor:
except ValueError:
pass

-# This succceeds because the name "orange" is unused in this namespace.
+# This succeeds because the name "orange" is unused in this namespace.
Actor.options(name="orange", lifetime="detached").remote()
Actor.options(name="watermelon", lifetime="detached").remote()

2 changes: 1 addition & 1 deletion doc/source/ray-core/examples/lm/ray_train.py
@@ -144,7 +144,7 @@ def add_ray_args(parser):
type=lambda uf: options.eval_str_list(uf, type=int),
help="fix the actual batch size (max_sentences * update_freq "
"* n_GPUs) to be the fixed input values by adjusting update_freq "
"accroding to actual n_GPUs; the batch size is fixed to B_i for "
"according to actual n_GPUs; the batch size is fixed to B_i for "
"epoch i; all epochs >N are fixed to B_N",
)
return group
2 changes: 1 addition & 1 deletion doc/source/ray-core/namespaces.rst
@@ -70,7 +70,7 @@ Named actors are only accessible within their namespaces.
Ray.init();
// This fails because "orange" was defined in the "colors" namespace.
Ray.getActor("orange").isPresent(); // return false
-// This succceeds because the name "orange" is unused in this namespace.
+// This succeeds because the name "orange" is unused in this namespace.
Ray.actor(Actor::new).setName("orange").remote();
Ray.actor(Actor::new).setName("watermelon").remote();
} finally {
2 changes: 1 addition & 1 deletion doc/source/ray-core/patterns/limit-running-tasks.rst
@@ -6,7 +6,7 @@ Pattern: Using resources to limit the number of concurrently running tasks
In this pattern, we use :ref:`resources <resource-requirements>` to limit the number of concurrently running tasks.

By default, Ray tasks require 1 CPU each and Ray actors require 0 CPU each, so the scheduler limits task concurrency to the available CPUs and actor concurrency to infinite.
-Tasks that use more than 1 CPU (e.g., via mutlithreading) may experience slowdown due to interference from concurrent ones, but otherwise are safe to run.
+Tasks that use more than 1 CPU (e.g., via multithreading) may experience slowdown due to interference from concurrent ones, but otherwise are safe to run.

However, tasks or actors that use more than their proportionate share of memory may overload a node and cause issues like OOM.
If that is the case, we can reduce the number of concurrently running tasks or actors on each node by increasing the amount of resources requested by them.
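A minimal sketch of this pattern (the task body and sizes are assumptions):

```python
import ray

ray.init(num_cpus=4)

# Each task claims 2 CPUs, so at most 4 / 2 = 2 run concurrently,
# halving peak memory pressure on the node.
@ray.remote(num_cpus=2)
def memory_hungry_task(i):
    return sum(range(10_000_000))  # stand-in for memory-heavy work

print(ray.get([memory_hungry_task.remote(i) for i in range(8)]))
```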
2 changes: 1 addition & 1 deletion doc/source/ray-core/ray-dag.rst
@@ -47,7 +47,7 @@ executed as root node while iterating, or used as input args or kwargs of other
functions to form more complex DAGs.

Any IR node can be executed directly ``dag_node.execute()`` that acts as root
-of the DAG, where all other non-reachable nodes from the root will be igored.
+of the DAG, where all other non-reachable nodes from the root will be ignored.

.. tab-set::

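A minimal sketch of building and executing such a DAG (the function and values are assumptions):

```python
import ray

@ray.remote
def add(a, b):
    return a + b

# bind() builds lazy IR nodes; execute() runs the DAG from this root node.
dag = add.bind(1, add.bind(2, 3))
assert ray.get(dag.execute()) == 6
```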
2 changes: 1 addition & 1 deletion doc/source/ray-core/scheduling/ray-oom-prevention.rst
@@ -84,7 +84,7 @@ If, at this point, the node still runs out of memory, the process will repeat:

Let's create an application oom.py that runs a single task that requires more memory than what is available. It is set to infinite retry by setting ``max_retries`` to -1.

-The worker killer policy sees that it is the last task of the caller, and will fail the workload when it kills the task as it is the last one for the caller, even when the task is set to retry forver.
+The worker killer policy sees that it is the last task of the caller, and will fail the workload when it kills the task as it is the last one for the caller, even when the task is set to retry forever.

.. literalinclude:: ../doc_code/ray_oom_prevention.py
:language: python
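A hedged sketch of what an oom.py along these lines could look like (allocation sizes are assumptions):

```python
import ray

ray.init()

@ray.remote(max_retries=-1)  # retry forever, as described above
def allocate_memory():
    chunks = []
    while True:  # grow until the node runs out of memory
        chunks.append(bytearray(100 * 1024 * 1024))

try:
    ray.get(allocate_memory.remote())
except ray.exceptions.OutOfMemoryError:
    print("Last task of the caller was killed; workload fails despite max_retries=-1.")
```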
2 changes: 1 addition & 1 deletion doc/source/ray-more-libs/ray-collective.rst
@@ -198,7 +198,7 @@ Note that the current set of collective communication API are imperative, and ex
* All the collective APIs are synchronous blocking calls
* Since each API only specifies a part of the collective communication, the API is expected to be called by each participating process of the (pre-declared) collective group.
Upon all the processes have made the call and rendezvous with each other, the collective communication happens and proceeds.
-* The APIs are imperative and the communication happends out-of-band --- they need to be used inside the collective process (actor/task) code.
+* The APIs are imperative and the communication happens out-of-band --- they need to be used inside the collective process (actor/task) code.

An example of using ``ray.util.collective.allreduce`` is below:
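A sketch of that pattern (world size, tensor shape, and backend are assumptions):

```python
import numpy as np
import ray
import ray.util.collective as col

@ray.remote
class Worker:
    def __init__(self, rank):
        self.buf = np.ones(4, dtype=np.float32) * (rank + 1)

    def setup(self, world_size, rank):
        # Blocks until all group members rendezvous.
        col.init_collective_group(world_size, rank, backend="gloo", group_name="default")

    def compute(self):
        col.allreduce(self.buf, group_name="default")  # in-place sum across the group
        return self.buf

workers = [Worker.remote(rank) for rank in range(2)]
ray.get([w.setup.remote(2, rank) for rank, w in enumerate(workers)])
print(ray.get([w.compute.remote() for w in workers]))  # both workers hold [3. 3. 3. 3.]
```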

2 changes: 1 addition & 1 deletion doc/source/ray-more-libs/raydp.rst
@@ -5,7 +5,7 @@ Using Spark on Ray (RayDP)
**************************

RayDP combines your Spark and Ray clusters, making it easy to do large scale
-data processing using the PySpark API and seemlessly use that data to train
+data processing using the PySpark API and seamlessly use that data to train
your models using TensorFlow and PyTorch.

For more information and examples, see the RayDP Github page:
2 changes: 1 addition & 1 deletion doc/source/ray-observability/getting-started.rst
@@ -362,7 +362,7 @@ For Actors, you can also see the system logs for the corresponding Worker proces

.. note::

-Logs of aysnchronous Actor Tasks or threaded Actor Tasks (concurrency>1) are only available as part of the Actor logs. Follow the instruction in the Dashboard to view the Actor logs.
+Logs of asynchronous Actor Tasks or threaded Actor Tasks (concurrency>1) are only available as part of the Actor logs. Follow the instruction in the Dashboard to view the Actor logs.

**Task and Actor errors**

2 changes: 1 addition & 1 deletion doc/source/ray-observability/user-guides/cli-sdk.rst
@@ -737,7 +737,7 @@ through the APIs because they are already garbage collected.
API Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~

-- For the CLI Reference, see :ref:`State CLI Refernece <state-api-cli-ref>`.
+- For the CLI Reference, see :ref:`State CLI Reference <state-api-cli-ref>`.
- For the SDK Reference, see :ref:`State API Reference <state-api-ref>`.
- For the Log CLI Reference, see :ref:`Log CLI Reference <ray-logs-api-cli-ref>`.

4 changes: 2 additions & 2 deletions doc/source/ray-observability/user-guides/configure-logging.md
@@ -239,7 +239,7 @@ ray_serve_logger = logging.getLogger("ray.serve")
ray_data_logger.setLevel(logging.WARNING)

# Other loggers can be modified similarly.
-# Here's how to add an aditional file handler for Ray Tune:
+# Here's how to add an additional file handler for Ray Tune:
ray_tune_logger.addHandler(logging.FileHandler("extra_ray_tune_log.log"))
```

@@ -589,4 +589,4 @@ The max size of a log file, including its backup, is `RAY_ROTATION_MAX_BYTES * R

## Log persistence

-To process and export logs to external stroage or management systems, view {ref}`log persistence on Kubernetes <persist-kuberay-custom-resource-logs>` see {ref}`log persistence on VMs <vm-logging>` for more details.
+To process and export logs to external storage or management systems, view {ref}`log persistence on Kubernetes <persist-kuberay-custom-resource-logs>` see {ref}`log persistence on VMs <vm-logging>` for more details.