
Commit e619a07

Fix spelling in json, md, rst files (#1146)
1 parent 1638afb commit e619a07

8 files changed (+16 -16)


README-development.md (+2 -2)
@@ -4,7 +4,7 @@
 The Oracle Accelerated Data Science (ADS) SDK used by data scientists and analysts for
 data exploration and experimental machine learning to democratize machine learning and
 analytics by providing easy-to-use,
-performant, and user friendly tools that
+performant, and user-friendly tools that
 brings together the best of data science practices.

 The ADS SDK helps you connect to different data sources, perform exploratory data analysis,
@@ -176,7 +176,7 @@ pip install -r test-requirements.txt
 ```

 ### Step 2: Create local .env files
-Running the local JuypterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.
+Running the local JupyterLab server requires setting OCI authentication, proxy, and OCI namespace parameters. Adapt this .env file with your specific OCI profile and OCIDs to set these variables.

 ```
 CONDA_BUCKET_NS="your_conda_bucket"

docs/source/user_guide/configuration/configuration.rst (+3 -3)
@@ -296,7 +296,7 @@ encryption keys.

 Master encryption keys can be generated internally by the Vault service
 or imported to the service from an external source. Once a master
-encryption key has been created, the Oracle Cloud Infrastruture API can
+encryption key has been created, the Oracle Cloud Infrastructure API can
 be used to generate data encryption keys that the Vault service returns
 to you. by default, a wrapping key is included with each vault. A
 wrapping key is a 4096-bit asymmetric encryption key pair based on the
@@ -673,7 +673,7 @@ prints it. This shows that the password was actually updated.
 wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data

 # The secret OCID does not change.
-print("Orginal Secret OCID: {}".format(secret_id))
+print("Original Secret OCID: {}".format(secret_id))
 print("Updated Secret OCID: {}".format(secret_update.id))

 ### Read a secret's value.
@@ -685,7 +685,7 @@ prints it. This shows that the password was actually updated.

 .. parsed-literal::

-Orginal Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
+Original Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
 Updated Secret OCID: ocid1.vaultsecret.oc1.iad.amaaaaaav66vvnia2bmkbroin34eu2ghmubvmrtjdgo4yr6daewakacwuk4q
 {'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}
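For orientation, the corrected `print` lines come from the docs' secret-update walkthrough. A minimal sketch of that flow, assuming the standard `oci` Python SDK; the client setup and `secret_id` value below are placeholders, not part of this commit:

```
# Minimal sketch of the secret-update flow, assuming the standard `oci` SDK.
# `secret_id` is a placeholder OCID.
import base64
import json

import oci

config = oci.config.from_file()  # default ~/.oci/config profile
vaults_client = oci.vault.VaultsClient(config)
composite = oci.vault.VaultsClientCompositeOperations(vaults_client)

secret_id = "ocid1.vaultsecret..<unique_ID>"  # placeholder
new_value = {"database": "datamart", "username": "admin", "password": "UpdatedPassword"}

# Secret contents are passed base64-encoded.
content = base64.b64encode(json.dumps(new_value).encode("utf-8")).decode("ascii")

secret_update = composite.update_secret_and_wait_for_state(
    secret_id,
    oci.vault.models.UpdateSecretDetails(
        secret_content=oci.vault.models.Base64SecretContentDetails(
            content_type=oci.vault.models.Base64SecretContentDetails.CONTENT_TYPE_BASE64,
            content=content,
        )
    ),
    wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE],
).data

# Updating creates a new secret version; the secret OCID does not change.
print("Original Secret OCID: {}".format(secret_id))
print("Updated Secret OCID: {}".format(secret_update.id))
```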

docs/source/user_guide/configuration/vault.rst (+2 -2)
@@ -239,7 +239,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.
 wait_for_states=[oci.vault.models.Secret.LIFECYCLE_STATE_ACTIVE]).data

 # The secret OCID does not change.
-print("Orginal Secret OCID: {}".format(secret_id))
+print("Original Secret OCID: {}".format(secret_id))
 print("Updated Secret OCID: {}".format(secret_update.id))

 ### Read a secret's value.
@@ -251,7 +251,7 @@ also retrieves the updated secret, converts it into a dictionary, and prints it.

 .. parsed-literal::

-Orginal Secret OCID: ocid1.vaultsecret..<unique_ID>
+Original Secret OCID: ocid1.vaultsecret..<unique_ID>
 Updated Secret OCID: ocid1.vaultsecret..<unique_ID>
 {'database': 'datamart', 'username': 'admin', 'password': 'UpdatedPassword'}

docs/source/user_guide/data_flow/dataflow.rst (+1 -1)
@@ -63,7 +63,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
 * ``pyspark_file_path``: The local path to your ``PySpark`` script.
 * ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.

-ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
+ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:

 * ``compartment_id``: The OCID of the compartment to create a Data Flow application. If it's not provided, the same compartment as your dataflow object is used.
 * ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.

docs/source/user_guide/data_flow/legacy_dataflow.rst (+1 -1)
@@ -68,7 +68,7 @@ In the preparation stage, you prepare the configuration object necessary to crea
 * ``pyspark_file_path``: The local path to your ``PySpark`` script.
 * ``script_bucket``: The bucket used to read/write the ``PySpark`` script in Object Storage.

-ADS checks that the bucket exists, and that you can write to it from your notebook sesssion. Optionally, you can change values for these parameters:
+ADS checks that the bucket exists, and that you can write to it from your notebook session. Optionally, you can change values for these parameters:

 * ``compartment_id``: The OCID of the compartment to create a application. If it's not provided, the same compartment as your dataflow object is used.
 * ``driver_shape``: The driver shape used to create the application. The default value is ``"VM.Standard2.4"``.
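Both Data Flow pages fix the same sentence about the bucket check in the preparation stage. As a rough illustration of how the listed parameters are used, a sketch assuming the legacy `ads.dataflow` API these pages document; the names and values below are placeholders, not taken from the diff:

```
# Illustrative sketch only: assumes the legacy ads.dataflow API described in
# these docs pages. Bucket names and paths are placeholders.
from ads.dataflow.dataflow import DataFlow

data_flow = DataFlow()

# prepare_app builds the configuration object; ADS checks that script_bucket
# exists and that it is writable from the notebook session.
app_config = data_flow.prepare_app(
    display_name="sample-app",                    # placeholder
    script_bucket="your-script-bucket",           # read/write bucket for the script
    pyspark_file_path="local/path/to/script.py",  # local PySpark script
    # Optional overrides described above:
    # compartment_id="ocid1.compartment..<unique_ID>",
    # driver_shape="VM.Standard2.4",
)

app = data_flow.create_app(app_config)
```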

docs/source/user_guide/operators/anomaly_detection_operator/use_cases.rst (+3 -3)
@@ -14,7 +14,7 @@ As a low-code extensible framework, operators enable a wide range of use cases.
 **Which Model is Right for You?**

 * Autots is a very comprehensive framework for time series data, winning the M6 benchmark. Parameters can be sent directly to AutoTS' AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
-* AutoMLX is a propreitary modeling framework developed by Oracle's Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
+* AutoMLX is a proprietary modeling framework developed by Oracle's Labs team and distributed through OCI Data Science. Parameters can be sent directly to AutoMLX's AnomalyDetector class through the ``model_kwargs`` section of the yaml file.
 * Together these 2 frameworks train and tune more than 25 models, and deliver the est results.


@@ -39,9 +39,9 @@ As a low-code extensible framework, operators enable a wide range of use cases.

 **Feature Engineering**

-* The Operator will perform most feature engineering on your behalf, such as infering holidays, day of week,
+* The Operator will perform most feature engineering on your behalf, such as inferring holidays, day of week,


 **Latency**

-* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediciton progressively faster. Please consult an OCI Data Science Platform expert for more specifc advice.
+* The Operator is effectively a container distributed through the OCI Data Science platform. When deployed through Jobs or Model Deployment, customers can scale up the compute shape, memory size, and load balancer to make the prediction progressively faster. Please consult an OCI Data Science Platform expert for more specific advice.

docs/source/user_guide/quick_start/quick_start.rst (+1 -1)
@@ -10,5 +10,5 @@ Quick Start
 * :doc:`Evaluate Trained Models<../model_training/model_evaluation/quick_start>`
 * :doc:`Register, Manage, and Deploy Models<../model_registration/quick_start>`
 * :doc:`Store and Retrieve your data source credentials<../secrets/quick_start>`
-* :doc:`Conect to existing OCI Big Data Service<../big_data_service/quick_start>`
+* :doc:`Connect to existing OCI Big Data Service<../big_data_service/quick_start>`

tests/unitary/with_extras/model/index.json (+3 -3)
@@ -289,7 +289,7 @@
 {
 "arch_type": "CPU",
 "create_date": "Sat, Feb 12, 2022, 05:04:46 UTC",
-"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
+"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the notebook example **getting-started.ipynb** in the **Notebook Examples launcher button**.\n",
 "libraries": [
 "onnx (v1.10.2)",
 "onnxconverter-common (v1.9.0)",
@@ -315,7 +315,7 @@
 {
 "arch_type": "CPU",
 "create_date": "Mon, Jun 06, 2022, 20:51:19 UTC",
-"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
+"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
 "libraries": [
 "onnx (v1.10.2)",
 "onnxconverter-common (v1.9.0)",
@@ -341,7 +341,7 @@
 {
 "arch_type": "CPU",
 "create_date": "Mon, Jun 06, 2022, 20:52:30 UTC",
-"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastruture Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
+"description": "This environment is designed to provided to test and execute your ONNX model artifacts. ONNX is an open source, open model format which allows you to save a model from different machine learning (ML) libraries into a single, portable format that is independent of the training library. ONNX models can be deployed through Oracle Cloud Infrastructure Data Science Model Deployment service. Use this conda environment to convert models from most ML libraries into ONNX format. Then use the ONNX runtime to perform inferencing. Review the processing steps that your model makes by having ONNX generate a graph of the model workflow.\nTo get started with the ONNX environment, review the getting-started notebook.\n",
 "libraries": [
 "onnx (v1.10.2)",
 "onnxconverter-common (v1.9.0)",
