2 changes: 1 addition & 1 deletion providers/airbyte/docs/index.rst
@@ -55,7 +55,7 @@
:maxdepth: 1
:caption: Resources

Example DAGs <https://github.com/apache/airflow/tree/providers-airbyte/|version|/providers/airbyte/tests/system/airbyte>
Example Dags <https://github.com/apache/airflow/tree/providers-airbyte/|version|/providers/airbyte/tests/system/airbyte>
PyPI Repository <https://pypi.org/project/apache-airflow-providers-airbyte/>
Installing from sources <installing-providers-from-sources>

2 changes: 1 addition & 1 deletion providers/alibaba/docs/index.rst
@@ -56,7 +56,7 @@
:maxdepth: 1
:caption: Resources

Example DAGs <https://github.com/apache/airflow/tree/providers-alibaba/|version|/alibaba/tests/system/alibaba>
Example Dags <https://github.com/apache/airflow/tree/providers-alibaba/|version|/alibaba/tests/system/alibaba>
PyPI Repository <https://pypi.org/project/apache-airflow-providers-alibaba/>
Installing from sources <installing-providers-from-sources>

2 changes: 1 addition & 1 deletion providers/alibaba/docs/operators/analyticdb_spark.rst
@@ -32,7 +32,7 @@ Develop Spark batch applications
Purpose
"""""""

This example dag uses ``AnalyticDBSparkBatchOperator`` to submit Spark Pi and Spark Logistic regression applications.
This example Dag uses ``AnalyticDBSparkBatchOperator`` to submit Spark Pi and Spark Logistic regression applications.
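
A minimal sketch of such a Dag, using the classic ``DAG`` import and placeholder cluster and resource group values (the parameter names below follow the provider's example and should be verified against the operator reference), might look like this:

.. code-block:: python

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.alibaba.cloud.operators.analyticdb_spark import AnalyticDBSparkBatchOperator

    with DAG(
        dag_id="example_analyticdb_spark_sketch",
        schedule=None,
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ):
        # Submit the stock Spark Pi example application to an AnalyticDB Spark resource group.
        spark_pi = AnalyticDBSparkBatchOperator(
            task_id="spark_pi",
            file="local:///tmp/spark-examples.jar",  # placeholder path to the examples JAR
            class_name="org.apache.spark.examples.SparkPi",
            cluster_id="your-adb-cluster-id",  # placeholder AnalyticDB cluster ID
            rg_name="your-spark-resource-group",  # placeholder resource group name
        )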

Defining tasks
""""""""""""""
2 changes: 1 addition & 1 deletion providers/alibaba/docs/operators/oss.rst
@@ -37,7 +37,7 @@ Create and Delete Alibaba Cloud OSS Buckets
Purpose
"""""""

This example dag uses ``OSSCreateBucketOperator`` and ``OSSDeleteBucketOperator`` to create a
This example Dag uses ``OSSCreateBucketOperator`` and ``OSSDeleteBucketOperator`` to create a
new OSS bucket with a given bucket name then delete it.
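
A minimal sketch, with placeholder region and bucket values (verify the parameter names against the operator reference), might look like this:

.. code-block:: python

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.alibaba.cloud.operators.oss import (
        OSSCreateBucketOperator,
        OSSDeleteBucketOperator,
    )

    with DAG(
        dag_id="example_oss_bucket_sketch",
        schedule=None,
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ):
        create_bucket = OSSCreateBucketOperator(
            task_id="create_bucket",
            region="cn-hangzhou",  # placeholder region
            bucket_name="airflow-oss-example-bucket",  # placeholder bucket name
        )
        delete_bucket = OSSDeleteBucketOperator(
            task_id="delete_bucket",
            region="cn-hangzhou",
            bucket_name="airflow-oss-example-bucket",
        )
        # Create the bucket first, then remove it.
        create_bucket >> delete_bucket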

Defining tasks
10 changes: 5 additions & 5 deletions providers/amazon/docs/auth-manager/manage/index.rst
@@ -260,10 +260,10 @@ This is equivalent to the :doc:`Op role in Flask AppBuilder <apache-airflow-prov
resource
);

Give DAG specific permissions to a group of users
Give Dag specific permissions to a group of users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The policy below gives all DAG related permissions of the DAG ``test`` to a group of users.
The policy below gives all Dag related permissions of the Dag ``test`` to a group of users.

::

@@ -273,7 +273,7 @@ The policy below gives all DAG related permissions of the DAG ``test`` to a grou
resource == Airflow::Dag::"test"
);

The policy below gives all DAG related permissions of the DAGs ``financial-1`` and ``financial-2`` to a group of users.
The policy below gives all Dag related permissions of the Dags ``financial-1`` and ``financial-2`` to a group of users.

::

@@ -283,7 +283,7 @@ The policy below gives all DAG related permissions of the DAGs ``financial-1`` a
resource in [Airflow::Dag::"financial-1", Airflow::Dag::"financial-2"]
);

The policy below gives access to logs of the DAG ``test`` to a group of users.
The policy below gives access to logs of the Dag ``test`` to a group of users.

::

@@ -303,7 +303,7 @@ For example, if both one **permit** and one **forbid** policies match the reques
This can be useful if, for example, you want to restrict access to a specific user who belongs to a group that is
granted all permissions.

The policy below removes access of DAGs ``secret-dag-1`` and ``secret-dag-2`` from a specific user.
The policy below removes access of Dags ``secret-dag-1`` and ``secret-dag-2`` from a specific user.

::

4 changes: 2 additions & 2 deletions providers/amazon/docs/example-dags.rst
@@ -15,9 +15,9 @@
specific language governing permissions and limitations
under the License.

Example DAGs
Example Dags
============

You can learn how to use Amazon AWS integrations by analyzing the source code of the example DAGs:
You can learn how to use Amazon AWS integrations by analyzing the source code of the example Dags:

* `Amazon AWS <https://github.com/apache/airflow/tree/providers-amazon/|version|/providers/amazon/tests/system/amazon/aws>`__
4 changes: 2 additions & 2 deletions providers/amazon/docs/executors/batch-executor.rst
@@ -113,8 +113,8 @@ used by the Batch Executor, the appropriate policy needs to be attached to the E
Additionally, the role also needs to have at least the ``CloudWatchLogsFullAccess``
(or ``CloudWatchLogsFullAccessV2``) policies. The Job Role is the role that is
used by the containers to make AWS API requests. This role needs to have
permissions based on the tasks that are described in the DAG being run.
If you are loading DAGs via an S3 bucket, this role needs to have
permissions based on the tasks that are described in the Dag being run.
If you are loading Dags via an S3 bucket, this role needs to have
permission to read the S3 bucket.

To create a new Job Role or Execution Role, follow the steps
4 changes: 2 additions & 2 deletions providers/amazon/docs/executors/ecs-executor.rst
@@ -134,8 +134,8 @@ ECS Executor, this role needs to have at least the
``AmazonECSTaskExecutionRolePolicy`` as well as the
``CloudWatchLogsFullAccess`` (or ``CloudWatchLogsFullAccessV2``) policies. The Task Role is the role that is
used by the containers to make AWS API requests. This role needs to have
permissions based on the tasks that are described in the DAG being run.
If you are loading DAGs via an S3 bucket, this role needs to have
permissions based on the tasks that are described in the Dag being run.
If you are loading Dags via an S3 bucket, this role needs to have
permission to read the S3 bucket.

To create a new Task Role or Task Execution Role, follow the steps
18 changes: 9 additions & 9 deletions providers/amazon/docs/executors/general.rst
@@ -150,12 +150,12 @@ have python 3.10 installed.

.. BEGIN LOADING_DAGS_OVERVIEW

Loading DAGs
Loading Dags
~~~~~~~~~~~~

There are many ways to load DAGs on a container used by |executorName|. This Dockerfile
There are many ways to load Dags on a container used by |executorName|. This Dockerfile
is preconfigured with two possible ways: copying from a local folder, or
downloading from an S3 bucket. Other methods of loading DAGs are
downloading from an S3 bucket. Other methods of loading Dags are
possible as well.

.. END LOADING_DAGS_OVERVIEW
@@ -165,11 +165,11 @@ possible as well.
From S3 Bucket
^^^^^^^^^^^^^^

To load DAGs from an S3 bucket, uncomment the entrypoint line in the
Dockerfile to synchronize the DAGs from the specified S3 bucket to the
To load Dags from an S3 bucket, uncomment the entrypoint line in the
Dockerfile to synchronize the Dags from the specified S3 bucket to the
``/opt/airflow/dags`` directory inside the container. You can optionally
provide ``container_dag_path`` as a build argument if you want to store
the DAGs in a directory other than ``/opt/airflow/dags``.
the Dags in a directory other than ``/opt/airflow/dags``.

Add ``--build-arg s3_uri=YOUR_S3_URI`` in the docker build command.
Replace ``YOUR_S3_URI`` with the URI of your S3 bucket. Make sure you
@@ -194,18 +194,18 @@ build arguments.
From Local Folder
^^^^^^^^^^^^^^^^^

To load DAGs from a local folder, place your DAG files in a folder
To load Dags from a local folder, place your Dag files in a folder
within the docker build context on your host machine, and provide the
location of the folder using the ``host_dag_path`` build argument. By
default, the DAGs will be copied to ``/opt/airflow/dags``, but this can
default, the Dags will be copied to ``/opt/airflow/dags``, but this can
be changed by passing the ``container_dag_path`` build-time argument
during the Docker build process:

.. code-block:: bash

docker build -t my-airflow-image --build-arg host_dag_path=./dags_on_host --build-arg container_dag_path=/path/on/container .

If choosing to load DAGs onto a different path than
If choosing to load Dags onto a different path than
``/opt/airflow/dags``, then the new path will need to be updated in the
Airflow config.

6 changes: 3 additions & 3 deletions providers/amazon/docs/executors/lambda-executor.rst
@@ -122,7 +122,7 @@ provider package.
The most secure method is to use IAM roles. When creating a Lambda Function
Definition, you are able to select an execution role. This role needs
permissions to publish messages to the SQS queues and to write to CloudWatchLogs
or S3 if using AWS remote logging and/or using S3 to synchronize dags
or S3 if using AWS remote logging and/or using S3 to synchronize Dags
(e.g. ``CloudWatchLogsFullAccess`` or ``CloudWatchLogsFullAccessV2``).
The AWS credentials used on the Scheduler need permissions to
describe and invoke Lambda functions as well as to describe and read/delete
@@ -170,12 +170,12 @@ From S3 Bucket
^^^^^^^^^^^^^^

Dags can be loaded from S3 when using the provided example app.py, which
contains logic to synchronize the DAGs from S3 to the local filesystem of
contains logic to synchronize the Dags from S3 to the local filesystem of
the Lambda function (see the app.py code |appHandlerLink|).

To load Dags from an S3 bucket add ``--build-arg s3_uri=YOUR_S3_URI`` in
the docker build command. Replace ``YOUR_S3_URI`` with the URI of your S3
bucket/path containing your dags. Make sure you have the appropriate
bucket/path containing your Dags. Make sure you have the appropriate
permissions to read from the bucket.

.. code-block:: bash
2 changes: 1 addition & 1 deletion providers/amazon/docs/index.rst
@@ -66,7 +66,7 @@
:maxdepth: 1
:caption: Resources

Example DAGs <example-dags>
Example Dags <example-dags>
PyPI Repository <https://pypi.org/project/apache-airflow-providers-amazon/>
Installing from sources <installing-providers-from-sources>

6 changes: 3 additions & 3 deletions providers/amazon/docs/logging/s3-task-handler.rst
@@ -125,7 +125,7 @@ With the above configurations, Webserver and Worker Pods can access Amazon S3 bu

- Using Airflow Web UI

The final step to create connections under Airflow UI before executing the DAGs.
The final step to create connections under Airflow UI before executing the Dags.

* Log in to the Airflow Web UI with ``admin`` credentials and navigate to ``Admin -> Connections``
* Create a connection for ``Amazon Web Services`` and select the options (Connection ID and Connection Type) as shown in the image.
@@ -141,6 +141,6 @@ With the above configurations, Webserver and Worker Pods can access Amazon S3 bu

Step4: Verify the logs
~~~~~~~~~~~~~~~~~~~~~~
* Execute example DAGs
* Execute example Dags
* Verify the logs in S3 bucket
* Verify the logs from Airflow UI from DAGs log
* Verify the logs from Airflow UI from Dags log
providers/amazon/docs/notifications/chime.rst
@@ -21,7 +21,7 @@ How-to Guide for Chime notifications
Introduction
------------
Chime notifier (:class:`airflow.providers.amazon.aws.notifications.chime.ChimeNotifier`) allows users to send
messages to a Chime chat room setup via a webhook using the various ``on_*_callbacks`` at both the DAG level and Task level
messages to a Chime chat room setup via a webhook using the various ``on_*_callbacks`` at both the Dag level and Task level


Example Code:
@@ -30,16 +30,16 @@ Example Code:
.. code-block:: python

from datetime import datetime
from airflow import DAG
from airflow import Dag
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.amazon.aws.notifications.chime import send_chime_notification

with DAG(
with Dag(
dag_id="mydag",
schedule="@once",
start_date=datetime(2023, 6, 27),
on_success_callback=[
send_chime_notification(chime_conn_id="my_chime_conn", message="The DAG {{ dag.dag_id }} succeeded")
send_chime_notification(chime_conn_id="my_chime_conn", message="The Dag {{ dag.dag_id }} succeeded")
],
catchup=False,
):
8 changes: 4 additions & 4 deletions providers/amazon/docs/notifications/sns.rst
@@ -23,7 +23,7 @@ How-to Guide for Amazon Simple Notification Service (Amazon SNS) notifications
Introduction
------------
`Amazon SNS <https://aws.amazon.com/sns/>`__ notifier :class:`~airflow.providers.amazon.aws.notifications.sns.SnsNotifier`
allows users to push messages to a SNS Topic using the various ``on_*_callbacks`` at both the DAG level and Task level.
allows users to push messages to a SNS Topic using the various ``on_*_callbacks`` at both the Dag level and Task level.


Example Code:
@@ -32,14 +32,14 @@ Example Code:
.. code-block:: python

from datetime import datetime
from airflow import DAG
from airflow import Dag
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.amazon.aws.notifications.sns import send_sns_notification

dag_failure_sns_notification = send_sns_notification(
aws_conn_id="aws_default",
region_name="eu-west-2",
message="The DAG {{ dag.dag_id }} failed",
message="The Dag {{ dag.dag_id }} failed",
target_arn="arn:aws:sns:us-west-2:123456789098:TopicName",
)
task_failure_sns_notification = send_sns_notification(
Expand All @@ -49,7 +49,7 @@ Example Code:
target_arn="arn:aws:sns:us-west-2:123456789098:AnotherTopicName",
)

with DAG(
with Dag(
dag_id="mydag",
schedule="@once",
start_date=datetime(2023, 1, 1),
8 changes: 4 additions & 4 deletions providers/amazon/docs/notifications/sqs.rst
@@ -23,7 +23,7 @@ How-to Guide for Amazon Simple Queue Service (Amazon SQS) notifications
Introduction
------------
`Amazon SQS <https://aws.amazon.com/sqs/>`__ notifier :class:`~airflow.providers.amazon.aws.notifications.sqs.SqsNotifier`
allows users to push messages to an Amazon SQS Queue using the various ``on_*_callbacks`` at both the DAG level and Task level.
allows users to push messages to an Amazon SQS Queue using the various ``on_*_callbacks`` at both the Dag level and Task level.


Example Code:
@@ -32,14 +32,14 @@ Example Code:
.. code-block:: python

from datetime import datetime, timezone
from airflow import DAG
from airflow import Dag
from airflow.providers.standard.operators.bash import BashOperator
from airflow.providers.amazon.aws.notifications.sqs import send_sqs_notification

dag_failure_sqs_notification = send_sqs_notification(
aws_conn_id="aws_default",
queue_url="https://sqs.eu-west-1.amazonaws.com/123456789098/MyQueue",
message_body="The DAG {{ dag.dag_id }} failed",
message_body="The Dag {{ dag.dag_id }} failed",
)
task_failure_sqs_notification = send_sqs_notification(
aws_conn_id="aws_default",
@@ -48,7 +48,7 @@ Example Code:
message_body="The task {{ ti.task_id }} failed",
)

with DAG(
with Dag(
dag_id="mydag",
schedule="@once",
start_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
2 changes: 1 addition & 1 deletion providers/amazon/docs/operators/athena/athena_boto.rst
@@ -48,7 +48,7 @@ to run a query in Amazon Athena.

In the following example, we query an existing Athena table and send the results to
an existing Amazon S3 bucket. For more examples of how to use this operator, please
see the `Sample DAG <https://github.com/apache/airflow/blob/|version|/providers/amazon/tests/system/amazon/aws/example_athena.py>`__.
see the `Sample Dag <https://github.com/apache/airflow/blob/|version|/providers/amazon/tests/system/amazon/aws/example_athena.py>`__.

.. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_athena.py
:language: python
6 changes: 3 additions & 3 deletions providers/amazon/docs/operators/emr/emr.rst
@@ -53,12 +53,12 @@ Create an EMR job flow
You can use :class:`~airflow.providers.amazon.aws.operators.emr.EmrCreateJobFlowOperator` to
create a new EMR job flow. The cluster will be terminated automatically after finishing the steps.

The default behaviour is to mark the DAG Task node as success as soon as the cluster is launched
The default behaviour is to mark the Dag Task node as success as soon as the cluster is launched
(``wait_policy=None``).
It is possible to modify this behaviour by using a different ``wait_policy``. Available options are:

- ``WaitPolicy.WAIT_FOR_COMPLETION`` - DAG Task node waits for the cluster to be running
- ``WaitPolicy.WAIT_FOR_STEPS_COMPLETION`` - DAG Task node waits for the cluster to terminate
- ``WaitPolicy.WAIT_FOR_COMPLETION`` - Dag Task node waits for the cluster to be running
- ``WaitPolicy.WAIT_FOR_STEPS_COMPLETION`` - Dag Task node waits for the cluster to terminate
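
A sketch of how ``wait_policy`` is passed, assuming a minimal ``job_flow_overrides`` and that ``WaitPolicy`` is importable from the EMR operator module (confirm both against the provider reference):

.. code-block:: python

    # Assumed import location for WaitPolicy; check your provider version.
    from airflow.providers.amazon.aws.operators.emr import EmrCreateJobFlowOperator, WaitPolicy

    create_job_flow = EmrCreateJobFlowOperator(
        task_id="create_job_flow",
        job_flow_overrides={
            "Name": "example-emr-cluster",  # placeholder RunJobFlow configuration
            "ReleaseLabel": "emr-7.1.0",
            "Instances": {"KeepJobFlowAliveWhenNoSteps": False},
        },
        # With the default (wait_policy=None) the task succeeds as soon as the cluster launches;
        # WAIT_FOR_STEPS_COMPLETION keeps the task running until the cluster terminates.
        wait_policy=WaitPolicy.WAIT_FOR_STEPS_COMPLETION,
    )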


This operator can be run in deferrable mode by passing ``deferrable=True`` as a parameter.
4 changes: 2 additions & 2 deletions providers/amazon/docs/operators/emr/emr_eks.rst
@@ -45,7 +45,7 @@ Create an Amazon EMR EKS virtual cluster


The ``EmrEksCreateClusterOperator`` will create an Amazon EMR on EKS virtual cluster.
The example DAG below shows how to create an EMR on EKS virtual cluster.
The example Dag below shows how to create an EMR on EKS virtual cluster.

To create an Amazon EMR cluster on Amazon EKS, you need to specify a virtual cluster name,
the EKS cluster that you would like to use, and an EKS namespace.
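
A minimal sketch, with placeholder cluster and namespace names (verify the parameter names against the operator reference), might look like the following:

.. code-block:: python

    from airflow.providers.amazon.aws.operators.emr import EmrEksCreateClusterOperator

    create_emr_eks_cluster = EmrEksCreateClusterOperator(
        task_id="create_emr_eks_cluster",
        virtual_cluster_name="my-emr-virtual-cluster",  # placeholder virtual cluster name
        eks_cluster_name="my-eks-cluster",  # placeholder existing EKS cluster
        eks_namespace="emr",  # placeholder EKS namespace registered with EMR
    )
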
@@ -93,7 +93,7 @@ for more details on job configuration.
:end-before: [END howto_operator_emr_eks_config]

We pass the ``virtual_cluster_id`` and ``execution_role_arn`` values as operator parameters, but you
can store them in a connection or provide them in the DAG. Your AWS region should be defined either
can store them in a connection or provide them in the Dag. Your AWS region should be defined either
in the ``aws_default`` connection as ``{"region_name": "us-east-1"}`` or a custom connection name
that gets passed to the operator with the ``aws_conn_id`` parameter. The operator returns the Job ID of the job run.
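
As an illustrative sketch of passing these values directly, with a placeholder virtual cluster ID, role ARN, release label, and entry point (verify the parameter names against the provider reference):

.. code-block:: python

    from airflow.providers.amazon.aws.operators.emr import EmrContainerOperator

    run_spark_job = EmrContainerOperator(
        task_id="run_spark_job",
        name="example-spark-pi",
        virtual_cluster_id="abc123virtualclusterid",  # placeholder virtual cluster ID
        execution_role_arn="arn:aws:iam::123456789012:role/emr-eks-execution-role",  # placeholder ARN
        release_label="emr-7.1.0-latest",  # placeholder EMR on EKS release
        job_driver={
            "sparkSubmitJobDriver": {
                "entryPoint": "s3://my-bucket/scripts/pi.py",  # placeholder application
            }
        },
        aws_conn_id="aws_default",  # region is taken from this connection unless set explicitly
    )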
