diff --git a/providers/airbyte/docs/index.rst b/providers/airbyte/docs/index.rst index fcd506e5a5a47..ab1c246e7d6b7 100644 --- a/providers/airbyte/docs/index.rst +++ b/providers/airbyte/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/alibaba/docs/index.rst b/providers/alibaba/docs/index.rst index cc57508c9110e..6ab1498356fe9 100644 --- a/providers/alibaba/docs/index.rst +++ b/providers/alibaba/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/alibaba/docs/operators/analyticdb_spark.rst b/providers/alibaba/docs/operators/analyticdb_spark.rst index 1474b7702732c..afda8e3658dbc 100644 --- a/providers/alibaba/docs/operators/analyticdb_spark.rst +++ b/providers/alibaba/docs/operators/analyticdb_spark.rst @@ -32,7 +32,7 @@ Develop Spark batch applications Purpose """"""" -This example dag uses ``AnalyticDBSparkBatchOperator`` to submit Spark Pi and Spark Logistic regression applications. +This example Dag uses ``AnalyticDBSparkBatchOperator`` to submit Spark Pi and Spark Logistic regression applications. Defining tasks """""""""""""" diff --git a/providers/alibaba/docs/operators/oss.rst b/providers/alibaba/docs/operators/oss.rst index 5325ff1691d87..08ee51773f9fa 100644 --- a/providers/alibaba/docs/operators/oss.rst +++ b/providers/alibaba/docs/operators/oss.rst @@ -37,7 +37,7 @@ Create and Delete Alibaba Cloud OSS Buckets Purpose """"""" -This example dag uses ``OSSCreateBucketOperator`` and ``OSSDeleteBucketOperator`` to create a +This example Dag uses ``OSSCreateBucketOperator`` and ``OSSDeleteBucketOperator`` to create a new OSS bucket with a given bucket name then delete it. Defining tasks diff --git a/providers/amazon/docs/auth-manager/manage/index.rst b/providers/amazon/docs/auth-manager/manage/index.rst index 3da22ce864795..cfd9637257665 100644 --- a/providers/amazon/docs/auth-manager/manage/index.rst +++ b/providers/amazon/docs/auth-manager/manage/index.rst @@ -260,10 +260,10 @@ This is equivalent to the :doc:`Op role in Flask AppBuilder `__ diff --git a/providers/amazon/docs/executors/batch-executor.rst b/providers/amazon/docs/executors/batch-executor.rst index ebd13d45ac143..64ac98672ecce 100644 --- a/providers/amazon/docs/executors/batch-executor.rst +++ b/providers/amazon/docs/executors/batch-executor.rst @@ -113,8 +113,8 @@ used by the Batch Executor, the appropriate policy needs to be attached to the E Additionally, the role also needs to have at least the ``CloudWatchLogsFullAccess`` (or ``CloudWatchLogsFullAccessV2``) policies. The Job Role is the role that is used by the containers to make AWS API requests. This role needs to have -permissions based on the tasks that are described in the DAG being run. -If you are loading DAGs via an S3 bucket, this role needs to have +permissions based on the tasks that are described in the Dag being run. +If you are loading Dags via an S3 bucket, this role needs to have permission to read the S3 bucket. 
To create a new Job Role or Execution Role, follow the steps diff --git a/providers/amazon/docs/executors/ecs-executor.rst b/providers/amazon/docs/executors/ecs-executor.rst index 1ea85aaf6229c..9b98e3fc79a3e 100644 --- a/providers/amazon/docs/executors/ecs-executor.rst +++ b/providers/amazon/docs/executors/ecs-executor.rst @@ -134,8 +134,8 @@ ECS Executor, this role needs to have at least the ``AmazonECSTaskExecutionRolePolicy`` as well as the ``CloudWatchLogsFullAccess`` (or ``CloudWatchLogsFullAccessV2``) policies. The Task Role is the role that is used by the containers to make AWS API requests. This role needs to have -permissions based on the tasks that are described in the DAG being run. -If you are loading DAGs via an S3 bucket, this role needs to have +permissions based on the tasks that are described in the Dag being run. +If you are loading Dags via an S3 bucket, this role needs to have permission to read the S3 bucket. To create a new Task Role or Task Execution Role, follow the steps diff --git a/providers/amazon/docs/executors/general.rst b/providers/amazon/docs/executors/general.rst index 9edcc7bd1aa60..a337fffcbf800 100644 --- a/providers/amazon/docs/executors/general.rst +++ b/providers/amazon/docs/executors/general.rst @@ -150,12 +150,12 @@ have python 3.10 installed. .. BEGIN LOADING_DAGS_OVERVIEW -Loading DAGs +Loading Dags ~~~~~~~~~~~~ -There are many ways to load DAGs on a container used by |executorName|. This Dockerfile +There are many ways to load Dags on a container used by |executorName|. This Dockerfile is preconfigured with two possible ways: copying from a local folder, or -downloading from an S3 bucket. Other methods of loading DAGs are +downloading from an S3 bucket. Other methods of loading Dags are possible as well. .. END LOADING_DAGS_OVERVIEW @@ -165,11 +165,11 @@ possible as well. From S3 Bucket ^^^^^^^^^^^^^^ -To load DAGs from an S3 bucket, uncomment the entrypoint line in the -Dockerfile to synchronize the DAGs from the specified S3 bucket to the +To load Dags from an S3 bucket, uncomment the entrypoint line in the +Dockerfile to synchronize the Dags from the specified S3 bucket to the ``/opt/airflow/dags`` directory inside the container. You can optionally provide ``container_dag_path`` as a build argument if you want to store -the DAGs in a directory other than ``/opt/airflow/dags``. +the Dags in a directory other than ``/opt/airflow/dags``. Add ``--build-arg s3_uri=YOUR_S3_URI`` in the docker build command. Replace ``YOUR_S3_URI`` with the URI of your S3 bucket. Make sure you @@ -194,10 +194,10 @@ build arguments. From Local Folder ^^^^^^^^^^^^^^^^^ -To load DAGs from a local folder, place your DAG files in a folder +To load Dags from a local folder, place your Dag files in a folder within the docker build context on your host machine, and provide the location of the folder using the ``host_dag_path`` build argument. By -default, the DAGs will be copied to ``/opt/airflow/dags``, but this can +default, the Dags will be copied to ``/opt/airflow/dags``, but this can be changed by passing the ``container_dag_path`` build-time argument during the Docker build process: @@ -205,7 +205,7 @@ during the Docker build process: docker build -t my-airflow-image --build-arg host_dag_path=./dags_on_host --build-arg container_dag_path=/path/on/container . -If choosing to load DAGs onto a different path than +If choosing to load Dags onto a different path than ``/opt/airflow/dags``, then the new path will need to be updated in the Airflow config. 
diff --git a/providers/amazon/docs/executors/lambda-executor.rst b/providers/amazon/docs/executors/lambda-executor.rst index 8853d9a48085d..c1504c9bde8a3 100644 --- a/providers/amazon/docs/executors/lambda-executor.rst +++ b/providers/amazon/docs/executors/lambda-executor.rst @@ -122,7 +122,7 @@ provider package. The most secure method is to use IAM roles. When creating a Lambda Function Definition, you are able to select an execution role. This role needs permissions to publish messages to the SQS queues and to write to CloudWatchLogs -or S3 if using AWS remote logging and/or using S3 to synchronize dags +or S3 if using AWS remote logging and/or using S3 to synchronize Dags (e.g. ``CloudWatchLogsFullAccess`` or ``CloudWatchLogsFullAccessV2``). The AWS credentials used on the Scheduler need permissions to describe and invoke Lambda functions as well as to describe and read/delete @@ -170,12 +170,12 @@ From S3 Bucket ^^^^^^^^^^^^^^ Dags can be loaded from S3 when using the provided example app.py, which -contains logic to synchronize the DAGs from S3 to the local filesystem of +contains logic to synchronize the Dags from S3 to the local filesystem of the Lambda function (see the app.py code |appHandlerLink|). To load Dags from an S3 bucket add ``--build-arg s3_uri=YOUR_S3_URI`` in the docker build command. Replace ``YOUR_S3_URI`` with the URI of your S3 -bucket/path containing your dags. Make sure you have the appropriate +bucket/path containing your Dags. Make sure you have the appropriate permissions to read from the bucket. .. code-block:: bash diff --git a/providers/amazon/docs/index.rst b/providers/amazon/docs/index.rst index e9ee9fed0adee..94fde8a425f62 100644 --- a/providers/amazon/docs/index.rst +++ b/providers/amazon/docs/index.rst @@ -66,7 +66,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/amazon/docs/logging/s3-task-handler.rst b/providers/amazon/docs/logging/s3-task-handler.rst index 67dfcb3fbb5c5..f0100d705299b 100644 --- a/providers/amazon/docs/logging/s3-task-handler.rst +++ b/providers/amazon/docs/logging/s3-task-handler.rst @@ -125,7 +125,7 @@ With the above configurations, Webserver and Worker Pods can access Amazon S3 bu - Using Airflow Web UI - The final step to create connections under Airflow UI before executing the DAGs. + The final step to create connections under Airflow UI before executing the Dags. * Login to Airflow Web UI with ``admin`` credentials and Navigate to ``Admin -> Connections`` * Create connection for ``Amazon Web Services`` and select the options (Connection ID and Connection Type) as shown in the image. 
@@ -141,6 +141,6 @@ With the above configurations, Webserver and Worker Pods can access Amazon S3 bu Step4: Verify the logs ~~~~~~~~~~~~~~~~~~~~~~ -* Execute example DAGs +* Execute example Dags * Verify the logs in S3 bucket -* Verify the logs from Airflow UI from DAGs log +* Verify the logs from Airflow UI from Dags log diff --git a/providers/amazon/docs/notifications/chime_notifier_howto_guide.rst b/providers/amazon/docs/notifications/chime_notifier_howto_guide.rst index e15c3a8c0c8e4..fb615ee6ce2db 100644 --- a/providers/amazon/docs/notifications/chime_notifier_howto_guide.rst +++ b/providers/amazon/docs/notifications/chime_notifier_howto_guide.rst @@ -21,7 +21,7 @@ How-to Guide for Chime notifications Introduction ------------ Chime notifier (:class:`airflow.providers.amazon.aws.notifications.chime.ChimeNotifier`) allows users to send -messages to a Chime chat room setup via a webhook using the various ``on_*_callbacks`` at both the DAG level and Task level +messages to a Chime chat room setup via a webhook using the various ``on_*_callbacks`` at both the Dag level and Task level Example Code: @@ -30,16 +30,16 @@ Example Code: .. code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.amazon.aws.notifications.chime import send_chime_notification - with DAG( + with Dag( dag_id="mydag", schedule="@once", start_date=datetime(2023, 6, 27), on_success_callback=[ - send_chime_notification(chime_conn_id="my_chime_conn", message="The DAG {{ dag.dag_id }} succeeded") + send_chime_notification(chime_conn_id="my_chime_conn", message="The Dag {{ dag.dag_id }} succeeded") ], catchup=False, ): diff --git a/providers/amazon/docs/notifications/sns.rst b/providers/amazon/docs/notifications/sns.rst index 262cd966ae418..10c16389c0354 100644 --- a/providers/amazon/docs/notifications/sns.rst +++ b/providers/amazon/docs/notifications/sns.rst @@ -23,7 +23,7 @@ How-to Guide for Amazon Simple Notification Service (Amazon SNS) notifications Introduction ------------ `Amazon SNS `__ notifier :class:`~airflow.providers.amazon.aws.notifications.sns.SnsNotifier` -allows users to push messages to a SNS Topic using the various ``on_*_callbacks`` at both the DAG level and Task level. +allows users to push messages to a SNS Topic using the various ``on_*_callbacks`` at both the Dag level and Task level. Example Code: @@ -32,14 +32,14 @@ Example Code: .. 
code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.amazon.aws.notifications.sns import send_sns_notification dag_failure_sns_notification = send_sns_notification( aws_conn_id="aws_default", region_name="eu-west-2", - message="The DAG {{ dag.dag_id }} failed", + message="The Dag {{ dag.dag_id }} failed", target_arn="arn:aws:sns:us-west-2:123456789098:TopicName", ) task_failure_sns_notification = send_sns_notification( @@ -49,7 +49,7 @@ Example Code: target_arn="arn:aws:sns:us-west-2:123456789098:AnotherTopicName", ) - with DAG( + with Dag( dag_id="mydag", schedule="@once", start_date=datetime(2023, 1, 1), diff --git a/providers/amazon/docs/notifications/sqs.rst b/providers/amazon/docs/notifications/sqs.rst index d74a2477d62ca..3023a2c06f199 100644 --- a/providers/amazon/docs/notifications/sqs.rst +++ b/providers/amazon/docs/notifications/sqs.rst @@ -23,7 +23,7 @@ How-to Guide for Amazon Simple Queue Service (Amazon SQS) notifications Introduction ------------ `Amazon SQS `__ notifier :class:`~airflow.providers.amazon.aws.notifications.sqs.SqsNotifier` -allows users to push messages to an Amazon SQS Queue using the various ``on_*_callbacks`` at both the DAG level and Task level. +allows users to push messages to an Amazon SQS Queue using the various ``on_*_callbacks`` at both the Dag level and Task level. Example Code: @@ -32,14 +32,14 @@ Example Code: .. code-block:: python from datetime import datetime, timezone - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.amazon.aws.notifications.sqs import send_sqs_notification dag_failure_sqs_notification = send_sqs_notification( aws_conn_id="aws_default", queue_url="https://sqs.eu-west-1.amazonaws.com/123456789098/MyQueue", - message_body="The DAG {{ dag.dag_id }} failed", + message_body="The Dag {{ dag.dag_id }} failed", ) task_failure_sqs_notification = send_sqs_notification( aws_conn_id="aws_default", @@ -48,7 +48,7 @@ Example Code: message_body="The task {{ ti.task_id }} failed", ) - with DAG( + with Dag( dag_id="mydag", schedule="@once", start_date=datetime(2023, 1, 1, tzinfo=timezone.utc), diff --git a/providers/amazon/docs/operators/athena/athena_boto.rst b/providers/amazon/docs/operators/athena/athena_boto.rst index 8d7bbe8c46f72..b6839be8532c8 100644 --- a/providers/amazon/docs/operators/athena/athena_boto.rst +++ b/providers/amazon/docs/operators/athena/athena_boto.rst @@ -48,7 +48,7 @@ to run a query in Amazon Athena. In the following example, we query an existing Athena table and send the results to an existing Amazon S3 bucket. For more examples of how to use this operator, please -see the `Sample DAG `__. +see the `Sample Dag `__. .. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_athena.py :language: python diff --git a/providers/amazon/docs/operators/emr/emr.rst b/providers/amazon/docs/operators/emr/emr.rst index d75da06a559f7..2d63fafcc2612 100644 --- a/providers/amazon/docs/operators/emr/emr.rst +++ b/providers/amazon/docs/operators/emr/emr.rst @@ -53,12 +53,12 @@ Create an EMR job flow You can use :class:`~airflow.providers.amazon.aws.operators.emr.EmrCreateJobFlowOperator` to create a new EMR job flow. The cluster will be terminated automatically after finishing the steps. 
-The default behaviour is to mark the DAG Task node as success as soon as the cluster is launched +The default behaviour is to mark the Dag Task node as success as soon as the cluster is launched (``wait_policy=None``). It is possible to modify this behaviour by using a different ``wait_policy``. Available options are: -- ``WaitPolicy.WAIT_FOR_COMPLETION`` - DAG Task node waits for the cluster to be running -- ``WaitPolicy.WAIT_FOR_STEPS_COMPLETION`` - DAG Task node waits for the cluster to terminate +- ``WaitPolicy.WAIT_FOR_COMPLETION`` - Dag Task node waits for the cluster to be running +- ``WaitPolicy.WAIT_FOR_STEPS_COMPLETION`` - Dag Task node waits for the cluster to terminate This operator can be run in deferrable mode by passing ``deferrable=True`` as a parameter. diff --git a/providers/amazon/docs/operators/emr/emr_eks.rst b/providers/amazon/docs/operators/emr/emr_eks.rst index a18c36b483ea9..bb016698522d5 100644 --- a/providers/amazon/docs/operators/emr/emr_eks.rst +++ b/providers/amazon/docs/operators/emr/emr_eks.rst @@ -45,7 +45,7 @@ Create an Amazon EMR EKS virtual cluster The ``EmrEksCreateClusterOperator`` will create an Amazon EMR on EKS virtual cluster. -The example DAG below shows how to create an EMR on EKS virtual cluster. +The example Dag below shows how to create an EMR on EKS virtual cluster. To create an Amazon EMR cluster on Amazon EKS, you need to specify a virtual cluster name, the eks cluster that you would like to use , and an eks namespace. @@ -93,7 +93,7 @@ for more details on job configuration. :end-before: [END howto_operator_emr_eks_config] We pass the ``virtual_cluster_id`` and ``execution_role_arn`` values as operator parameters, but you -can store them in a connection or provide them in the DAG. Your AWS region should be defined either +can store them in a connection or provide them in the Dag. Your AWS region should be defined either in the ``aws_default`` connection as ``{"region_name": "us-east-1"}`` or a custom connection name that gets passed to the operator with the ``aws_conn_id`` parameter. The operator returns the Job ID of the job run. diff --git a/providers/amazon/docs/operators/mwaa.rst b/providers/amazon/docs/operators/mwaa.rst index fc248288c10f8..d841c023cdd5e 100644 --- a/providers/amazon/docs/operators/mwaa.rst +++ b/providers/amazon/docs/operators/mwaa.rst @@ -42,13 +42,13 @@ Operators .. _howto/operator:MwaaTriggerDagRunOperator: -Trigger a DAG run in an Amazon MWAA environment +Trigger a Dag run in an Amazon MWAA environment =============================================== -To trigger a DAG run in an Amazon MWAA environment you can use the +To trigger a Dag run in an Amazon MWAA environment you can use the :class:`~airflow.providers.amazon.aws.operators.mwaa.MwaaTriggerDagRunOperator` -In the following example, the task ``trigger_dag_run`` triggers a DAG run for the DAG ``hello_world`` in the environment +In the following example, the task ``trigger_dag_run`` triggers a Dag run for the Dag ``hello_world`` in the environment ``MyAirflowEnvironment`` and waits for the run to complete. .. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_mwaa.py @@ -62,13 +62,13 @@ Sensors .. 
_howto/sensor:MwaaDagRunSensor: -Wait on the state of an AWS MWAA DAG Run +Wait on the state of an AWS MWAA Dag Run ======================================== -To wait for a DAG Run running on Amazon MWAA until it reaches one of the given states, you can use the +To wait for a Dag Run running on Amazon MWAA until it reaches one of the given states, you can use the :class:`~airflow.providers.amazon.aws.sensors.mwaa.MwaaDagRunSensor` -In the following example, the task ``wait_for_dag_run`` waits for the DAG run created in the above task to complete. +In the following example, the task ``wait_for_dag_run`` waits for the Dag run created in the above task to complete. .. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_mwaa.py :language: python @@ -81,10 +81,10 @@ In the following example, the task ``wait_for_dag_run`` waits for the DAG run cr Wait on the state of an AWS MWAA Task ======================================== -To wait for a DAG task instance across MWAA environments until it reaches one of the given states, you can use the +To wait for a Dag task instance across MWAA environments until it reaches one of the given states, you can use the :class:`~airflow.providers.amazon.aws.sensors.mwaa.MwaaTaskSensor` -In the following example, the task ``wait_for_task`` waits for the DAG run created in the above task to complete. +In the following example, the task ``wait_for_task`` waits for the Dag run created in the above task to complete. .. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_mwaa.py :language: python diff --git a/providers/amazon/docs/operators/sagemaker.rst b/providers/amazon/docs/operators/sagemaker.rst index 4601d9a4e88b1..5103b868455bb 100644 --- a/providers/amazon/docs/operators/sagemaker.rst +++ b/providers/amazon/docs/operators/sagemaker.rst @@ -189,7 +189,7 @@ The result of executing this operator is a model package. A model package is a reusable model artifacts abstraction that packages all ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with a model weights location. A model package group is a collection of model packages. -You can use this operator to add a new version and model package to the group for every DAG run. +You can use this operator to add a new version and model package to the group for every Dag run. .. exampleinclude:: /../../amazon/tests/system/amazon/aws/example_sagemaker.py :language: python diff --git a/providers/amazon/docs/transfer/google_api_to_s3.rst b/providers/amazon/docs/transfer/google_api_to_s3.rst index 88bcb94824d78..dbd3843344a35 100644 --- a/providers/amazon/docs/transfer/google_api_to_s3.rst +++ b/providers/amazon/docs/transfer/google_api_to_s3.rst @@ -50,7 +50,7 @@ You can find more information about the Google API endpoint used Google Youtube to Amazon S3 =========================== -This is a more advanced example dag for using ``GoogleApiToS3Operator`` which uses xcom to pass data between +This is a more advanced example Dag for using ``GoogleApiToS3Operator`` which uses xcom to pass data between tasks to retrieve specific information about YouTube videos. It searches for up to 50 videos (due to pagination) in a given time range diff --git a/providers/apache/beam/docs/index.rst b/providers/apache/beam/docs/index.rst index 65a4558415c89..daabdecad4f99 100644 --- a/providers/apache/beam/docs/index.rst +++ b/providers/apache/beam/docs/index.rst @@ -48,7 +48,7 @@ :caption: Resources PyPI Repository - Example DAGs + Example Dags .. 
toctree:: :hidden: diff --git a/providers/apache/cassandra/docs/index.rst b/providers/apache/cassandra/docs/index.rst index b11fdbfc5fc2f..36b340eb38b2e 100644 --- a/providers/apache/cassandra/docs/index.rst +++ b/providers/apache/cassandra/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/drill/docs/index.rst b/providers/apache/drill/docs/index.rst index d073f03c4b0e6..ff2e0e72dd70a 100644 --- a/providers/apache/drill/docs/index.rst +++ b/providers/apache/drill/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/druid/docs/index.rst b/providers/apache/druid/docs/index.rst index cfa721461836c..b44b0c7f982c8 100644 --- a/providers/apache/druid/docs/index.rst +++ b/providers/apache/druid/docs/index.rst @@ -57,7 +57,7 @@ PyPI Repository Installing from sources - Example DAGs + Example Dags .. THE REMAINDER OF THE FILE IS AUTOMATICALLY GENERATED. IT WILL BE OVERWRITTEN AT RELEASE TIME! diff --git a/providers/apache/flink/docs/index.rst b/providers/apache/flink/docs/index.rst index d5df649576ddc..fde752e48ec72 100644 --- a/providers/apache/flink/docs/index.rst +++ b/providers/apache/flink/docs/index.rst @@ -47,7 +47,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/hive/docs/index.rst b/providers/apache/hive/docs/index.rst index c96acb55a85f2..8c953c8e372f0 100644 --- a/providers/apache/hive/docs/index.rst +++ b/providers/apache/hive/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources Macros diff --git a/providers/apache/iceberg/docs/index.rst b/providers/apache/iceberg/docs/index.rst index 409ef97b646c0..ea1f6a33064ad 100644 --- a/providers/apache/iceberg/docs/index.rst +++ b/providers/apache/iceberg/docs/index.rst @@ -49,7 +49,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources Python API <_api/airflow/providers/apache/iceberg/index> diff --git a/providers/apache/kafka/docs/index.rst b/providers/apache/kafka/docs/index.rst index 8e86319766d0b..ad4e14c06a6e8 100644 --- a/providers/apache/kafka/docs/index.rst +++ b/providers/apache/kafka/docs/index.rst @@ -62,7 +62,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/kafka/docs/message-queues/index.rst b/providers/apache/kafka/docs/message-queues/index.rst index 34785e34977b4..abb8a04d7ed9e 100644 --- a/providers/apache/kafka/docs/message-queues/index.rst +++ b/providers/apache/kafka/docs/message-queues/index.rst @@ -83,7 +83,7 @@ Inherited from :class:`~airflow.providers.common.messaging.triggers.msg_queue.Me Wait for a message in a queue ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Below is an example of how you can configure an Airflow DAG to be triggered by a message in Apache Kafka. +Below is an example of how you can configure an Airflow Dag to be triggered by a message in Apache Kafka. .. exampleinclude:: /../tests/system/apache/kafka/example_dag_kafka_message_queue_trigger.py :language: python @@ -99,7 +99,7 @@ How it works The ``AssetWatcher`` associate a trigger with a name. This name helps you identify which trigger is associated to which asset. -3. 
**Event-Driven DAG**: Instead of running on a fixed schedule, the DAG executes when the asset receives an update +3. **Event-Driven Dag**: Instead of running on a fixed schedule, the Dag executes when the asset receives an update (e.g., a new message in the queue). For how to use the trigger, refer to the documentation of the diff --git a/providers/apache/kylin/docs/index.rst b/providers/apache/kylin/docs/index.rst index fc5565e37b3d9..41af8783e2760 100644 --- a/providers/apache/kylin/docs/index.rst +++ b/providers/apache/kylin/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/livy/docs/index.rst b/providers/apache/livy/docs/index.rst index 397744c69229f..094a654f87749 100644 --- a/providers/apache/livy/docs/index.rst +++ b/providers/apache/livy/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/pig/docs/index.rst b/providers/apache/pig/docs/index.rst index 5c7f9b2d0dc04..f5963857441c1 100644 --- a/providers/apache/pig/docs/index.rst +++ b/providers/apache/pig/docs/index.rst @@ -54,7 +54,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/pinot/docs/index.rst b/providers/apache/pinot/docs/index.rst index 3c6c9170d6637..d5a3c05d8d4f4 100644 --- a/providers/apache/pinot/docs/index.rst +++ b/providers/apache/pinot/docs/index.rst @@ -41,7 +41,7 @@ :maxdepth: 1 :caption: References - Example DAGs + Example Dags Python API <_api/airflow/providers/apache/pinot/index> PyPI Repository Installing from sources diff --git a/providers/apache/spark/docs/index.rst b/providers/apache/spark/docs/index.rst index 815aad2c3776c..407a98e110b57 100644 --- a/providers/apache/spark/docs/index.rst +++ b/providers/apache/spark/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apache/tinkerpop/docs/index.rst b/providers/apache/tinkerpop/docs/index.rst index e6cb0188c0bd7..089085423cd51 100644 --- a/providers/apache/tinkerpop/docs/index.rst +++ b/providers/apache/tinkerpop/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/apprise/docs/notifications/apprise_notifier_howto_guide.rst b/providers/apprise/docs/notifications/apprise_notifier_howto_guide.rst index 2a0aeaaa10764..392ed2e694f8c 100644 --- a/providers/apprise/docs/notifications/apprise_notifier_howto_guide.rst +++ b/providers/apprise/docs/notifications/apprise_notifier_howto_guide.rst @@ -21,7 +21,7 @@ How-to Guide for Apprise notifications Introduction ------------ The apprise notifier (:class:`airflow.providers.apprise.notifications.apprise.AppriseNotifier`) allows users to send -messages to `multiple service `_ using the various ``on_*_callbacks`` at both the DAG level and Task level. +messages to `multiple service `_ using the various ``on_*_callbacks`` at both the Dag level and Task level. Example Code: ------------- @@ -29,18 +29,18 @@ Example Code: .. 
code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.apprise.notifications.apprise import send_apprise_notification from apprise import NotifyType - with DAG( + with Dag( dag_id="apprise_notifier_testing", schedule=None, start_date=datetime(2024, 1, 1), catchup=False, on_success_callback=[ - send_apprise_notification(body="The dag {{ dag.dag_id }} succeeded", notify_type=NotifyType.SUCCESS) + send_apprise_notification(body="The Dag {{ dag.dag_id }} succeeded", notify_type=NotifyType.SUCCESS) ], ): BashOperator( diff --git a/providers/arangodb/docs/index.rst b/providers/arangodb/docs/index.rst index 2cca11d37dc5d..6909edc5fe789 100644 --- a/providers/arangodb/docs/index.rst +++ b/providers/arangodb/docs/index.rst @@ -49,7 +49,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs <_api/airflow/providers/arangodb/example_dags/index> + Example Dags <_api/airflow/providers/arangodb/example_dags/index> PyPI Repository Installing from sources diff --git a/providers/arangodb/docs/operators/index.rst b/providers/arangodb/docs/operators/index.rst index 83e1893ea80f3..c6e787421b3d8 100644 --- a/providers/arangodb/docs/operators/index.rst +++ b/providers/arangodb/docs/operators/index.rst @@ -38,7 +38,7 @@ An example of Listing all Documents in **students** collection can be implemente :end-before: [END howto_aql_operator_arangodb] You can also provide file template (.sql) to load query, remember path is relative to **dags/** folder, if you want to provide any other path -please provide **template_searchpath** while creating **DAG** object, +please provide **template_searchpath** while creating **Dag** object, .. exampleinclude:: /../../arangodb/src/airflow/providers/arangodb/example_dags/example_arangodb.py :language: python diff --git a/providers/asana/docs/index.rst b/providers/asana/docs/index.rst index be58cc52483f2..3366e0bc099fc 100644 --- a/providers/asana/docs/index.rst +++ b/providers/asana/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/atlassian/jira/docs/notifications/jira-notifier-howto-guide.rst b/providers/atlassian/jira/docs/notifications/jira-notifier-howto-guide.rst index a5617b9035de0..bca418cf39d1c 100644 --- a/providers/atlassian/jira/docs/notifications/jira-notifier-howto-guide.rst +++ b/providers/atlassian/jira/docs/notifications/jira-notifier-howto-guide.rst @@ -22,7 +22,7 @@ How-to guide for Atlassian Jira notifications Introduction ------------ The Atlassian Jira notifier (:class:`airflow.providers.atlassian.jira.notifications.jira.JiraNotifier`) allows users to create -issues in a Jira instance using the various ``on_*_callbacks`` available at both the DAG level and Task level +issues in a Jira instance using the various ``on_*_callbacks`` available at both the Dag level and Task level Example Code ------------ @@ -30,18 +30,18 @@ Example Code .. 
code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.atlassian.jira.notifications.jira import send_jira_notification - with DAG( + with Dag( "test-dag", start_date=datetime(2023, 11, 3), on_failure_callback=[ send_jira_notification( jira_conn_id="my-jira-conn", - description="Failure in the DAG {{ dag.dag_id }}", - summary="Airflow DAG Issue", + description="Failure in the Dag {{ dag.dag_id }}", + summary="Airflow Dag Issue", project_id=10000, issue_type_id=10003, labels=["airflow-dag-failure"], diff --git a/providers/celery/docs/celery_executor.rst b/providers/celery/docs/celery_executor.rst index 37ac8e6a2dab0..6017c68c83c04 100644 --- a/providers/celery/docs/celery_executor.rst +++ b/providers/celery/docs/celery_executor.rst @@ -116,7 +116,7 @@ Architecture scheduler[label="Scheduler"] web[label="Web server"] database[label="Database"] - dag[label="DAG files"] + dag[label="Dag files"] subgraph cluster_queue { label="Celery"; @@ -145,8 +145,8 @@ Airflow consist of several components: * **Workers** - Execute the assigned tasks * **Scheduler** - Responsible for adding the necessary tasks to the queue -* **Web server** - HTTP Server provides access to DAG/task status information -* **Database** - Contains information about the status of tasks, DAGs, Variables, connections, etc. +* **Web server** - HTTP Server provides access to Dag/task status information +* **Database** - Contains information about the status of tasks, Dags, Variables, connections, etc. * **Celery** - Queue mechanism Please note that the queue at Celery consists of two components: @@ -157,14 +157,14 @@ Please note that the queue at Celery consists of two components: The components communicate with each other in many places * [1] **Web server** --> **Workers** - Fetches task execution logs -* [2] **Web server** --> **DAG files** - Reveal the DAG structure +* [2] **Web server** --> **Dag files** - Reveal the Dag structure * [3] **Web server** --> **Database** - Fetch the status of the tasks -* [4] **Workers** --> **DAG files** - Reveal the DAG structure and execute the tasks +* [4] **Workers** --> **Dag files** - Reveal the Dag structure and execute the tasks * [5] **Workers** --> **Database** - Gets and stores information about connection configuration, variables and XCOM. 
* [6] **Workers** --> **Celery's result backend** - Saves the status of tasks * [7] **Workers** --> **Celery's broker** - Stores commands for execution -* [8] **Scheduler** --> **DAG files** - Reveal the DAG structure and execute the tasks -* [9] **Scheduler** --> **Database** - Store a DAG run and related tasks +* [8] **Scheduler** --> **Dag files** - Reveal the Dag structure and execute the tasks +* [9] **Scheduler** --> **Database** - Store a Dag run and related tasks * [10] **Scheduler** --> **Celery's result backend** - Gets information about the status of completed tasks * [11] **Scheduler** --> **Celery's broker** - Put the commands to be executed diff --git a/providers/cncf/kubernetes/docs/index.rst b/providers/cncf/kubernetes/docs/index.rst index a4a54b93346a6..1ee98c04ad9ff 100644 --- a/providers/cncf/kubernetes/docs/index.rst +++ b/providers/cncf/kubernetes/docs/index.rst @@ -66,7 +66,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/cncf/kubernetes/docs/kubernetes_executor.rst b/providers/cncf/kubernetes/docs/kubernetes_executor.rst index 9af51570c254a..9be3eac152d47 100644 --- a/providers/cncf/kubernetes/docs/kubernetes_executor.rst +++ b/providers/cncf/kubernetes/docs/kubernetes_executor.rst @@ -36,7 +36,7 @@ not necessarily need to be running on Kubernetes, but does need access to a Kube KubernetesExecutor requires a non-sqlite database in the backend. -When a DAG submits a task, the KubernetesExecutor requests a worker pod from the Kubernetes API. The worker pod then runs the task, reports the result, and terminates. +When a Dag submits a task, the KubernetesExecutor requests a worker pod from the Kubernetes API. The worker pod then runs the task, reports the result, and terminates. .. image:: img/arch-diag-kubernetes.png @@ -45,7 +45,7 @@ One example of an Airflow deployment running on a distributed set of five nodes .. image:: img/arch-diag-kubernetes2.png -Consistent with the regular Airflow architecture, the Workers need access to the DAG files to execute the tasks within those DAGs and interact with the Metadata repository. Also, configuration information specific to the Kubernetes Executor, such as the worker namespace and image information, needs to be specified in the Airflow Configuration file. +Consistent with the regular Airflow architecture, the Workers need access to the Dag files to execute the tasks within those Dags and interact with the Metadata repository. Also, configuration information specific to the Kubernetes Executor, such as the worker namespace and image information, needs to be specified in the Airflow Configuration file. Additionally, the Kubernetes Executor enables specification of additional features on a per-task basis using the Executor config. @@ -103,24 +103,24 @@ With these requirements in mind, here are some examples of basic ``pod_template_ The examples below should work when using default Airflow configuration values. However, many custom configuration values need to be explicitly passed to the pod via this template too. This includes, - but is not limited to, sql configuration, required Airflow connections, DAGs folder path and + but is not limited to, sql configuration, required Airflow connections, Dags folder path and logging settings. See :doc:`../../configurations-ref` for details. -Storing DAGs in the image: +Storing Dags in the image: .. 
literalinclude:: /../src/airflow/providers/cncf/kubernetes/pod_template_file_examples/dags_in_image_template.yaml :language: yaml :start-after: [START template_with_dags_in_image] :end-before: [END template_with_dags_in_image] -Storing DAGs in a ``persistentVolume``: +Storing Dags in a ``persistentVolume``: .. literalinclude:: /../src/airflow/providers/cncf/kubernetes/pod_template_file_examples/dags_in_volume_template.yaml :language: yaml :start-after: [START template_with_dags_in_volume] :end-before: [END template_with_dags_in_volume] -Pulling DAGs from ``git``: +Pulling Dags from ``git``: .. literalinclude:: /../src/airflow/providers/cncf/kubernetes/pod_template_file_examples/git_sync_template.yaml :language: yaml @@ -165,14 +165,14 @@ Here is an example of a task with both features: import pendulum - from airflow import DAG + from airflow import Dag from airflow.decorators import task from airflow.example_dags.libs.helper import print_stuff from airflow.settings import AIRFLOW_HOME from kubernetes.client import models as k8s - with DAG( + with Dag( dag_id="example_pod_template_file", schedule=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), @@ -189,18 +189,18 @@ Here is an example of a task with both features: print_stuff() -Managing DAGs and logs +Managing Dags and logs ~~~~~~~~~~~~~~~~~~~~~~ Use of persistent volumes is optional and depends on your configuration. - **Dags**: -To get the DAGs into the workers, you can: +To get the Dags into the workers, you can: - - Include DAGs in the image. - - Use ``git-sync`` which, before starting the worker container, will run a ``git pull`` of the DAGs repository. - - Storing DAGs on a persistent volume, which can be mounted on all workers. + - Include Dags in the image. + - Use ``git-sync`` which, before starting the worker container, will run a ``git pull`` of the Dags repository. + - Storing Dags on a persistent volume, which can be mounted on all workers. - **Logs**: diff --git a/providers/cncf/kubernetes/docs/operators.rst b/providers/cncf/kubernetes/docs/operators.rst index a18e7b2d8f75a..6d8d21cf8afd5 100644 --- a/providers/cncf/kubernetes/docs/operators.rst +++ b/providers/cncf/kubernetes/docs/operators.rst @@ -111,7 +111,7 @@ Difference between ``KubernetesPodOperator`` and Kubernetes object spec ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` can be considered a substitute for a Kubernetes object spec definition that is able -to be run in the Airflow scheduler in the DAG context. If using the operator, there is no need to create the +to be run in the Airflow scheduler in the Dag context. If using the operator, there is no need to create the equivalent YAML/JSON object spec for the Pod you would like to run. The YAML file can still be provided with the ``pod_template_file`` or even the Pod Spec constructed in Python via the ``full_pod_spec`` parameter which requires a Kubernetes ``V1Pod``. 
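A minimal sketch of the ``full_pod_spec`` approach described above, assuming the standard ``kubernetes`` Python client; the pod name, image, and command are illustrative placeholders:

.. code-block:: python

    from kubernetes.client import models as k8s

    from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

    # Build the pod object in Python instead of writing an equivalent YAML/JSON spec.
    pod = k8s.V1Pod(
        metadata=k8s.V1ObjectMeta(name="example-full-pod-spec"),
        spec=k8s.V1PodSpec(
            containers=[
                k8s.V1Container(
                    # "base" is the operator's default main-container name; other names
                    # may not be picked up for log and XCom handling.
                    name="base",
                    image="python:3.12-slim",
                    command=["python", "-c", "print('hello from full_pod_spec')"],
                )
            ]
        ),
    )

    run_pod = KubernetesPodOperator(
        task_id="run_pod_from_full_pod_spec",
        full_pod_spec=pod,
    )

Instantiated inside a Dag definition, this runs the supplied pod as a single task; fields left out of the ``V1Pod`` fall back to the operator's defaults.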
diff --git a/providers/common/io/docs/index.rst b/providers/common/io/docs/index.rst index 1c77218341df0..a3e240ca85842 100644 --- a/providers/common/io/docs/index.rst +++ b/providers/common/io/docs/index.rst @@ -58,7 +58,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/common/messaging/docs/triggers.rst b/providers/common/messaging/docs/triggers.rst index 8d02836f4d91a..c841ba90deff4 100644 --- a/providers/common/messaging/docs/triggers.rst +++ b/providers/common/messaging/docs/triggers.rst @@ -33,7 +33,7 @@ Additional parameters can be provided depending on the queue provider. Connectio default connection ID, for example, when connecting to a queue in AWS SQS, the connection ID should be ``aws_default``. -Below is an example of how you can configure an Airflow DAG to be triggered by a message in Amazon SQS. +Below is an example of how you can configure an Airflow Dag to be triggered by a message in Amazon SQS. .. exampleinclude:: /../tests/system/common/messaging/example_message_queue_trigger.py :language: python @@ -49,5 +49,5 @@ How it works The ``AssetWatcher`` associate a trigger with a name. This name helps you identify which trigger is associated to which asset. -3. **Event-Driven DAG**: Instead of running on a fixed schedule, the DAG executes when the asset receives an update +3. **Event-Driven Dag**: Instead of running on a fixed schedule, the Dag executes when the asset receives an update (e.g., a new message in the queue). diff --git a/providers/common/sql/docs/index.rst b/providers/common/sql/docs/index.rst index ff40678da5954..db37246ccd097 100644 --- a/providers/common/sql/docs/index.rst +++ b/providers/common/sql/docs/index.rst @@ -58,7 +58,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/databricks/docs/index.rst b/providers/databricks/docs/index.rst index d7371ef02c8ae..dc6554b563fc7 100644 --- a/providers/databricks/docs/index.rst +++ b/providers/databricks/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/databricks/docs/operators/task.rst b/providers/databricks/docs/operators/task.rst index 64df29e976cfe..5c446593531a0 100644 --- a/providers/databricks/docs/operators/task.rst +++ b/providers/databricks/docs/operators/task.rst @@ -22,7 +22,7 @@ DatabricksTaskOperator ====================== Use the :class:`~airflow.providers.databricks.operators.databricks.DatabricksTaskOperator` to launch and monitor -task runs on Databricks as Airflow tasks. This can be used as a standalone operator in a DAG and as well as part of a +task runs on Databricks as Airflow tasks. This can be used as a standalone operator in a Dag and as well as part of a Databricks Workflow by using it as an operator(task) within the :class:`~airflow.providers.databricks.operators.databricks_workflow.DatabricksWorkflowTaskGroup`. diff --git a/providers/databricks/docs/operators/workflow.rst b/providers/databricks/docs/operators/workflow.rst index 646737c293983..16a5b769ec825 100644 --- a/providers/databricks/docs/operators/workflow.rst +++ b/providers/databricks/docs/operators/workflow.rst @@ -28,7 +28,7 @@ Databricks notebook job runs as Airflow tasks. 
The task group launches a `Databr There are a few advantages to defining your Databricks Workflows in Airflow: ======================================= ============================================= ================================= -Authoring interface via Databricks (Web-based with Databricks UI) via Airflow(Code with Airflow DAG) +Authoring interface via Databricks (Web-based with Databricks UI) via Airflow(Code with Airflow Dag) ======================================= ============================================= ================================= Workflow compute pricing ✅ ✅ Notebook code in source control ✅ ✅ @@ -36,14 +36,14 @@ Workflow structure in source control ✅ Retry from beginning ✅ ✅ Retry single task ✅ ✅ Task groups within Workflows ✅ -Trigger workflows from other DAGs ✅ +Trigger workflows from other Dags ✅ Workflow-level parameters ✅ ======================================= ============================================= ================================= Examples -------- -Example of what a DAG looks like with a DatabricksWorkflowTaskGroup +Example of what a Dag looks like with a DatabricksWorkflowTaskGroup ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. exampleinclude:: /../../databricks/tests/system/databricks/example_databricks_workflow.py :language: python @@ -53,13 +53,13 @@ Example of what a DAG looks like with a DatabricksWorkflowTaskGroup With this example, Airflow will produce a job named ``.test_workflow__`` that will run task ``notebook_1`` and then ``notebook_2``. The job will be created in the databricks workspace if it does not already exist. If the job already exists, it will be updated to match -the workflow defined in the DAG. +the workflow defined in the Dag. The following image displays the resulting Databricks Workflow in the Airflow UI (based on the above example provided) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. image:: ../img/databricks_workflow_task_group_airflow_graph_view.png -The corresponding Databricks Workflow in the Databricks UI for the run triggered from the Airflow DAG is depicted below +The corresponding Databricks Workflow in the Databricks UI for the run triggered from the Airflow Dag is depicted below ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. image:: ../img/workflow_run_databricks_graph_view.png diff --git a/providers/dbt/cloud/docs/index.rst b/providers/dbt/cloud/docs/index.rst index ce0409d66535f..3c9d3163f5a8e 100644 --- a/providers/dbt/cloud/docs/index.rst +++ b/providers/dbt/cloud/docs/index.rst @@ -60,7 +60,7 @@ an Integrated Developer Environment (IDE). :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/dbt/cloud/docs/operators.rst b/providers/dbt/cloud/docs/operators.rst index 2626034a384cd..fe4d7b0661973 100644 --- a/providers/dbt/cloud/docs/operators.rst +++ b/providers/dbt/cloud/docs/operators.rst @@ -65,7 +65,7 @@ configurations or overrides for the job run such as ``threads_override``, ``gene The below examples demonstrate how to instantiate DbtCloudRunJobOperator tasks with both synchronous and asynchronous waiting for run termination, respectively. To note, the ``account_id`` for the operators is -referenced within the ``default_args`` of the example DAG. +referenced within the ``default_args`` of the example Dag. .. 
exampleinclude:: /../tests/system/dbt/cloud/example_dbt_cloud.py :language: python @@ -104,7 +104,7 @@ functionality available with the :class:`~airflow.sensors.base.BaseSensorOperato In the example below, the ``run_id`` value in the example below comes from the output of a previous DbtCloudRunJobOperator task by utilizing the ``.output`` property exposed for all operators. Also, to note, -the ``account_id`` for the task is referenced within the ``default_args`` of the example DAG. +the ``account_id`` for the task is referenced within the ``default_args`` of the example Dag. .. exampleinclude:: /../tests/system/dbt/cloud/example_dbt_cloud.py :language: python diff --git a/providers/dingding/docs/index.rst b/providers/dingding/docs/index.rst index 16e0b5bba46ab..9acdd76cd30ca 100644 --- a/providers/dingding/docs/index.rst +++ b/providers/dingding/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/docker/docs/index.rst b/providers/docker/docs/index.rst index 42473e25851c3..eb4df6623e0a5 100644 --- a/providers/docker/docs/index.rst +++ b/providers/docker/docs/index.rst @@ -49,7 +49,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/edge3/docs/architecture.rst b/providers/edge3/docs/architecture.rst index 515f0a165b1d9..286368f3d5e8d 100644 --- a/providers/edge3/docs/architecture.rst +++ b/providers/edge3/docs/architecture.rst @@ -37,7 +37,7 @@ deployed outside of the central Airflow cluster is connected via HTTP(s) to the scheduler[label="Scheduler"] api[label="API server"] database[label="Database"] - dag[label="DAG files"] + dag[label="Dag files"] api->workers api->database @@ -52,7 +52,7 @@ deployed outside of the central Airflow cluster is connected via HTTP(s) to the label="Edge site"; {rank = same; edge_worker; edge_dag} edge_worker[label="Edge Worker"] - edge_dag[label="DAG files (Remote copy)"] + edge_dag[label="Dag files (Remote copy)"] edge_worker->edge_dag } @@ -63,7 +63,7 @@ deployed outside of the central Airflow cluster is connected via HTTP(s) to the * **Workers** - Execute the assigned tasks - most standard setup has local or centralized workers, e.g. via Celery * **Edge Workers** - Special workers which pull tasks via HTTP(s) as provided as feature via this provider package * **Scheduler** - Responsible for adding the necessary tasks to the queue. The EdgeExecutor is running as a module inside the scheduler. -* **API server** - HTTP REST API Server provides access to DAG/task status information. The required end-points are +* **API server** - HTTP REST API Server provides access to Dag/task status information. The required end-points are provided by the Edge provider plugin. The Edge Worker uses this API to pull tasks and send back the results. * **Database** - Contains information about the status of tasks, Dags, Variables, connections, etc. diff --git a/providers/edge3/docs/edge_executor.rst b/providers/edge3/docs/edge_executor.rst index 7bc465cfaaaa7..9cf853f6636ea 100644 --- a/providers/edge3/docs/edge_executor.rst +++ b/providers/edge3/docs/edge_executor.rst @@ -52,7 +52,7 @@ infrastructure is available). When using EdgeExecutor in addition to other executors and EdgeExecutor not being the default executor (that is to say the first one in the list of executors), be reminded to also define EdgeExecutor -as the executor at task or dag level in addition to the queues you are targeting. 
+as the executor at task or Dag level in addition to the queues you are targeting. For more details on multiple executors please see :ref:`apache-airflow:using-multiple-executors-concurrently`. .. _edge_executor:concurrency_slots: @@ -80,12 +80,12 @@ Here is an example setting pool_slots for a task: import pendulum - from airflow import DAG + from airflow import Dag from airflow.decorators import task from airflow.example_dags.libs.helper import print_stuff from airflow.settings import AIRFLOW_HOME - with DAG( + with Dag( dag_id="example_edge_pool_slots", schedule=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), diff --git a/providers/edge3/docs/index.rst b/providers/edge3/docs/index.rst index 1e27692fb616c..88b101e00ccff 100644 --- a/providers/edge3/docs/index.rst +++ b/providers/edge3/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs <_api/airflow/providers/edge3/example_dags/index> + Example Dags <_api/airflow/providers/edge3/example_dags/index> PyPI Repository Installing from sources diff --git a/providers/edge3/docs/install_on_windows.rst b/providers/edge3/docs/install_on_windows.rst index 7144800e9b200..8ca3607c687fa 100644 --- a/providers/edge3/docs/install_on_windows.rst +++ b/providers/edge3/docs/install_on_windows.rst @@ -35,8 +35,8 @@ To setup a instance of Edge Worker on Windows, you need to follow the steps belo 4. Activate the virtual environment via: ``venv\Scripts\activate.bat`` 5. Install Edge provider using the Airflow constraints as of your Airflow version via ``pip install apache-airflow-providers-edge3 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.10.5/constraints-3.12.txt``. -6. Create a new folder ``dags`` in ``C:\Airflow`` and copy the relevant DAG files in it. - (At least the DAG files which should be executed on the edge alongside the dependencies.) +6. Create a new folder ``dags`` in ``C:\Airflow`` and copy the relevant Dag files in it. + (At least the Dag files which should be executed on the edge alongside the dependencies.) 7. Collect needed parameters from your running Airflow backend, at least the following: - ``api_auth`` / ``jwt_token``: The shared secret key between the api-server and the Edge Worker @@ -65,7 +65,7 @@ To setup a instance of Edge Worker on Windows, you need to follow the steps belo @REM Add if needed: set https_proxy=http://my-company-proxy.com:3128 airflow edge worker --concurrency 4 --queues windows -9. Note on logs: Per default the DAG Run ID is used as path in the log structure and per default the date and time +9. Note on logs: Per default the Dag Run ID is used as path in the log structure and per default the date and time is contained in the Run ID. Windows fails with a colon (":") in a file or folder name and this also the Edge Worker fails. Therefore you might consider changing the config ``AIRFLOW__LOGGING__LOG_FILENAME_TEMPLATE`` to avoid the colon. @@ -73,7 +73,7 @@ To setup a instance of Edge Worker on Windows, you need to follow the steps belo Note that the log filename template is resolved on server side and not on the worker side. So you need to make this as a global change. Alternatively for testing purposes only you must use Run IDs without a colon, e.g. set the Run ID manually when - starting a DAG run. + starting a Dag run. 10. Start the worker via: ``start_worker.bat`` Watch the console for errors. -11. Run a DAG as test and see if the result is as expected. +11. Run a Dag as test and see if the result is as expected. 
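Together with the multiple-executor note above, a task can be pinned to the Edge Worker explicitly. A minimal sketch, assuming Airflow 2.10+ with ``EdgeExecutor`` listed in ``[core] executor`` and a worker serving the ``windows`` queue as in the steps above; the task id and callable are illustrative:

.. code-block:: python

    from airflow.providers.standard.operators.python import PythonOperator

    run_on_edge = PythonOperator(
        task_id="run_on_windows_edge_worker",
        python_callable=lambda: print("hello from the edge site"),
        # Both settings are needed when EdgeExecutor is not the default executor:
        executor="EdgeExecutor",  # assumes the executor is registered under this name
        queue="windows",  # matches the --queues value passed to the worker
    )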
diff --git a/providers/elasticsearch/docs/index.rst b/providers/elasticsearch/docs/index.rst index 48f97f04a1a1c..d839468e6c106 100644 --- a/providers/elasticsearch/docs/index.rst +++ b/providers/elasticsearch/docs/index.rst @@ -58,7 +58,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/fab/docs/auth-manager/access-control.rst b/providers/fab/docs/auth-manager/access-control.rst index af46d61077c0d..99f0cb7121e11 100644 --- a/providers/fab/docs/auth-manager/access-control.rst +++ b/providers/fab/docs/auth-manager/access-control.rst @@ -82,12 +82,12 @@ other users. ``Admin`` users have ``Op`` permission plus additional permissions: Custom Roles ''''''''''''' -DAG Level Role +Dag Level Role ^^^^^^^^^^^^^^ -``Admin`` can create a set of roles which are only allowed to view a certain set of DAGs. This is called DAG level access. Each DAG defined in the DAG model table +``Admin`` can create a set of roles which are only allowed to view a certain set of Dags. This is called Dag level access. Each Dag defined in the Dag model table is treated as a ``View`` which has two permissions associated with it (``can_read`` and ``can_edit``. ``can_dag_read`` and ``can_dag_edit`` are deprecated since 2.0.0). -There is a special view called ``DAGs`` (it was called ``all_dags`` in versions 1.10.*) which -allows the role to access all the DAGs. The default ``Admin``, ``Viewer``, ``User``, ``Op`` roles can all access ``DAGs`` view. +There is a special view called ``Dags`` (it was called ``all_dags`` in versions 1.10.*) which +allows the role to access all the Dags. The default ``Admin``, ``Viewer``, ``User``, ``Op`` roles can all access ``Dags`` view. .. image:: /img/add-role.png .. image:: /img/new-role.png @@ -135,15 +135,15 @@ Permissions (each consistent of a resource + action pair) are then added to role There are five default roles: Public, Viewer, User, Op, and Admin. Each one has the permissions of the preceding role, as well as additional permissions. -DAG-level permissions +Dag-level permissions ^^^^^^^^^^^^^^^^^^^^^ -For DAG-level permissions exclusively, access can be controlled at the level of all DAGs or individual DAG objects. +For Dag-level permissions exclusively, access can be controlled at the level of all Dags or individual Dag objects. This includes ``DAGs.can_read``, ``DAGs.can_edit``, ``DAGs.can_delete``, ``DAG Runs.can_read``, ``DAG Runs.can_create``, ``DAG Runs.can_delete``, and ``DAG Runs.menu_access``. -When these permissions are listed, access is granted to users who either have the listed permission or the same permission for the specific DAG being acted upon. -For individual DAGs, the resource name is ``DAG:`` + the DAG ID, or for the DAG Runs resource the resource name is ``DAG Run:``. +When these permissions are listed, access is granted to users who either have the listed permission or the same permission for the specific Dag being acted upon. +For individual Dags, the resource name is ``Dag:`` + the Dag ID, or for the Dag Runs resource the resource name is ``Dag Run:``. -For example, if a user is trying to view DAG information for the ``example_dag_id``, and the endpoint requires ``DAGs.can_read`` access, access will be granted if the user has either ``DAGs.can_read`` or ``DAG:example_dag_id.can_read`` access. 
+For example, if a user is trying to view Dag information for the ``example_dag_id``, and the endpoint requires ``DAGs.can_read`` access, access will be granted if the user has either ``DAGs.can_read`` or ``DAG:example_dag_id.can_read`` access. ================================================================================== ====== ================================================================= ============ Stable API Permissions @@ -156,19 +156,19 @@ Endpoint /connections/{connection_id} DELETE Connections.can_delete Op /connections/{connection_id} PATCH Connections.can_edit Op /connections/{connection_id} GET Connections.can_read Op -/dagSources/{file_token} GET DAG Code.can_read Viewer -/dags GET DAGs.can_read Viewer -/dags/{dag_id} GET DAGs.can_read Viewer -/dags/{dag_id} PATCH DAGs.can_edit User -/dags/{dag_id}/clearTaskInstances PUT DAGs.can_edit, DAG Runs.can_edit, Task Instances.can_edit User -/dags/{dag_id}/details GET DAGs.can_read Viewer -/dags/{dag_id}/tasks GET DAGs.can_read, Task Instances.can_read Viewer -/dags/{dag_id}/tasks/{task_id} GET DAGs.can_read, Task Instances.can_read Viewer -/dags/{dag_id}/dagRuns GET DAGs.can_read, DAG Runs.can_read Viewer -/dags/{dag_id}/dagRuns POST DAGs.can_edit, DAG Runs.can_create User -/dags/{dag_id}/dagRuns/{dag_run_id} DELETE DAGs.can_edit, DAG Runs.can_delete User -/dags/{dag_id}/dagRuns/{dag_run_id} GET DAGs.can_read, DAG Runs.can_read Viewer -/dags/~/dagRuns/list POST DAGs.can_edit, DAG Runs.can_read User +/dagSources/{file_token} GET Dag Code.can_read Viewer +/dags GET Dags.can_read Viewer +/dags/{dag_id} GET Dags.can_read Viewer +/dags/{dag_id} PATCH Dags.can_edit User +/dags/{dag_id}/clearTaskInstances PUT Dags.can_edit, Dag Runs.can_edit, Task Instances.can_edit User +/dags/{dag_id}/details GET Dags.can_read Viewer +/dags/{dag_id}/tasks GET Dags.can_read, Task Instances.can_read Viewer +/dags/{dag_id}/tasks/{task_id} GET Dags.can_read, Task Instances.can_read Viewer +/dags/{dag_id}/dagRuns GET Dags.can_read, Dag Runs.can_read Viewer +/dags/{dag_id}/dagRuns POST Dags.can_edit, Dag Runs.can_create User +/dags/{dag_id}/dagRuns/{dag_run_id} DELETE Dags.can_edit, Dag Runs.can_delete User +/dags/{dag_id}/dagRuns/{dag_run_id} GET Dags.can_read, Dag Runs.can_read Viewer +/dags/~/dagRuns/list POST Dags.can_edit, Dag Runs.can_read User /assets GET Assets.can_read Viewer /assets/{uri} GET Assets.can_read Viewer /assets/events GET Assets.can_read Viewer @@ -184,19 +184,19 @@ Endpoint /pools/{pool_name} GET Pools.can_read Op /pools/{pool_name} PATCH Pools.can_edit Op /providers GET Providers.can_read Op -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances GET DAGs.can_read, DAG Runs.can_read, Task Instances.can_read Viewer -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id} GET DAGs.can_read, DAG Runs.can_read, Task Instances.can_read Viewer -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/links GET DAGs.can_read, DAG Runs.can_read, Task Instances.can_read Viewer -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{task_try_number} GET DAGs.can_read, DAG Runs.can_read, Task Instances.can_read Viewer -/dags/~/dagRuns/~/taskInstances/list POST DAGs.can_edit, DAG Runs.can_read, Task Instances.can_read User +/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances GET Dags.can_read, Dag Runs.can_read, Task Instances.can_read Viewer +/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id} GET Dags.can_read, Dag Runs.can_read, Task Instances.can_read Viewer 
+/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/links GET Dags.can_read, Dag Runs.can_read, Task Instances.can_read Viewer +/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{task_try_number} GET Dags.can_read, Dag Runs.can_read, Task Instances.can_read Viewer +/dags/~/dagRuns/~/taskInstances/list POST Dags.can_edit, Dag Runs.can_read, Task Instances.can_read User /variables GET Variables.can_read Op /variables POST Variables.can_create Op /variables/{variable_key} DELETE Variables.can_delete Op /variables/{variable_key} GET Variables.can_read Op /variables/{variable_key} PATCH Variables.can_edit Op -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries GET DAGs.can_read, DAG Runs.can_read, Viewer +/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries GET Dags.can_read, Dag Runs.can_read, Viewer Task Instances.can_read, XComs.can_read -/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key} GET DAGs.can_read, DAG Runs.can_read, Viewer +/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key} GET Dags.can_read, Dag Runs.can_read, Viewer Task Instances.can_read, XComs.can_read /users GET Users.can_read Admin /users POST Users.can_create Admin @@ -219,54 +219,54 @@ Action Permissions ====================================== ======================================================================= ============ Access homepage Website.can_read Viewer Show Browse menu Browse.menu_access Viewer -Show DAGs menu DAGs.menu_access Viewer -Get DAG stats DAGs.can_read, DAG Runs.can_read Viewer +Show Dags menu Dags.menu_access Viewer +Get Dag stats Dags.can_read, Dag Runs.can_read Viewer Show Task Instances menu Task Instances.menu_access Viewer -Get Task stats DAGs.can_read, DAG Runs.can_read, Task Instances.can_read Viewer -Get last DAG runs DAGs.can_read, DAG Runs.can_read Viewer -Get DAG code DAGs.can_read, DAG Code.can_read Viewer -Get DAG details DAGs.can_read, DAG Runs.can_read Viewer -Show DAG Dependencies menu DAG Dependencies.menu_access Viewer -Get DAG Dependencies DAG Dependencies.can_read Viewer -Get rendered DAG DAGs.can_read, Task Instances.can_read Viewer -Get Logs with metadata DAGs.can_read, Task Instances.can_read, Task Logs.can_read Viewer -Get Log DAGs.can_read, Task Instances.can_read, Task Logs.can_read Viewer -Redirect to external Log DAGs.can_read, Task Instances.can_read, Task Logs.can_read Viewer -Get Task DAGs.can_read, Task Instances.can_read Viewer +Get Task stats Dags.can_read, Dag Runs.can_read, Task Instances.can_read Viewer +Get last Dag runs Dags.can_read, Dag Runs.can_read Viewer +Get Dag code Dags.can_read, Dag Code.can_read Viewer +Get Dag details Dags.can_read, Dag Runs.can_read Viewer +Show Dag Dependencies menu Dag Dependencies.menu_access Viewer +Get Dag Dependencies Dag Dependencies.can_read Viewer +Get rendered Dag Dags.can_read, Task Instances.can_read Viewer +Get Logs with metadata Dags.can_read, Task Instances.can_read, Task Logs.can_read Viewer +Get Log Dags.can_read, Task Instances.can_read, Task Logs.can_read Viewer +Redirect to external Log Dags.can_read, Task Instances.can_read, Task Logs.can_read Viewer +Get Task Dags.can_read, Task Instances.can_read Viewer Show XCom menu XComs.menu_access Op -Get XCom DAGs.can_read, Task Instances.can_read, XComs.can_read Viewer +Get XCom Dags.can_read, Task Instances.can_read, XComs.can_read Viewer Create XCom XComs.can_create Op Delete XCom XComs.can_delete Op -Triggers Task Instance DAGs.can_edit, Task 
Instances.can_create User -Delete DAG DAGs.can_delete User -Show DAG Runs menu DAG Runs.menu_access Viewer -Trigger DAG run DAGs.can_edit, DAG Runs.can_create User -Clear DAG DAGs.can_edit, Task Instances.can_delete User -Clear DAG Run DAGs.can_edit, Task Instances.can_delete User -Mark DAG as blocked DAGS.can_edit, DAG Runs.can_read User -Mark DAG Run as failed DAGS.can_edit, DAG Runs.can_edit User -Mark DAG Run as success DAGS.can_edit, DAG Runs.can_edit User -Mark Task as failed DAGs.can_edit, Task Instances.can_edit User -Mark Task as success DAGs.can_edit, Task Instances.can_edit User -Get DAG as tree DAGs.can_read, Task Instances.can_read, Viewer +Triggers Task Instance Dags.can_edit, Task Instances.can_create User +Delete Dag Dags.can_delete User +Show Dag Runs menu Dag Runs.menu_access Viewer +Trigger Dag run Dags.can_edit, Dag Runs.can_create User +Clear Dag Dags.can_edit, Task Instances.can_delete User +Clear Dag Run Dags.can_edit, Task Instances.can_delete User +Mark Dag as blocked DAGS.can_edit, Dag Runs.can_read User +Mark Dag Run as failed DAGS.can_edit, Dag Runs.can_edit User +Mark Dag Run as success DAGS.can_edit, Dag Runs.can_edit User +Mark Task as failed Dags.can_edit, Task Instances.can_edit User +Mark Task as success Dags.can_edit, Task Instances.can_edit User +Get Dag as tree Dags.can_read, Task Instances.can_read, Viewer Task Logs.can_read -Get DAG as graph DAGs.can_read, Task Instances.can_read, Viewer +Get Dag as graph Dags.can_read, Task Instances.can_read, Viewer Task Logs.can_read -Get DAG as duration graph DAGs.can_read, Task Instances.can_read Viewer -Show all tries DAGs.can_read, Task Instances.can_read Viewer -Show landing times DAGs.can_read, Task Instances.can_read Viewer -Toggle DAG paused status DAGs.can_edit User -Show Gantt Chart DAGs.can_read, Task Instances.can_read Viewer -Get external links DAGs.can_read, Task Instances.can_read Viewer -Show Task Instances DAGs.can_read, Task Instances.can_read Viewer +Get Dag as duration graph Dags.can_read, Task Instances.can_read Viewer +Show all tries Dags.can_read, Task Instances.can_read Viewer +Show landing times Dags.can_read, Task Instances.can_read Viewer +Toggle Dag paused status Dags.can_edit User +Show Gantt Chart Dags.can_read, Task Instances.can_read Viewer +Get external links Dags.can_read, Task Instances.can_read Viewer +Show Task Instances Dags.can_read, Task Instances.can_read Viewer Show Configurations menu Configurations.menu_access Op Show Configs Configurations.can_read Viewer -Delete multiple records DAGs.can_edit User -Set Task Instance as running DAGs.can_edit User -Set Task Instance as failed DAGs.can_edit User -Set Task Instance as success DAGs.can_edit User -Set Task Instance as up_for_retry DAGs.can_edit User -Autocomplete DAGs.can_read Viewer +Delete multiple records Dags.can_edit User +Set Task Instance as running Dags.can_edit User +Set Task Instance as failed Dags.can_edit User +Set Task Instance as success Dags.can_edit User +Set Task Instance as up_for_retry Dags.can_edit User +Autocomplete Dags.can_read Viewer Show Asset menu Assets.menu_access Viewer Show Assets Assets.can_read Viewer Show Docs menu Docs.menu_access Viewer @@ -305,19 +305,19 @@ Delete Users Users.can_delete Reset user Passwords Passwords.can_edit, Passwords.can_read Admin ====================================== ======================================================================= ============ -These DAG-level controls can be set directly through the UI / CLI, or encoded in the dags themselves through the 
access_control arg. +These Dag-level controls can be set directly through the UI / CLI, or encoded in the Dags themselves through the access_control arg. -Order of precedence for DAG-level permissions +Order of precedence for Dag-level permissions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Since DAG-level access control can be configured in multiple places, conflicts are inevitable and a clear resolution strategy is required. As a result, -Airflow considers the ``access_control`` argument supplied on a DAG itself to be completely authoritative if present, which has a few effects: +Since Dag-level access control can be configured in multiple places, conflicts are inevitable and a clear resolution strategy is required. As a result, +Airflow considers the ``access_control`` argument supplied on a Dag itself to be completely authoritative if present, which has a few effects: -Setting ``access_control`` on a DAG will overwrite any previously existing DAG-level permissions if it is any value other than ``None``: +Setting ``access_control`` on a Dag will overwrite any previously existing Dag-level permissions if it is any value other than ``None``: .. code-block:: python - DAG( + Dag( dag_id="example_fine_grained_access", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), access_control={ @@ -325,38 +325,38 @@ Setting ``access_control`` on a DAG will overwrite any previously existing DAG-l }, ) -It's also possible to add DAG Runs resource permissions in a similar way, but explicit adding the resource name to identify which resource the permissions are for: +It's also possible to add Dag Runs resource permissions in a similar way, by explicitly adding the resource name to identify which resource the permissions are for: .. code-block:: python - DAG( + Dag( dag_id="example_fine_grained_access", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), access_control={ - "Viewer": {"DAGs": {"can_edit", "can_read", "can_delete"}, "DAG Runs": {"can_create"}}, + "Viewer": {"Dags": {"can_edit", "can_read", "can_delete"}, "Dag Runs": {"can_create"}}, }, ) -This also means that setting ``access_control={}`` will wipe any existing DAG-level permissions for a given DAG from the DB: +This also means that setting ``access_control={}`` will wipe any existing Dag-level permissions for a given Dag from the DB: .. code-block:: python - DAG( + Dag( dag_id="example_no_fine_grained_access", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), access_control={}, ) -Conversely, removing the access_control block from a DAG altogether (or setting it to ``None``) won't make any changes and can leave dangling permissions. +Conversely, removing the access_control block from a Dag altogether (or setting it to ``None``) won't make any changes and can leave dangling permissions. .. code-block:: python - DAG( + Dag( dag_id="example_indifferent_to_fine_grained_access", start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), ) -In the case that there is no ``access_control`` defined on the DAG itself, Airflow will defer to existing permissions defined in the DB, which -may have been set through the UI, CLI or by previous access_control args on the DAG in question. +In the case that there is no ``access_control`` defined on the Dag itself, Airflow will defer to existing permissions defined in the DB, which +may have been set through the UI, CLI or by previous access_control args on the Dag in question.
In all cases, system-wide roles such as ``Can edit on DAG`` take precedence over dag-level access controls, such that they can be considered ``Can edit on DAG: *`` diff --git a/providers/ftp/docs/index.rst b/providers/ftp/docs/index.rst index 6fc7c5bd007b2..ba9edef40622e 100644 --- a/providers/ftp/docs/index.rst +++ b/providers/ftp/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/github/docs/index.rst b/providers/github/docs/index.rst index 291fcfbd7f489..b4298961f31f6 100644 --- a/providers/github/docs/index.rst +++ b/providers/github/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/google/docs/deprecation-policy.rst b/providers/google/docs/deprecation-policy.rst index bde3587e78c82..e6a00e9f535d4 100644 --- a/providers/google/docs/deprecation-policy.rst +++ b/providers/google/docs/deprecation-policy.rst @@ -30,7 +30,7 @@ Versioning of the package As mentioned in `Airflow's release process and version policy `__ Google provider package (and others) should follow SemVer, meaning that any breaking changes should be released together with bumping major version of the package. -The change is considered to be a breaking if a DAG that was working before stops to work after the change. +A change is considered breaking if a Dag that was working before stops working after the change. Deprecation Procedure ````````````````````` @@ -55,4 +55,4 @@ The entire procedure of deprecating (either method, parameter or operator) consi Additional Considerations ````````````````````````` - - By default all deprecations should allow a 6 months time period until they will be removed and not available. This period will give Airflow users enough time and flexibility to update their DAGs before actual removal happens. On a case by case basis this period can be adjusted given specific circumstances (e.g. in case deprecation is because of underlying API sunset which can happen earlier than in 6 months). + - By default, all deprecations should allow a 6-month period before the deprecated feature is removed and becomes unavailable. This period gives Airflow users enough time and flexibility to update their Dags before the actual removal happens. On a case-by-case basis this period can be adjusted given specific circumstances (e.g. when the deprecation is caused by an underlying API sunset, which can happen earlier than in 6 months). diff --git a/providers/google/docs/example-dags.rst b/providers/google/docs/example-dags.rst index 6c504998d75c6..e3c3d57628574 100644 --- a/providers/google/docs/example-dags.rst +++ b/providers/google/docs/example-dags.rst @@ -15,9 +15,9 @@ specific language governing permissions and limitations under the License.
-Example DAGs +Example Dags ============ -You can learn how to use Google integrations by analyzing the source code of the example DAGs: +You can learn how to use Google integrations by analyzing the source code of the example Dags: * `Google Ads `__ * `Google Cloud `__ diff --git a/providers/google/docs/index.rst b/providers/google/docs/index.rst index 95a4681c9fc63..d1db4d8e61f1d 100644 --- a/providers/google/docs/index.rst +++ b/providers/google/docs/index.rst @@ -60,7 +60,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/google/docs/operators/cloud/cloud_composer.rst b/providers/google/docs/operators/cloud/cloud_composer.rst index 12bb751f75418..cd0ada3c38f8f 100644 --- a/providers/google/docs/operators/cloud/cloud_composer.rst +++ b/providers/google/docs/operators/cloud/cloud_composer.rst @@ -178,10 +178,10 @@ or you can define the same operator in the deferrable mode: :start-after: [START howto_operator_run_airflow_cli_command_deferrable_mode] :end-before: [END howto_operator_run_airflow_cli_command_deferrable_mode] -Check if a DAG run has completed +Check if a Dag run has completed -------------------------------- -You can use sensor that checks if a DAG run has completed in your environments, use: +To check if a Dag run has completed in your environments, use: :class:`~airflow.providers.google.cloud.sensors.cloud_composer.CloudComposerDAGRunSensor` .. exampleinclude:: /../../google/tests/system/google/cloud/composer/example_cloud_composer.py diff --git a/providers/google/docs/operators/cloud/dataprep.rst b/providers/google/docs/operators/cloud/dataprep.rst index ea6025e4902d9..4cb0270b32c6f 100644 --- a/providers/google/docs/operators/cloud/dataprep.rst +++ b/providers/google/docs/operators/cloud/dataprep.rst @@ -32,7 +32,7 @@ You can check :doc:`apache-airflow:howto/connection` The DataprepRunJobGroupOperator will run specified job. Operator required a recipe id. To identify the recipe id please use `API documentation for runJobGroup `_ E.g. if the URL is /flows/10?recipe=7, the recipe id is 7. The recipe cannot be created via this operator. It can be created only via UI which is available `here `_. -Some of parameters can be override by DAG's body request. How to do it is shown in example dag. +Some parameters can be overridden by the Dag's body request. How to do it is shown in the example Dag. See following example: Set values for these fields: diff --git a/providers/google/docs/operators/cloud/functions.rst b/providers/google/docs/operators/cloud/functions.rst index dd7c570f82161..9efce085e0037 100644 --- a/providers/google/docs/operators/cloud/functions.rst +++ b/providers/google/docs/operators/cloud/functions.rst @@ -74,7 +74,7 @@ For parameter definition, take a look at Arguments """"""""" -When a DAG is created, the default_args dictionary can be used to pass +When a Dag is created, the default_args dictionary can be used to pass arguments common with other tasks: ..
exampleinclude:: /../../google/tests/system/google/cloud/cloud_functions/example_functions.py diff --git a/providers/google/docs/operators/cloud/gcs.rst b/providers/google/docs/operators/cloud/gcs.rst index fdd337cf3b3eb..266b9afdb0f24 100644 --- a/providers/google/docs/operators/cloud/gcs.rst +++ b/providers/google/docs/operators/cloud/gcs.rst @@ -42,8 +42,8 @@ GCSTimeSpanFileTransformOperator Use the :class:`~airflow.providers.google.cloud.operators.gcs.GCSTimeSpanFileTransformOperator` to transform files that were modified in a specific time span (the data interval). -The time span is defined by the time span's start and end timestamps. If a DAG -does not have a *next* DAG instance scheduled, the time span end infinite, meaning the operator +The time span is defined by the time span's start and end timestamps. If a Dag +does not have a *next* Dag instance scheduled, the time span end infinite, meaning the operator processes all files older than ``data_interval_start``. .. exampleinclude:: /../../google/tests/system/google/cloud/gcs/example_gcs_transform_timespan.py diff --git a/providers/google/docs/operators/cloud/index.rst b/providers/google/docs/operators/cloud/index.rst index 7635148040a1b..7d70c03e773ec 100644 --- a/providers/google/docs/operators/cloud/index.rst +++ b/providers/google/docs/operators/cloud/index.rst @@ -29,4 +29,4 @@ Google Cloud Operators .. note:: You can learn how to use Google Cloud integrations by analyzing the - `source code `_ of the particular example DAGs. + `source code `_ of the particular example Dags. diff --git a/providers/google/docs/operators/cloud/kubernetes_engine.rst b/providers/google/docs/operators/cloud/kubernetes_engine.rst index 8a3176c0ce1b5..e1d661153ae05 100644 --- a/providers/google/docs/operators/cloud/kubernetes_engine.rst +++ b/providers/google/docs/operators/cloud/kubernetes_engine.rst @@ -181,7 +181,7 @@ Use of XCom We can enable the usage of :ref:`XCom ` on the operator. This works by launching a sidecar container with the pod specified. The sidecar is automatically mounted when the XCom usage is specified and its mount point is the path ``/airflow/xcom``. To provide values to the XCom, ensure your Pod writes it into a file called -``return.json`` in the sidecar. The contents of this can then be used downstream in your DAG. +``return.json`` in the sidecar. The contents of this can then be used downstream in your Dag. Here is an example of it being used: .. exampleinclude:: /../../google/tests/system/google/cloud/kubernetes_engine/example_kubernetes_engine.py diff --git a/providers/google/docs/operators/marketing_platform/index.rst b/providers/google/docs/operators/marketing_platform/index.rst index 8f2a33b82327e..4cfe153b19c5e 100644 --- a/providers/google/docs/operators/marketing_platform/index.rst +++ b/providers/google/docs/operators/marketing_platform/index.rst @@ -29,4 +29,4 @@ Google Marketing Platform Operators .. note:: You can learn how to use Google Cloud integrations by analyzing the - `source code `_ of the particular example DAGs. + `source code `_ of the particular example Dags. diff --git a/providers/hashicorp/docs/secrets-backends/hashicorp-vault.rst b/providers/hashicorp/docs/secrets-backends/hashicorp-vault.rst index 08de6270de367..7c6ad07c26d4f 100644 --- a/providers/hashicorp/docs/secrets-backends/hashicorp-vault.rst +++ b/providers/hashicorp/docs/secrets-backends/hashicorp-vault.rst @@ -235,7 +235,7 @@ Using multiple mount points You can use multiple mount points to store your secrets. 
For example, you might want to store the Airflow instance configurations in one Vault KV engine only accessible by your Airflow deployment tools, while storing the variables and connections in another KV engine -available to your DAGs, in order to grant them more specific Vault ACLs. +available to your Dags, in order to grant them more specific Vault ACLs. In order to do this, you will need to setup you configuration this way: diff --git a/providers/http/docs/index.rst b/providers/http/docs/index.rst index 9c911daf76532..edf59f8fc7634 100644 --- a/providers/http/docs/index.rst +++ b/providers/http/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/influxdb/docs/index.rst b/providers/influxdb/docs/index.rst index cabe1e2c8eb1f..4189583636f29 100644 --- a/providers/influxdb/docs/index.rst +++ b/providers/influxdb/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/jdbc/docs/index.rst b/providers/jdbc/docs/index.rst index e47da3f571824..80bdd2cf5669c 100644 --- a/providers/jdbc/docs/index.rst +++ b/providers/jdbc/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/jenkins/docs/index.rst b/providers/jenkins/docs/index.rst index 65c393dba180b..81caec566c488 100644 --- a/providers/jenkins/docs/index.rst +++ b/providers/jenkins/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/microsoft/azure/docs/index.rst b/providers/microsoft/azure/docs/index.rst index e0845db6a921c..3da4889a6f774 100644 --- a/providers/microsoft/azure/docs/index.rst +++ b/providers/microsoft/azure/docs/index.rst @@ -61,7 +61,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/microsoft/mssql/docs/index.rst b/providers/microsoft/mssql/docs/index.rst index a6ee283bb9c05..93848ec370ee0 100644 --- a/providers/microsoft/mssql/docs/index.rst +++ b/providers/microsoft/mssql/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/microsoft/mssql/docs/operators.rst b/providers/microsoft/mssql/docs/operators.rst index f114eeceb89d8..029070c9f2c25 100644 --- a/providers/microsoft/mssql/docs/operators.rst +++ b/providers/microsoft/mssql/docs/operators.rst @@ -102,10 +102,10 @@ To find the countries in Asian continent: :end-before: [END mssql_operator_howto_guide_params_passing_get_query] -The complete SQLExecuteQueryOperator DAG to connect to MSSQL +The complete SQLExecuteQueryOperator Dag to connect to MSSQL ------------------------------------------------------------ -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../tests/system/microsoft/mssql/example_mssql.py :language: python diff --git a/providers/microsoft/psrp/docs/operators/index.rst b/providers/microsoft/psrp/docs/operators/index.rst index 176cf03967db5..30403d77a4895 100644 --- a/providers/microsoft/psrp/docs/operators/index.rst +++ b/providers/microsoft/psrp/docs/operators/index.rst @@ -103,7 +103,7 @@ the value and make it available in the remote session as a type. 
This ensures for example that the value is not accidentally logged. -Using the template filter requires the DAG to be configured to +Using the template filter requires the Dag to be configured to :ref:`render fields as native objects ` (the default is to coerce all values into strings which won't work here because we need a value diff --git a/providers/microsoft/winrm/docs/index.rst b/providers/microsoft/winrm/docs/index.rst index b3dd3f74a5587..0b479cafadc43 100644 --- a/providers/microsoft/winrm/docs/index.rst +++ b/providers/microsoft/winrm/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/mysql/docs/index.rst b/providers/mysql/docs/index.rst index 81fa1b30e3dc4..bc38b16ef4e33 100644 --- a/providers/mysql/docs/index.rst +++ b/providers/mysql/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/neo4j/docs/index.rst b/providers/neo4j/docs/index.rst index 284c903aa52c6..e69ca43acaa4c 100644 --- a/providers/neo4j/docs/index.rst +++ b/providers/neo4j/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/odbc/docs/index.rst b/providers/odbc/docs/index.rst index 30f8285de5c45..506807a5da551 100644 --- a/providers/odbc/docs/index.rst +++ b/providers/odbc/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/openlineage/docs/guides/developer.rst b/providers/openlineage/docs/guides/developer.rst index c4cf5348064ef..0de483527ddcf 100644 --- a/providers/openlineage/docs/guides/developer.rst +++ b/providers/openlineage/docs/guides/developer.rst @@ -180,7 +180,7 @@ Custom Extractors This approach is recommended when dealing with Operators that you can not modify (f.e. third party providers), but still want the lineage to be extracted from them. If you want to extract lineage from your own Operators, you may prefer directly implementing OpenLineage methods as described in :ref:`openlineage_methods:openlineage`. -This approach works by detecting which Airflow Operators your DAG is using, and extracting lineage data from them using corresponding Extractors class. +This approach works by detecting which Airflow Operators your Dag is using, and extracting lineage data from them using corresponding Extractors class. Interface ^^^^^^^^^ @@ -476,19 +476,19 @@ a string of semicolon separated full import path to the functions. Job Hierarchy ============= -Apache Airflow features an inherent job hierarchy: DAGs, large and independently schedulable units, comprise smaller, executable tasks. +Apache Airflow features an inherent job hierarchy: Dags, large and independently schedulable units, comprise smaller, executable tasks. OpenLineage reflects this structure in its Job Hierarchy model. -- Upon DAG scheduling, a START event is emitted. +- Upon Dag scheduling, a START event is emitted. - Subsequently, following Airflow's task order, each task triggers: - START events at TaskInstance start. - COMPLETE/FAILED events upon completion. -- Finally, upon DAG termination, a completion event (COMPLETE or FAILED) is emitted. +- Finally, upon Dag termination, a completion event (COMPLETE or FAILED) is emitted. -TaskInstance events' ParentRunFacet references the originating DAG run. 
+TaskInstance events' ParentRunFacet references the originating Dag run. .. _troubleshooting:openlineage: diff --git a/providers/openlineage/docs/guides/structure.rst b/providers/openlineage/docs/guides/structure.rst index 6d3d9584eb764..91372d9d793da 100644 --- a/providers/openlineage/docs/guides/structure.rst +++ b/providers/openlineage/docs/guides/structure.rst @@ -47,9 +47,9 @@ The metadata collected can answer questions like: - Are there redundant data processes that can be optimized or removed? - What data dependencies exist for this critical report? -Understanding complex inter-DAG dependencies and providing up-to-date runtime visibility into DAG execution can be challenging. -OpenLineage integrates with Airflow to collect DAG lineage metadata so that inter-DAG dependencies are easily maintained -and viewable via a lineage graph, while also keeping a catalog of historical runs of DAGs. +Understanding complex inter-Dag dependencies and providing up-to-date runtime visibility into Dag execution can be challenging. +OpenLineage integrates with Airflow to collect Dag lineage metadata so that inter-Dag dependencies are easily maintained +and viewable via a lineage graph, while also keeping a catalog of historical runs of Dags. For OpenLineage backend that will receive events, you can use `Marquez `_ @@ -60,8 +60,8 @@ OpenLineage integration implements `AirflowPlugin `_. -The ``OpenLineageListener`` is then called by Airflow when certain events happen - when DAGs or TaskInstances start, complete or fail. -For DAGs, the listener runs in Airflow Scheduler. For TaskInstances, the listener runs on Airflow Worker. +The ``OpenLineageListener`` is then called by Airflow when certain events happen - when Dags or TaskInstances start, complete or fail. +For Dags, the listener runs in Airflow Scheduler. For TaskInstances, the listener runs on Airflow Worker. When TaskInstance listener method gets called, the ``OpenLineageListener`` constructs metadata like event's unique ``run_id`` and event time. Then, it tries to extract metadata from Airflow Operators as described in :ref:`extraction_precedence:openlineage`. diff --git a/providers/openlineage/docs/guides/user.rst b/providers/openlineage/docs/guides/user.rst index 23ab4e0b74413..7e5f13275e705 100644 --- a/providers/openlineage/docs/guides/user.rst +++ b/providers/openlineage/docs/guides/user.rst @@ -24,7 +24,7 @@ Using OpenLineage integration OpenLineage is an open framework for data lineage collection and analysis. At its core is an extensible specification that systems can use to interoperate with lineage metadata. `Check out OpenLineage docs `_. -**No change to user DAG files is required to use OpenLineage**. Basic configuration is needed so that OpenLineage knows where to send events. +**No change to user Dag files is required to use OpenLineage**. Basic configuration is needed so that OpenLineage knows where to send events. Quickstart ========== @@ -57,7 +57,7 @@ This example is a basic demonstration of OpenLineage setup. AIRFLOW__OPENLINEAGE__TRANSPORT='{"type": "http", "url": "http://example.com:5000", "endpoint": "api/v1/lineage"}' -3. **That's it !** OpenLineage events should be sent to the configured backend when DAGs are run. +3. **That's it !** OpenLineage events should be sent to the configured backend when Dags are run. Usage ===== @@ -352,10 +352,10 @@ reproducing your environment setup by setting ``debug_mode`` option to ``true`` By setting this variable to true, OpenLineage integration may log and emit extensive details. 
It should only be enabled temporary for debugging purposes. -Enabling OpenLineage on DAG/task level +Enabling OpenLineage on Dag/task level ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -One can selectively enable OpenLineage for specific DAGs and tasks by using the ``selective_enable`` policy. +One can selectively enable OpenLineage for specific Dags and tasks by using the ``selective_enable`` policy. To enable this policy, set the ``selective_enable`` option to True in the [openlineage] section of your Airflow configuration file: .. code-block:: ini @@ -371,47 +371,47 @@ To enable this policy, set the ``selective_enable`` option to True in the [openl While ``selective_enable`` enables selective control, the ``disabled`` :ref:`option ` still has precedence. -If you set ``disabled`` to True in the configuration, OpenLineage will be disabled for all DAGs and tasks regardless of the ``selective_enable`` setting. +If you set ``disabled`` to True in the configuration, OpenLineage will be disabled for all Dags and tasks regardless of the ``selective_enable`` setting. Once the ``selective_enable`` policy is enabled, you can choose to enable OpenLineage -for individual DAGs and tasks using the ``enable_lineage`` and ``disable_lineage`` functions. +for individual Dags and tasks using the ``enable_lineage`` and ``disable_lineage`` functions. -1. Enabling Lineage on a DAG: +1. Enabling Lineage on a Dag: .. code-block:: python from airflow.providers.openlineage.utils.selective_enable import disable_lineage, enable_lineage - with enable_lineage(DAG(...)): - # Tasks within this DAG will have lineage tracking enabled + with enable_lineage(Dag(...)): + # Tasks within this Dag will have lineage tracking enabled MyOperator(...) AnotherOperator(...) 2. Enabling Lineage on a Task: -While enabling lineage on a DAG implicitly enables it for all tasks within that DAG, you can still selectively disable it for specific tasks: +While enabling lineage on a Dag implicitly enables it for all tasks within that Dag, you can still selectively disable it for specific tasks: .. code-block:: python from airflow.providers.openlineage.utils.selective_enable import disable_lineage, enable_lineage - with DAG(...) as dag: + with Dag(...) as dag: t1 = MyOperator(...) t2 = AnotherOperator(...) - # Enable lineage for the entire DAG + # Enable lineage for the entire Dag enable_lineage(dag) # Disable lineage for task t1 disable_lineage(t1) -Enabling lineage on the DAG level automatically enables it for all tasks within that DAG unless explicitly disabled per task. +Enabling lineage on the Dag level automatically enables it for all tasks within that Dag unless explicitly disabled per task. -Enabling lineage on the task level implicitly enables lineage on its DAG. +Enabling lineage on the task level implicitly enables lineage on its Dag. This is because each emitting task sends a `ParentRunFacet `_, -which requires the DAG-level lineage to be enabled in some OpenLineage backend systems. -Disabling DAG-level lineage while enabling task-level lineage might cause errors or inconsistencies. +which requires the Dag-level lineage to be enabled in some OpenLineage backend systems. +Disabling Dag-level lineage while enabling task-level lineage might cause errors or inconsistencies. .. 
_options:spark_inject_parent_job_info: diff --git a/providers/opsgenie/docs/index.rst b/providers/opsgenie/docs/index.rst index 71c55fb3e8aa1..9a0bfe82c8dec 100644 --- a/providers/opsgenie/docs/index.rst +++ b/providers/opsgenie/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/oracle/docs/index.rst b/providers/oracle/docs/index.rst index afe0b876000e6..7d1f67957a88e 100644 --- a/providers/oracle/docs/index.rst +++ b/providers/oracle/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs <_api/airflow/providers/oracle/example_dags/index> + Example Dags <_api/airflow/providers/oracle/example_dags/index> PyPI Repository Installing from sources diff --git a/providers/pagerduty/docs/notifications/pagerduty_notifier_howto_guide.rst b/providers/pagerduty/docs/notifications/pagerduty_notifier_howto_guide.rst index 658054bd0a5ec..da2ec56a19a12 100644 --- a/providers/pagerduty/docs/notifications/pagerduty_notifier_howto_guide.rst +++ b/providers/pagerduty/docs/notifications/pagerduty_notifier_howto_guide.rst @@ -21,7 +21,7 @@ How-to Guide for Pagerduty notifications Introduction ------------ The Pagerduty notifier (:class:`airflow.providers.pagerduty.notifications.pagerduty.PagerdutyNotifier`) allows users to send -messages to Pagerduty using the various ``on_*_callbacks`` at both the DAG level and Task level. +messages to Pagerduty using the various ``on_*_callbacks`` at both the Dag level and Task level. Example Code: @@ -30,16 +30,16 @@ Example Code: .. code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.pagerduty.notifications.pagerduty import send_pagerduty_notification - with DAG( + with Dag( "pagerduty_notifier", start_date=datetime(2023, 1, 1), on_failure_callback=[ send_pagerduty_notification( - summary="The dag {{ dag.dag_id }} failed", + summary="The Dag {{ dag.dag_id }} failed", severity="critical", source="airflow dag_id: {{dag.dag_id}}", dedup_key="{{dag.dag_id}}-{{ti.task_id}}", diff --git a/providers/papermill/docs/index.rst b/providers/papermill/docs/index.rst index b909ae0f6b29e..4ed449134ae75 100644 --- a/providers/papermill/docs/index.rst +++ b/providers/papermill/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/papermill/docs/operators.rst b/providers/papermill/docs/operators.rst index 400ae7a319aa8..65f9b2f92ec7a 100644 --- a/providers/papermill/docs/operators.rst +++ b/providers/papermill/docs/operators.rst @@ -44,7 +44,7 @@ tagged with parameters the injected cell will be inserted at the top of the note Make sure that you save your notebook somewhere so that Airflow can access it. Papermill supports S3, GCS, Azure and Local. HDFS is **not supported**. -Example DAG +Example Dag ''''''''''' Use the :class:`~airflow.providers.papermill.operators.papermill.PapermillOperator` @@ -56,7 +56,7 @@ to execute a jupyter notebook: :start-after: [START howto_operator_papermill] :end-before: [END howto_operator_papermill] -Example DAG to Verify the message in the notebook: +Example Dag to Verify the message in the notebook: .. 
exampleinclude:: /../../papermill/tests/system/papermill/example_papermill_verify.py :language: python @@ -64,7 +64,7 @@ Example DAG to Verify the message in the notebook: :end-before: [END howto_verify_operator_papermill] -Example DAG to Verify the message in the notebook using a remote jupyter kernel: +Example Dag to Verify the message in the notebook using a remote jupyter kernel: .. exampleinclude:: /../../papermill/tests/system/papermill/example_papermill_remote_verify.py :language: python diff --git a/providers/postgres/docs/index.rst b/providers/postgres/docs/index.rst index d3782ad401ce4..1a47118be9e59 100644 --- a/providers/postgres/docs/index.rst +++ b/providers/postgres/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/postgres/docs/operators.rst b/providers/postgres/docs/operators.rst index f0c7dff3e5533..444ef79828c13 100644 --- a/providers/postgres/docs/operators.rst +++ b/providers/postgres/docs/operators.rst @@ -53,7 +53,7 @@ The code snippets below are based on Airflow-2.0 Dumping SQL statements into your operator isn't quite appealing and will create maintainability pains somewhere down to the road. To prevent this, Airflow offers an elegant solution. This is how it works: you simply create -a directory inside the DAG folder called ``sql`` and then put all the SQL files containing your SQL queries inside it. +a directory inside the Dag folder called ``sql`` and then put all the SQL files containing your SQL queries inside it. Your ``dags/sql/pet_schema.sql`` should like this: @@ -68,7 +68,7 @@ Your ``dags/sql/pet_schema.sql`` should like this: OWNER VARCHAR NOT NULL); -Now let's refactor ``create_pet_table`` in our DAG: +Now let's refactor ``create_pet_table`` in our Dag: .. code-block:: python @@ -187,10 +187,10 @@ sent to the server at connection start. :end-before: [END postgres_sql_execute_query_operator_howto_guide_get_birth_date] -The complete Postgres Operator DAG +The complete Postgres Operator Dag ---------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../postgres/tests/system/postgres/example_postgres.py :language: python diff --git a/providers/presto/docs/index.rst b/providers/presto/docs/index.rst index 8cf74cd64dc40..bef9cba086ec1 100644 --- a/providers/presto/docs/index.rst +++ b/providers/presto/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/redis/docs/index.rst b/providers/redis/docs/index.rst index 873b594c5b68b..48c941eabf6d7 100644 --- a/providers/redis/docs/index.rst +++ b/providers/redis/docs/index.rst @@ -51,7 +51,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/redis/docs/message-queues.rst b/providers/redis/docs/message-queues.rst index 84614e9a1f483..86c020443b1ed 100644 --- a/providers/redis/docs/message-queues.rst +++ b/providers/redis/docs/message-queues.rst @@ -63,7 +63,7 @@ Channels can also be specified via the Queue URI instead of the ``channels`` kwa :end-before: [END extract_channels] -Below is an example of how you can configure an Airflow DAG to be triggered by a message in Redis. +Below is an example of how you can configure an Airflow Dag to be triggered by a message in Redis. .. 
exampleinclude:: /../tests/system/redis/example_dag_message_queue_trigger.py :language: python diff --git a/providers/salesforce/docs/index.rst b/providers/salesforce/docs/index.rst index d26f2f4307860..18c93092425ff 100644 --- a/providers/salesforce/docs/index.rst +++ b/providers/salesforce/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/singularity/docs/index.rst b/providers/singularity/docs/index.rst index a43c2fdb8234b..1793566f1f1fe 100644 --- a/providers/singularity/docs/index.rst +++ b/providers/singularity/docs/index.rst @@ -48,7 +48,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/slack/docs/index.rst b/providers/slack/docs/index.rst index f2e2f5fba314b..f02b0aaaeef27 100644 --- a/providers/slack/docs/index.rst +++ b/providers/slack/docs/index.rst @@ -51,7 +51,7 @@ :caption: References Python API <_api/airflow/providers/slack/index> - Example DAGs + Example Dags .. toctree:: :hidden: diff --git a/providers/slack/docs/notifications/slack_notifier_howto_guide.rst b/providers/slack/docs/notifications/slack_notifier_howto_guide.rst index 3b6a1e7879924..b65899b2c907c 100644 --- a/providers/slack/docs/notifications/slack_notifier_howto_guide.rst +++ b/providers/slack/docs/notifications/slack_notifier_howto_guide.rst @@ -21,7 +21,7 @@ How-to Guide for Slack notifications Introduction ------------ Slack notifier (:class:`airflow.providers.slack.notifications.slack.SlackNotifier`) allows users to send -messages to a slack channel using the various ``on_*_callbacks`` at both the DAG level and Task level +messages to a slack channel using the various ``on_*_callbacks`` at both the Dag level and Task level Example Code: @@ -30,15 +30,15 @@ Example Code: .. code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.slack.notifications.slack import send_slack_notification - with DAG( + with Dag( start_date=datetime(2023, 1, 1), on_success_callback=[ send_slack_notification( - text="The DAG {{ dag.dag_id }} succeeded", + text="The Dag {{ dag.dag_id }} succeeded", channel="#general", username="Airflow", ) diff --git a/providers/slack/docs/notifications/slackwebhook_notifier_howto_guide.rst b/providers/slack/docs/notifications/slackwebhook_notifier_howto_guide.rst index e6ef3ab41409c..9ba2b7efa8ae7 100644 --- a/providers/slack/docs/notifications/slackwebhook_notifier_howto_guide.rst +++ b/providers/slack/docs/notifications/slackwebhook_notifier_howto_guide.rst @@ -22,7 +22,7 @@ Introduction ------------ Slack Incoming Webhook notifier (:class:`airflow.providers.slack.notifications.slack_webhook.SlackWebhookNotifier`) allows users to send messages to a slack channel through `Incoming Webhook `__ -using the various ``on_*_callbacks`` at both the DAG level and Task level +using the various ``on_*_callbacks`` at both the Dag level and Task level Example Code: @@ -31,7 +31,7 @@ Example Code: .. 
code-block:: python from datetime import datetime, timezone - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.slack.notifications.slack_webhook import send_slack_webhook_notification @@ -43,7 +43,7 @@ Example Code: text="The task {{ ti.task_id }} failed", ) - with DAG( + with Dag( dag_id="mydag", schedule="@once", start_date=datetime(2023, 1, 1, tzinfo=timezone.utc), diff --git a/providers/smtp/docs/connections/smtp.rst b/providers/smtp/docs/connections/smtp.rst index 153eb46d9e18a..68621a3aa7e20 100644 --- a/providers/smtp/docs/connections/smtp.rst +++ b/providers/smtp/docs/connections/smtp.rst @@ -239,7 +239,7 @@ connection via **CLI**: *and* supply a ``smtp_conn_id``, the hook's connection settings take precedence and the global ``[smtp]`` options may be ignored. -Using ``SmtpHook`` in a DAG +Using ``SmtpHook`` in a Dag ^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python @@ -247,7 +247,7 @@ Using ``SmtpHook`` in a DAG from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.operators.python import PythonOperator from airflow.providers.smtp.hooks.smtp import SmtpHook @@ -261,7 +261,7 @@ Using ``SmtpHook`` in a DAG ) - with DAG( + with Dag( dag_id="test_gmail_oauth2", start_date=datetime(2025, 7, 1), schedule=None, diff --git a/providers/smtp/docs/notifications/smtp_notifier_howto_guide.rst b/providers/smtp/docs/notifications/smtp_notifier_howto_guide.rst index e47f9e340c93b..1c1f5e8705d2d 100644 --- a/providers/smtp/docs/notifications/smtp_notifier_howto_guide.rst +++ b/providers/smtp/docs/notifications/smtp_notifier_howto_guide.rst @@ -21,7 +21,7 @@ How-to Guide for SMTP notifications Introduction ------------ The SMTP notifier (:class:`airflow.providers.smtp.notifications.smtp.SmtpNotifier`) allows users to send -messages to SMTP servers using the various ``on_*_callbacks`` at both the DAG level and Task level. +messages to SMTP servers using the various ``on_*_callbacks`` at both the Dag level and Task level. Example Code: @@ -30,11 +30,11 @@ Example Code: .. 
code-block:: python from datetime import datetime - from airflow import DAG + from airflow import Dag from airflow.providers.standard.operators.bash import BashOperator from airflow.providers.smtp.notifications.smtp import send_smtp_notification - with DAG( + with Dag( dag_id="smtp_notifier", schedule=None, start_date=datetime(2023, 1, 1), diff --git a/providers/snowflake/docs/index.rst b/providers/snowflake/docs/index.rst index 7a6dba068e60b..85325660b125a 100644 --- a/providers/snowflake/docs/index.rst +++ b/providers/snowflake/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/sqlite/docs/index.rst b/providers/sqlite/docs/index.rst index 0be09bbf8fda1..f8b3225942b4f 100644 --- a/providers/sqlite/docs/index.rst +++ b/providers/sqlite/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/standard/docs/index.rst b/providers/standard/docs/index.rst index f6ff128a94094..45094723315f3 100644 --- a/providers/standard/docs/index.rst +++ b/providers/standard/docs/index.rst @@ -43,7 +43,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs <_api/airflow/providers/standard/example_dags/index> + Example Dags <_api/airflow/providers/standard/example_dags/index> PyPI Repository Installing from sources Python API <_api/airflow/providers/standard/index> diff --git a/providers/standard/docs/operators/bash.rst b/providers/standard/docs/operators/bash.rst index 2026752478357..3fc97028f351e 100644 --- a/providers/standard/docs/operators/bash.rst +++ b/providers/standard/docs/operators/bash.rst @@ -246,7 +246,7 @@ into a temporary file. By default, the file is placed in a temporary directory To execute a bash script, place it in a location relative to the directory containing -the DAG file. So if your DAG file is in ``/usr/local/airflow/dags/test_dag.py``, you can +the Dag file. So if your Dag file is in ``/usr/local/airflow/dags/test_dag.py``, you can move your ``test.sh`` file to any location under ``/usr/local/airflow/dags/`` (Example: ``/usr/local/airflow/dags/scripts/test.sh``) and pass the relative path to ``bash_command`` as shown below: @@ -280,7 +280,7 @@ in files composed in different languages, and general flexibility in structuring pipelines. It is also possible to define your ``template_searchpath`` as pointing to any folder -locations in the DAG constructor call. +locations in the Dag constructor call. .. tab-set:: @@ -302,7 +302,7 @@ locations in the DAG constructor call. .. code-block:: python :emphasize-lines: 1 - with DAG("example_bash_dag", ..., template_searchpath="/opt/scripts"): + with Dag("example_bash_dag", ..., template_searchpath="/opt/scripts"): t2 = BashOperator( task_id="bash_example", bash_command="test.sh ", diff --git a/providers/standard/docs/operators/datetime.rst b/providers/standard/docs/operators/datetime.rst index 972c69b398c6f..4e780e3a8a7f1 100644 --- a/providers/standard/docs/operators/datetime.rst +++ b/providers/standard/docs/operators/datetime.rst @@ -24,16 +24,16 @@ Use the :class:`~airflow.providers.standard.operators.datetime.BranchDateTimeOpe depending on whether the time falls into the range given by two target arguments, This operator has two modes. 
First mode is to use current time (machine clock time at the -moment the DAG is executed), and the second mode is to use the ``logical_date`` of the DAG run it is run +moment the Dag is executed), and the second mode is to use the ``logical_date`` of the Dag run it is run with. Usage with current time ----------------------- -The usages above might be useful in certain situations - for example when DAG is used to perform cleanups -and maintenance and is not really supposed to be used for any DAGs that are supposed to be back-filled, -because the "current time" make back-filling non-idempotent, its result depend on the time when the DAG +The usages above might be useful in certain situations - for example when a Dag is used to perform cleanups +and maintenance and is not really supposed to be used for any Dags that are supposed to be back-filled, +because the "current time" makes back-filling non-idempotent; its result depends on the time when the Dag actually was run. It's also slightly non-deterministic potentially even if it is run on schedule. It can take some time between when the DAGRun was scheduled and executed and it might mean that even if the DAGRun was scheduled properly, the actual time used for branching decision will be different than the @@ -62,8 +62,8 @@ will raise an exception. Usage with logical date ----------------------- -The usage is much more "data range" friendly. The ``logical_date`` does not change when the DAG is re-run and -it is not affected by execution delays, so this approach is suitable for idempotent DAG runs that might be +The usage is much more "data range" friendly. The ``logical_date`` does not change when the Dag is re-run and +it is not affected by execution delays, so this approach is suitable for idempotent Dag runs that might be back-filled. .. exampleinclude:: /../src/airflow/providers/standard/example_dags/example_branch_datetime_operator.py diff --git a/providers/standard/docs/operators/python.rst b/providers/standard/docs/operators/python.rst index a6247db1ea4c4..bc8069e0a280f 100644 --- a/providers/standard/docs/operators/python.rst +++ b/providers/standard/docs/operators/python.rst @@ -165,7 +165,7 @@ If you want the context related to datetime objects like ``data_interval_start`` .. important:: - The Python function body defined to be executed is cut out of the DAG into a temporary file w/o surrounding code. + The Python function body defined to be executed is cut out of the Dag into a temporary file w/o surrounding code.
If you want to pass variables into the classic :class:`~airflow.providers.standard.operators.python.PythonVirtualenvOperator` use @@ -194,7 +194,7 @@ pip configuration as described in `pip config + Example Dags PyPI Repository Installing from sources diff --git a/providers/telegram/docs/index.rst b/providers/telegram/docs/index.rst index 8720d1b430d57..6085d68a51442 100644 --- a/providers/telegram/docs/index.rst +++ b/providers/telegram/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/teradata/docs/operators/azure_blob_to_teradata.rst b/providers/teradata/docs/operators/azure_blob_to_teradata.rst index fc977efea1b8a..501cb5cb6f14b 100644 --- a/providers/teradata/docs/operators/azure_blob_to_teradata.rst +++ b/providers/teradata/docs/operators/azure_blob_to_teradata.rst @@ -123,10 +123,10 @@ to teradata table is as follows: :start-after: [START azure_blob_to_teradata_transfer_operator_howto_guide_transfer_data_blob_to_teradata_parquet] :end-before: [END azure_blob_to_teradata_transfer_operator_howto_guide_transfer_data_blob_to_teradata_parquet] -The complete ``AzureBlobStorageToTeradataOperator`` Operator DAG +The complete ``AzureBlobStorageToTeradataOperator`` Operator Dag ---------------------------------------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../teradata/tests/system/teradata/example_azure_blob_to_teradata_transfer.py :language: python diff --git a/providers/teradata/docs/operators/bteq.rst b/providers/teradata/docs/operators/bteq.rst index 2424aa9855e1f..b3e61bbf5a6a6 100644 --- a/providers/teradata/docs/operators/bteq.rst +++ b/providers/teradata/docs/operators/bteq.rst @@ -224,7 +224,7 @@ The BteqOperator supports executing conditional logic within your BTEQ scripts. :start-after: [START bteq_operator_howto_guide_conditional_logic] :end-before: [END bteq_operator_howto_guide_conditional_logic] -Conditional execution enables more intelligent data pipelines that can adapt to different scenarios without requiring separate DAG branches. +Conditional execution enables more intelligent data pipelines that can adapt to different scenarios without requiring separate Dag branches. Error Handling in BTEQ Scripts @@ -253,10 +253,10 @@ When your workflow completes or requires cleanup, you can use the BteqOperator t :end-before: [END bteq_operator_howto_guide_drop_table] -The complete Teradata Operator DAG +The complete Teradata Operator Dag ---------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. 
exampleinclude:: /../../teradata/tests/system/teradata/example_bteq.py :language: python diff --git a/providers/teradata/docs/operators/s3_to_teradata.rst b/providers/teradata/docs/operators/s3_to_teradata.rst index 800fa12020e24..ac4783761bd4a 100644 --- a/providers/teradata/docs/operators/s3_to_teradata.rst +++ b/providers/teradata/docs/operators/s3_to_teradata.rst @@ -70,10 +70,10 @@ An example usage of the S3ToTeradataOperator to transfer PARQUET data format fro :start-after: [START s3_to_teradata_transfer_operator_howto_guide_transfer_data_s3_to_teradata_parquet] :end-before: [END s3_to_teradata_transfer_operator_howto_guide_transfer_data_s3_to_teradata_parquet] -The complete ``S3ToTeradataOperator`` Operator DAG +The complete ``S3ToTeradataOperator`` Operator Dag -------------------------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../teradata/tests/system/teradata/example_s3_to_teradata_transfer.py :language: python diff --git a/providers/teradata/docs/operators/teradata.rst b/providers/teradata/docs/operators/teradata.rst index f19475d2cc7f8..f44b77ca7f886 100644 --- a/providers/teradata/docs/operators/teradata.rst +++ b/providers/teradata/docs/operators/teradata.rst @@ -104,10 +104,10 @@ We can then create a TeradataOperator task that drops the ``Users`` table. :start-after: [START teradata_operator_howto_guide_drop_users_table] :end-before: [END teradata_operator_howto_guide_drop_users_table] -The complete Teradata Operator DAG +The complete Teradata Operator Dag ---------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../teradata/tests/system/teradata/example_teradata.py :language: python @@ -218,10 +218,10 @@ with parameters passed positionally as a list: :start-after: [START howto_teradata_stored_procedure_operator_with_in_out_dynamic_result] :end-before: [END howto_teradata_stored_procedure_operator_with_in_out_dynamic_result] -The complete TeradataStoredProcedureOperator DAG +The complete TeradataStoredProcedureOperator Dag ------------------------------------------------ -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../teradata/tests/system/teradata/example_teradata_call_sp.py :language: python diff --git a/providers/teradata/docs/operators/teradata_to_teradata.rst b/providers/teradata/docs/operators/teradata_to_teradata.rst index 93770ccce9fb5..6a0bb44cda4c2 100644 --- a/providers/teradata/docs/operators/teradata_to_teradata.rst +++ b/providers/teradata/docs/operators/teradata_to_teradata.rst @@ -37,10 +37,10 @@ An example usage of the TeradataToTeradataOperator is as follows: :start-after: [START teradata_to_teradata_transfer_operator_howto_guide_transfer_data] :end-before: [END teradata_to_teradata_transfer_operator_howto_guide_transfer_data] -The complete TeradataToTeradata Transfer Operator DAG +The complete TeradataToTeradata Transfer Operator Dag ----------------------------------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. 
exampleinclude:: /../../teradata/tests/system/teradata/example_teradata.py :language: python diff --git a/providers/trino/docs/index.rst b/providers/trino/docs/index.rst index 39a5ff3363877..c909ee8afb5f9 100644 --- a/providers/trino/docs/index.rst +++ b/providers/trino/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/vertica/docs/index.rst b/providers/vertica/docs/index.rst index 33f649f998402..8608f2320aeb5 100644 --- a/providers/vertica/docs/index.rst +++ b/providers/vertica/docs/index.rst @@ -57,7 +57,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/yandex/docs/index.rst b/providers/yandex/docs/index.rst index 2b93c8c06c648..c8f1eea48cb25 100644 --- a/providers/yandex/docs/index.rst +++ b/providers/yandex/docs/index.rst @@ -58,7 +58,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/yandex/docs/operators/dataproc.rst b/providers/yandex/docs/operators/dataproc.rst index b7188e2ea52f6..64d99761f8366 100644 --- a/providers/yandex/docs/operators/dataproc.rst +++ b/providers/yandex/docs/operators/dataproc.rst @@ -34,4 +34,4 @@ that can be integrated with Apache Hadoop and other storage systems. Using the operators ^^^^^^^^^^^^^^^^^^^ To learn how to use Data Proc operators, -see `example DAGs `_. +see `example Dags `_. diff --git a/providers/yandex/docs/operators/yq.rst b/providers/yandex/docs/operators/yq.rst index 08a90bb817220..be3fca38a01da 100644 --- a/providers/yandex/docs/operators/yq.rst +++ b/providers/yandex/docs/operators/yq.rst @@ -25,4 +25,4 @@ Yandex Query Operators Using the operators ^^^^^^^^^^^^^^^^^^^ To learn how to use Yandex Query operator, -see `example DAG `__. +see `example Dag `__. diff --git a/providers/ydb/docs/index.rst b/providers/ydb/docs/index.rst index 605748a51078e..81b7c4db1b551 100644 --- a/providers/ydb/docs/index.rst +++ b/providers/ydb/docs/index.rst @@ -56,7 +56,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources diff --git a/providers/ydb/docs/operators/ydb_operator_howto_guide.rst b/providers/ydb/docs/operators/ydb_operator_howto_guide.rst index b7ef3bef4653c..24b2199aaa520 100644 --- a/providers/ydb/docs/operators/ydb_operator_howto_guide.rst +++ b/providers/ydb/docs/operators/ydb_operator_howto_guide.rst @@ -58,7 +58,7 @@ The code snippets below are based on Airflow-2.0 Dumping SQL statements into your operator isn't quite appealing and will create maintainability pains somewhere down to the road. To prevent this, Airflow offers an elegant solution. This is how it works: you simply create -a directory inside the DAG folder called ``sql`` and then put all the SQL files containing your SQL queries inside it. +a directory inside the Dag folder called ``sql`` and then put all the SQL files containing your SQL queries inside it. Your ``dags/sql/pet_schema.sql`` should like this: @@ -74,7 +74,7 @@ Your ``dags/sql/pet_schema.sql`` should like this: PRIMARY KEY (pet_id) ); -Now let's refactor ``create_pet_table`` in our DAG: +Now let's refactor ``create_pet_table`` in our Dag: .. code-block:: python @@ -162,10 +162,10 @@ by creating a sql file. 
) -The complete YDB Operator DAG +The complete YDB Operator Dag ----------------------------- -When we put everything together, our DAG should look like this: +When we put everything together, our Dag should look like this: .. exampleinclude:: /../../ydb/tests/system/ydb/example_ydb.py :language: python diff --git a/providers/zendesk/docs/index.rst b/providers/zendesk/docs/index.rst index 6ce14aaf946df..286787e5f1f83 100644 --- a/providers/zendesk/docs/index.rst +++ b/providers/zendesk/docs/index.rst @@ -55,7 +55,7 @@ :maxdepth: 1 :caption: Resources - Example DAGs + Example Dags PyPI Repository Installing from sources
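As a closing illustration of the ``dags/sql`` layout described in the YDB how-to hunk above, here is a minimal sketch of a Dag that points ``YDBExecuteQueryOperator`` at ``sql/pet_schema.sql`` instead of inlining the SQL. This is not the provider's ``example_ydb.py``; the Dag id is hypothetical, and connection or DDL-related arguments that a real deployment may need are omitted:

.. code-block:: python

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.ydb.operators.ydb import YDBExecuteQueryOperator

    with DAG(
        dag_id="example_ydb_sql_files",  # hypothetical Dag id
        start_date=datetime(2024, 1, 1),
        schedule=None,
        catchup=False,
    ):
        # ``sql`` is a templated field, so the path is resolved relative to the
        # folder containing this Dag file and the file contents are rendered
        # before execution, as described in the how-to guide.
        create_pet_table = YDBExecuteQueryOperator(
            task_id="create_pet_table",
            sql="sql/pet_schema.sql",
        )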